  • Akamine, S., Ghaleb, E., Rasenberg, M., Fernandez, R., Meyer, A. S., & Özyürek, A. (2024). Speakers align both their gestures and words not only to establish but also to maintain reference to create shared labels for novel objects in interaction. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2435-2442).

    Abstract

    When we communicate with others, we often repeat aspects of each other's communicative behavior such as sentence structures and words. Such behavioral alignment has been mostly studied for speech or text. Yet, language use is mostly multimodal, flexibly using speech and gestures to convey messages. Here, we explore the use of alignment in speech (words) and co-speech gestures (iconic gestures) in a referential communication task aimed at finding labels for novel objects in interaction. In particular, we investigate how people flexibly use lexical and gestural alignment to create shared labels for novel objects and whether alignment in speech and gesture are related over time. The present study shows that interlocutors establish shared labels multimodally, and alignment in words and iconic gestures are used throughout the interaction. We also show that the amount of lexical alignment positively associates with the amount of gestural alignment over time, suggesting a close relationship between alignment in the vocal and manual modalities.

    Additional information

    link to eScholarship
  • Alvarez van Tussenbroek, I., Knörnschild, M., Nagy, M., Ten Cate, C. J., & Vernes, S. C. (2024). Morphological diversity in the brains of 12 Neotropical bat species. Acta Chiropterologica, 25(2), 323-338. doi:10.3161/15081109ACC2023.25.2.011.

    Abstract

    Comparative neurobiology allows us to investigate relationships between phylogeny and the brain and understand the evolution of traits. Bats constitute an attractive group of mammalian species for comparative studies, given their large diversity in behavioural phenotypes, brain morphology, and array of specialised traits. Currently, the order Chiroptera contains over 1,450 species within 21 families and spans ca. 65 million years of evolution. To date, 194 Neotropical bat species (ca. 13% of the total number of species around the world) have been recorded in Central America. This study includes qualitative and quantitative macromorphological descriptions of the brains of 12 species from six families of Neotropical bats. These analyses, which include histological neuronal staining of two species from different families (Phyllostomus hastatus and Saccopteryx bilineata), show substantial diversity in brain macromorphology including brain shape and size, exposure of mesencephalic regions, and cortical and cerebellar fissure depth. Brain macromorphology can in part be explained by phylogeny, as species within the same family are more similar to each other. However, macromorphology cannot be explained by evolutionary time alone, as brain differences between some phyllostomid bats are larger than between species from the family Emballonuridae, despite comparable divergence distances in the phylogenetic tree. This suggests that faster evolutionary changes in brain morphology occurred in phyllostomids — although a larger number of species needs to be studied to confirm this. Our results show the rich diversity in brain morphology that bats provide for comparative and evolutionary studies.
  • Alvarez van Tussenbroek, I., Knörnschild, M., Nagy, M., O'Toole, B. P., Formenti, G., Philge, P., Zhang, N., Abueg, L., Brajuka, N., Jarvis, E., Volkert, T. L., Gray, J. L., Pieri, M., Mai, M., Teeling, E. C., Vernes, S. C., The Bat Biology Foundation, & The Bat1K Consortium (2024). The genome sequence of Rhynchonycteris naso, Peters, 1867 (Chiroptera, Emballonuridae, Rhynchonycteris). Wellcome Open Research, 9: 361. doi:10.12688/wellcomeopenres.19959.1.

    Abstract

    We present a reference genome assembly from an individual male Rhynchonycteris naso (Chordata; Mammalia; Chiroptera; Emballonuridae). The genome sequence is 2.46 Gb in span. The majority of the assembly is scaffolded into 22 chromosomal pseudomolecules, with the Y sex chromosome assembled.
  • Alvarez van Tussenbroek, I. (2024). Neotropical bat species: An exploration of brain morphology and genetics. PhD Thesis, Leiden University, Leiden.
  • Amelink, J., Postema, M., Kong, X., Schijven, D., Carrion Castillo, A., Soheili-Nezhad, S., Sha, Z., Molz, B., Joliot, M., Fisher, S. E., & Francks, C. (2024). Imaging genetics of language network functional connectivity reveals links with language-related abilities, dyslexia and handedness. Communications Biology, 7: 1209. doi:10.1038/s42003-024-06890-3.

    Abstract

    Language is supported by a distributed network of brain regions with a particular contribution from the left hemisphere. A multi-level understanding of this network requires studying the genetic architecture of its functional connectivity and hemispheric asymmetry. We used resting state functional imaging data from 29,681 participants from the UK Biobank to measure functional connectivity between 18 left-hemisphere regions implicated in multimodal sentence-level processing, as well as their homotopic regions in the right-hemisphere, and interhemispheric connections. Multivariate genome-wide association analysis of this total network, based on common genetic variants (with population frequencies above 1%), identified 14 loci associated with network functional connectivity. Three of these loci were also associated with hemispheric differences of intrahemispheric connectivity. Polygenic dispositions to lower language-related abilities, dyslexia and left-handedness were associated with generally reduced leftward asymmetry of functional connectivity, but with some trait- and connection-specific exceptions. Exome-wide association analysis based on rare, protein-altering variants (frequencies < 1%) suggested 7 additional genes. These findings shed new light on the genetic contributions to language network connectivity and its asymmetry based on both common and rare genetic variants, and reveal genetic links to language-related traits and hemispheric dominance for hand preference.
  • Anijs, M. (2024). Networks within networks: Probing the neuronal and molecular underpinnings of language-related disorders using human cell models. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bonandrini, R., Gornetti, E., & Paulesu, E. (2024). A meta-analytical account of the functional lateralization of the reading network. Cortex, 177, 363-384. doi:10.1016/j.cortex.2024.05.015.

    Abstract

    The observation that the neural correlates of reading are left-lateralized is ubiquitous in the cognitive neuroscience and neuropsychological literature. Still, reading is served by a constellation of neural units, and the extent to which these units are consistently left-lateralized is unclear. In this regard, the functional lateralization of the fusiform gyrus is of particular interest, by virtue of its hypothesized role as a “visual word form area”. A quantitative Activation Likelihood Estimation meta-analysis was conducted on activation foci from 35 experiments investigating silent reading, and both a whole-brain and a Bayesian ROI-based approach were used to assess the lateralization of the data submitted to meta-analysis. Perirolandic areas showed the highest level of left-lateralization, the fusiform cortex and the parietal cortex exhibited only a moderate pattern of left-lateralization, while the occipital and insular cortices and the cerebellum showed the lowest lateralization observed. The relatively limited functional lateralization of the fusiform gyrus was further explored in a regression analysis on the lateralization profile of each study. The functional lateralization of the fusiform gyrus during reading was positively associated with the lateralization of the precentral and inferior occipital gyri and negatively associated with the lateralization of the triangular portion of the inferior frontal gyrus and of the temporal pole. Overall, the present data highlight how lateralization patterns differ within the reading network. Furthermore, the present data highlight how the functional lateralization of the fusiform gyrus during reading is related to the degree of functional lateralization of other language brain areas.
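
    A common way to quantify the kind of asymmetry reported above is a laterality index, LI = (L - R) / (L + R), which runs from -1 (fully right-lateralized) to +1 (fully left-lateralized). The following Python sketch only illustrates the formula on invented per-ROI activation values; it is not the study's ALE or Bayesian pipeline.

        def laterality_index(left: float, right: float) -> float:
            """Normalized left-right asymmetry: +1 = fully left-lateralized."""
            if left + right == 0:
                raise ValueError("activation values sum to zero; LI undefined")
            return (left - right) / (left + right)

        # Hypothetical activation values per region of interest (left, right).
        rois = {
            "precentral": (0.82, 0.31),
            "fusiform": (0.64, 0.48),
            "cerebellum": (0.41, 0.44),
        }
        for name, (l, r) in rois.items():
            print(f"{name}: LI = {laterality_index(l, r):+.2f}")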
  • Çetinçelik, M. (2024). A look into language: The role of visual cues in early language acquisition in the infant brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Çetinçelik, M., Jordan‐Barros, A., Rowland, C. F., & Snijders, T. M. (2024). The effect of visual speech cues on neural tracking of speech in 10‐month‐old infants. European Journal of Neuroscience, 60(6), 5381-5399. doi:10.1111/ejn.16492.

    Abstract

    While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1–1.75 and 2.5–3.5 Hz respectively in our stimuli). First, overall, SBC was compared to surrogate data, and then, differences in SBC in the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.

    Additional information

    supplementary materials
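
    Speech-brain coherence of the kind used here quantifies how consistently the EEG phase-locks to the speech amplitude envelope at a given frequency. Below is a minimal sketch using scipy on simulated signals; the study's actual pipeline, with infant EEG preprocessing and surrogate-data statistics, is more involved, and the sampling rate and signals are invented.

        import numpy as np
        from scipy.signal import coherence, hilbert

        fs = 250  # Hz, assumed common sampling rate for envelope and EEG
        rng = np.random.default_rng(0)

        # Stand-ins for real data: a speech amplitude envelope and one EEG
        # channel that partly tracks it.
        speech = rng.standard_normal(fs * 60)
        envelope = np.abs(hilbert(speech))
        eeg = 0.3 * envelope + rng.standard_normal(envelope.size)

        # Magnitude-squared coherence between envelope and EEG.
        freqs, coh = coherence(envelope, eeg, fs=fs, nperseg=fs * 4)

        # Average coherence in the stress and syllabic bands from the study.
        for label, (lo, hi) in {"stress (1-1.75 Hz)": (1.0, 1.75),
                                "syllabic (2.5-3.5 Hz)": (2.5, 3.5)}.items():
            band = (freqs >= lo) & (freqs <= hi)
            print(label, coh[band].mean())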
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2024). Does the speaker’s eye gaze facilitate infants’ word segmentation from continuous speech? An ERP study. Developmental Science, 27(2): e13436. doi:10.1111/desc.13436.

    Abstract

    The environment in which infants learn language is multimodal and rich with social cues. Yet, the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants’ word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio-only. Infants’ familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words.
  • Cheung, C.-Y., Kirby, S., & Raviv, L. (2024). The role of gender, social bias and personality traits in shaping linguistic accommodation: An experimental approach. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 80-82). Nijmegen: The Evolution of Language Conferences. doi:10.17617/2.3587960.
  • Collins, J. (2024). Linguistic areas and prehistoric migrations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ding, R., Ten Oever, S., & Martin, A. E. (2024). Delta-band activity underlies referential meaning representation during pronoun resolution. Journal of Cognitive Neuroscience, 36(7), 1472-1492. doi:10.1162/jocn_a_02163.

    Abstract

    Human language offers a variety of ways to create meaning, one of which is referring to entities, objects, or events in the world. One such meaning maker is understanding to whom or to what a pronoun in a discourse refers. To understand a pronoun, the brain must access matching entities or concepts that have been encoded in memory from previous linguistic context. Models of language processing propose that internally stored linguistic concepts, accessed via exogenous cues such as phonological input of a word, are represented as (a)synchronous activities across a population of neurons active at specific frequency bands. Converging evidence suggests that delta band activity (1–3 Hz) is involved in temporal and representational integration during sentence processing. Moreover, recent advances in the neurobiology of memory suggest that recollection engages neural dynamics similar to those which occurred during memory encoding. Integrating these two lines of research, we tested the hypothesis that the neural dynamic patterns underlying referential meaning representation, especially in the delta frequency range, would be reinstated during pronoun resolution. By leveraging neural decoding techniques (i.e., representational similarity analysis) on a magnetoencephalography (MEG) dataset acquired during a naturalistic story-listening task, we provide evidence that delta-band activity underlies referential meaning representation. Our findings suggest that, during spoken language comprehension, endogenous linguistic representations such as referential concepts may be proactively retrieved and represented via activation of their underlying dynamic neural patterns.
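
    Representational similarity analysis, as used here, compares the geometry of neural patterns across conditions rather than the patterns themselves. A minimal sketch on simulated delta-band sensor patterns (toy sizes and data; not the authors' MEG pipeline):

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        n_items, n_sensors = 8, 30  # e.g., 8 referents, 30 sensors (invented)

        # Delta-band sensor patterns at encoding (first mention of a referent)
        # and at retrieval (the pronoun), simulated so that retrieval partially
        # reinstates the encoding pattern.
        encoding = rng.standard_normal((n_items, n_sensors))
        retrieval = encoding + 0.8 * rng.standard_normal((n_items, n_sensors))

        # Representational dissimilarity matrices (condensed): 1 - Pearson r.
        rdm_enc = pdist(encoding, metric="correlation")
        rdm_ret = pdist(retrieval, metric="correlation")

        # Second-order similarity: do the two representational geometries match?
        rho, p = spearmanr(rdm_enc, rdm_ret)
        print(f"reinstatement: Spearman rho = {rho:.2f}, p = {p:.3f}")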
  • Dona, L., & Schouwstra, M. (2024). Balancing regularization and variation: The roles of priming and motivatedness. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 130-133). Nijmegen: The Evolution of Language Conferences.
  • Duengen, D., Polotzek, M., O'Sullivan, E., & Ravignani, A. (2024). Anecdotal observations of socially learned vocalizations in harbor seals. Animal Behavior and Cognition, 11, 393-403. doi:10.26451/abc.11.03.04.2024.

    Abstract

    Harbor seals (Phoca vitulina) are more solitary than many other pinnipeds. Yet, they are capable of vocal learning, a form of social learning. Most extant literature examines social animals when investigating social learning, despite sociality not being a prerequisite. Here, we report two formerly silent harbor seals who initiated vocalizations, after having repeatedly observed a conspecific receiving food rewards for vocalizing. Our observations suggest both social and vocal learning in a group of captive harbor seals, a species that lives semi-solitarily in the wild. We propose that, in this case, social learning acted as a shortcut to acquiring food rewards compared to the comparatively costly asocial learning.
  • Düngen, D., Jadoul, Y., & Ravignani, A. (2024). Vocal usage learning and vocal comprehension learning in harbor seals. BMC Neuroscience, 25: 48. doi:10.1186/s12868-024-00899-4.

    Abstract

    Background

    Which mammals show vocal learning abilities, e.g., can learn new sounds, or learn to use sounds in new contexts? Vocal usage and comprehension learning are submodules of vocal learning. Specifically, vocal usage learning is the ability to learn to use a vocalization in a new context; vocal comprehension learning is the ability to comprehend a vocalization in a new context. Among mammals, harbor seals (Phoca vitulina) are good candidates to investigate vocal learning. Here, we test whether harbor seals are capable of vocal usage and comprehension learning.

    Results

    We trained two harbor seals to (i) switch contexts from a visual to an auditory cue. In particular, the seals first produced two vocalization types in response to two hand signs; they then transitioned to producing these two vocalization types upon the presentation of two distinct sets of playbacks of their own vocalizations. We then (ii) exposed the seals to a combination of trained and novel vocalization stimuli. In a final experiment, (iii) we broadcasted only novel vocalizations of the two vocalization types to test whether seals could generalize from the trained set of stimuli to only novel items of a given vocal category. Both seals learned all tasks and took ≤ 16 sessions to succeed across all experiments. In particular, the seals showed contextual learning through switching the context from former visual to novel auditory cues, vocal matching and generalization. Finally, by responding to the played-back vocalizations with distinct vocalizations, the animals showed vocal comprehension learning.

    Conclusions

    It has been suggested that harbor seals are vocal learners; however, to date, these observations have not been confirmed in controlled experiments. Here, through three experiments, we showed that harbor seals are capable of both vocal usage and comprehension learning.
  • Eekhof, L. S. (2024). Reading the mind: The relationship between social cognition and narrative processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Eekhof, L. S., & Mar, R. A. (2024). Does reading about fictional minds make us more curious about real ones? Language and Cognition, 16(1), 176-196. doi:10.1017/langcog.2023.30.

    Abstract

    Although there is a large body of research assessing whether exposure to narratives boosts social cognition immediately afterward, not much research has investigated the underlying mechanism of this putative effect. This experiment investigates the possibility that reading a narrative increases social curiosity directly afterward, which might explain the short-term boosts in social cognition reported in some previous studies. We developed a novel measure of state social curiosity and collected data from participants (N = 222) who were randomly assigned to read an excerpt of narrative fiction or expository nonfiction. Contrary to our expectations, we found that those who read a narrative exhibited less social curiosity afterward than those who read an expository text. This result was not moderated by trait social curiosity. An exploratory analysis uncovered that the degree to which texts present readers with social targets predicted less social curiosity. Our experiment demonstrates that reading narratives, or possibly texts with social content in general, may engage and fatigue social-cognitive abilities, causing a temporary decrease in social curiosity. Such texts might also temporarily satisfy the need for social connection, temporarily reducing social curiosity. Both accounts are in line with theories describing how narratives result in better social cognition over the long term.
  • He, J., Frances, C., Creemers, A., & Brehm, L. (2024). Effects of irrelevant unintelligible and intelligible background speech on spoken language production. Quarterly Journal of Experimental Psychology, 77(8), 1745-1769. doi:10.1177/17470218231219971.

    Abstract

    Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top–down attentional engagement.
  • Goltermann*, O., Alagöz*, G., Molz, B., & Fisher, S. E. (2024). Neuroimaging genomics as a window into the evolution of human sulcal organization. Cerebral Cortex, 34(3): bhae078. doi:10.1093/cercor/bhae078.

    Abstract

    * Ole Goltermann and Gökberk Alagöz contributed equally.
    Primate brain evolution has involved prominent expansions of the cerebral cortex, with largest effects observed in the human lineage. Such expansions were accompanied by fine-grained anatomical alterations, including increased cortical folding. However, the molecular bases of evolutionary alterations in human sulcal organization are not yet well understood. Here, we integrated data from recently completed large-scale neuroimaging genetic analyses with annotations of the human genome relevant to various periods and events in our evolutionary history. These analyses identified single-nucleotide polymorphism (SNP) heritability enrichments in fetal brain human-gained enhancer (HGE) elements for a number of sulcal structures, including the central sulcus, which is implicated in human hand dexterity. We zeroed in on a genomic region that harbors DNA variants associated with left central sulcus shape, an HGE element, and genetic loci involved in neurogenesis including ZIC4, to illustrate the value of this approach for probing the complex factors contributing to human sulcal evolution.

    Additional information

    supplementary data
    link to preprint
  • Goncharova, M. V., Jadoul, Y., Reichmuth, C., Fitch, W. T., & Ravignani, A. (2024). Vocal tract dynamics shape the formant structure of conditioned vocalizations in a harbor seal. Annals of the New York Academy of Sciences, 1538(1), 107-116. doi:10.1111/nyas.15189.

    Abstract

    Formants, or resonance frequencies of the upper vocal tract, are an essential part of acoustic communication. Articulatory gestures—such as jaw, tongue, lip, and soft palate movements—shape formant structure in human vocalizations, but little is known about how nonhuman mammals use those gestures to modify formant frequencies. Here, we report a case study with an adult male harbor seal trained to produce an arbitrary vocalization composed of multiple repetitions of the sound wa. We analyzed jaw movements frame-by-frame and matched them to the tracked formant modulation in the corresponding vocalizations. We found that the jaw opening angle was strongly correlated with the first formant (F1) and, to a lesser degree, with the second formant (F2). F2 variation was better explained by the jaw opening angle when the seal was lying on his back rather than on his belly, which might derive from soft tissue displacement due to gravity. These results show that harbor seals share some common articulatory traits with humans, with F1 depending more on jaw position than F2. We propose in vivo investigations of seals to further test the role of the tongue in formant modulation in mammalian sound production.
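
    The core analysis links a tracked articulator to tracked formants. Here is a sketch of that correlation step, assuming the Parselmouth interface to Praat for formant tracking; the file names, frame rate, and jaw-angle data are hypothetical.

        import numpy as np
        import parselmouth  # Python interface to Praat (assumed available)
        from scipy.stats import pearsonr

        # Hypothetical inputs: a recording of the call and per-frame jaw
        # opening angles (degrees) traced from video at 25 fps.
        snd = parselmouth.Sound("seal_wa_call.wav")
        formants = snd.to_formant_burg(time_step=0.01, maximum_formant=4000)

        fps = 25.0
        jaw_angles = np.loadtxt("jaw_angles.txt")  # one value per video frame
        frame_times = np.arange(jaw_angles.size) / fps

        # Sample F1 and F2 at each video frame time.
        f1 = np.array([formants.get_value_at_time(1, t) for t in frame_times])
        f2 = np.array([formants.get_value_at_time(2, t) for t in frame_times])

        # Correlate jaw opening with each formant, skipping unvoiced (NaN) frames.
        for name, f in (("F1", f1), ("F2", f2)):
            ok = ~np.isnan(f)
            r, p = pearsonr(jaw_angles[ok], f[ok])
            print(f"jaw angle vs {name}: r = {r:.2f}, p = {p:.3f}")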
  • De Hoyos, L., Barendse, M. T., Schlag, F., Van Donkelaar, M. M. J., Verhoef, E., Shapland, C. Y., Klassmann, A., Buitelaar, J., Verhulst, B., Fisher, S. E., Rai, D., & St Pourcain, B. (2024). Structural models of genome-wide covariance identify multiple common dimensions in autism. Nature Communications, 15: 1770. doi:10.1038/s41467-024-46128-8.

    Abstract

    Common genetic variation has been associated with multiple symptoms in Autism Spectrum Disorder (ASD). However, our knowledge of shared genetic factor structures contributing to this highly heterogeneous neurodevelopmental condition is limited. Here, we developed a structural equation modelling framework to directly model genome-wide covariance across core and non-core ASD phenotypes, studying autistic individuals of European descent using a case-only design. We identified three independent genetic factors most strongly linked to language/cognition, behaviour and motor development, respectively, when studying a population-representative sample (N=5,331). These analyses revealed novel associations. For example, developmental delay in acquiring personal-social skills was inversely related to language, while developmental motor delay was linked to self-injurious behaviour. We largely confirmed the three-factorial structure in independent ASD-simplex families (N=1,946), but uncovered simplex-specific genetic overlap between behaviour and language phenotypes. Thus, the common genetic architecture in ASD is multi-dimensional and contributes, in combination with ascertainment-specific patterns, to phenotypic heterogeneity.
  • Jansen, M. G., Zwiers, M. P., Marques, J. P., Chan, K.-S., Amelink, J., Altgassen, M., Oosterman, J. M., & Norris, D. G. (2024). The Advanced BRain Imaging on ageing and Memory (ABRIM) data collection: Study protocol and rationale. PLOS ONE, 19(6): e0306006. doi:10.1371/journal.pone.0306006.

    Abstract

    To understand the neurocognitive mechanisms that underlie heterogeneity in cognitive ageing, recent scientific efforts have led to a growing public availability of imaging cohort data. The Advanced BRain Imaging on ageing and Memory (ABRIM) project aims to add to these existing datasets by taking an adult lifespan approach to provide a cross-sectional, normative database with a particular focus on connectivity, myelinization and iron content of the brain in concurrence with cognitive functioning, mechanisms of reserve, and sleep-wake rhythms. ABRIM freely shares MRI and behavioural data from 295 participants aged 18–80 years, stratified by age decade and sex (median age 52, IQR 36–66, 53.20% females). The ABRIM MRI collection consists of both the raw and pre-processed structural and functional MRI data to facilitate data usage among both expert and non-expert users. The ABRIM behavioural collection includes measures of cognitive functioning (i.e., global cognition, processing speed, executive functions, and memory), proxy measures of cognitive reserve (e.g., educational attainment, verbal intelligence, and occupational complexity), and various self-reported questionnaires (e.g., on depressive symptoms, pain, and the use of memory strategies in daily life and during a memory task). In a sub-sample (n = 120), we recorded sleep-wake rhythms using an actigraphy device (Actiwatch 2, Philips Respironics) for a period of 7 consecutive days. Here, we provide an in-depth description of our study protocol, pre-processing pipelines, and data availability. ABRIM provides a cross-sectional database on healthy participants throughout the adult lifespan, including numerous parameters relevant to improve our understanding of cognitive ageing. Therefore, ABRIM enables researchers to model the advanced imaging parameters and cognitive topologies as a function of age, to identify the normal range of values of such parameters, and to further investigate the diverse mechanisms of reserve and resilience.
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2024). Morphosyntactic predictive processing in adult heritage speakers: Effects of cue availability and spoken and written language experience. Language, Cognition and Neuroscience, 39(1), 118-135. doi:10.1080/23273798.2023.2254424.

    Abstract

    We investigated prediction skills of adult heritage speakers and the role of written and spoken language experience on predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available) while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing.
  • Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2024). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language, 51(4), 876-902. doi:10.1017/S0305000922000642.

    Abstract

    Expressing Left-Right relations is challenging for speaking-children. Yet, this challenge was absent for signing-children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking-children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish-Sign-Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech-only. This pattern was more prominent for children than adults. However, signing-adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking- and signing-children benefit from iconic expressions in visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
  • Kocsis, K., Düngen, D., Jadoul, Y., & Ravignani, A. (2024). Harbour seals use rhythmic percussive signalling in interaction and display. Animal Behaviour, 207, 223-234. doi:10.1016/j.anbehav.2023.09.014.

    Abstract

    Multimodal rhythmic signalling abounds across animal taxa. Studying its mechanisms and functions can highlight adaptive components in highly complex rhythmic behaviours, like dance and music. Pinnipeds, such as the harbour seal, Phoca vitulina, are excellent comparative models to assess rhythmic capacities. Harbour seals engage in rhythmic percussive behaviours which, until now, have not been described in detail. In our study, eight zoo-housed harbour seals (two pups, two juveniles and four adults) were passively monitored by audio and video during their pupping/breeding season. All juvenile and adult animals performed percussive signalling with their fore flippers in agonistic conditions, both on land and in water. Flipper slap sequences produced on the ground or on the seals' bodies were often highly regular in their interval duration, that is, were quasi-isochronous, at a 200–600 beats/min pace. Three animals also showed significant lateralization in slapping. In contrast to slapping on land, display slapping in water, performed only by adult males, showed slower tempo by one order of magnitude, and a rather motivic temporal structure. Our work highlights that percussive communication is a significant part of harbour seals' behavioural repertoire. We hypothesize that its forms of rhythm production may reflect adaptive functions such as regulating internal states and advertising individual traits.
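
    Quasi-isochrony claims of this sort come down to the variability of inter-onset intervals. A minimal sketch with invented slap onsets (the study's annotations and statistics are richer):

        import numpy as np

        # Hypothetical slap onset times (s) from one agonistic sequence.
        onsets = np.array([0.00, 0.21, 0.43, 0.62, 0.84, 1.05, 1.27])

        iois = np.diff(onsets)               # inter-onset intervals
        tempo = 60.0 / iois.mean()           # beats per minute
        cv = iois.std(ddof=1) / iois.mean()  # coefficient of variation

        # nPVI: normalized pairwise variability of successive intervals;
        # values near 0 indicate a quasi-isochronous sequence.
        npvi = 100 * np.mean(2 * np.abs(np.diff(iois)) / (iois[1:] + iois[:-1]))

        print(f"tempo = {tempo:.0f} beats/min, CV = {cv:.2f}, nPVI = {npvi:.1f}")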
  • Koutamanis, E. (2024). Spreading the word: Cross-linguistic influence in the bilingual child's lexicon. PhD Thesis, Radboud University, Nijmegen.
  • Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2024). Cognate facilitation in single- and dual-language contexts in bilingual children’s word processing. Linguistic Approaches to Bilingualism, 14(4), 577-608. doi:10.1075/lab.23009.kou.

    Abstract

    We examined the extent to which cognate facilitation effects occurred in simultaneous bilingual children’s production and comprehension and how these were modulated by language dominance and language context. Bilingual Dutch-German children, ranging from Dutch-dominant to German-dominant, performed picture naming and auditory lexical decision tasks in single-language and dual-language contexts. Language context was manipulated with respect to the language of communication (with the experimenter and in instructional videos) and by means of proficiency tasks. Cognate facilitation effects emerged in both production and comprehension and interacted with both dominance and context. In a single-language context, stronger cognate facilitation effects were found for picture naming in children’s less dominant language, in line with previous studies on individual differences in lexical activation. In the dual-language context, this pattern was reversed, suggesting inhibition of the dominant language at the decision level. Similar effects were observed in lexical decision. These findings provide evidence for an integrated bilingual lexicon in simultaneous bilingual children and shed more light on the complex interplay between lexicon-internal and lexicon-external factors modulating the extent of lexical cross-linguistic influence more generally.
  • Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2024). Cross-linguistic influence in the simultaneous bilingual child's lexicon: An eye-tracking and primed picture selection study. Bilingualism: Language and Cognition, 27(3), 377-387. doi:10.1017/S136672892300055X.

    Abstract

    In a between-language lexical priming study, we examined to what extent the two languages in a simultaneous bilingual child's lexicon interact, while taking individual differences in language exposure into account. Primary-school-aged Dutch–Greek bilinguals performed a primed picture selection task combined with eye-tracking. They matched pictures to auditorily presented Dutch target words preceded by Greek prime words. Their reaction times and eye movements were recorded. We tested for effects of between-language phonological priming, translation priming, and phonological priming through translation. Priming effects emerged in reaction times and eye movements in all three conditions, at different stages of processing, and unaffected by language exposure. These results extend previous findings for bilingual toddlers and bilingual adults. Processing similarities between these populations indicate that, across different stages of development, bilinguals have an integrated lexicon that is accessed in a language-nonselective way and is susceptible to interactions within and between different types of lexical representation.
  • Mamus, E. (2024). Perceptual experience shapes how blind and sighted people express concepts in multimodal language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Mazzini, S., Yadnik, S., Timmers, I., Rubio-Gozalbo, E., & Jansma, B. M. (2024). Altered neural oscillations in classical galactosaemia during sentence production. Journal of Inherited Metabolic Disease, 47(4), 575-833. doi:10.1002/jimd.12740.

    Abstract

    Classical galactosaemia (CG) is a hereditary disease of galactose metabolism that, despite dietary treatment, is characterized by a wide range of cognitive deficits, including impaired language production. CG brain functioning has been studied with several neuroimaging techniques, which revealed both structural and functional atypicalities. In the present study, for the first time, we compared the oscillatory dynamics, especially the power spectrum and time–frequency representations (TFR), in the electroencephalography (EEG) of CG patients and healthy controls while they were performing a language production task. Twenty-one CG patients and 19 healthy controls described animated scenes, either in full sentences or in words, indicating two levels of complexity in syntactic planning. Based on previous work on the P300 event-related potential (ERP) and its relation with theta frequency, we hypothesized that the oscillatory activity of patients and controls would differ in theta power and TFR. With regard to behavior, reaction times showed that patients are slower, reflecting the language deficit. In the power spectrum, we observed significantly higher power in patients in delta (1–3 Hz), theta (4–7 Hz), beta (15–30 Hz) and gamma (30–70 Hz) frequencies, but not in alpha (8–12 Hz), suggesting an atypical oscillatory profile. The time–frequency analysis revealed significantly weaker event-related theta synchronization (ERS) and alpha desynchronization (ERD) in patients in the sentence condition. The data support the hypothesis that CG language difficulties relate to theta–alpha brain oscillations.

    Additional information

    table S1 and S2
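
    The power-spectrum comparison reported above reduces, per channel and participant, to averaging spectral power within the named frequency bands. A minimal sketch using Welch's method on a simulated channel (the study's TFR analysis and statistics are not reproduced here):

        import numpy as np
        from scipy.signal import welch

        fs = 500  # Hz, assumed EEG sampling rate
        rng = np.random.default_rng(2)
        eeg = rng.standard_normal(fs * 120)  # stand-in for one channel

        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

        bands = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 12),
                 "beta": (15, 30), "gamma": (30, 70)}
        band_power = {name: psd[(freqs >= lo) & (freqs <= hi)].mean()
                      for name, (lo, hi) in bands.items()}
        print(band_power)  # compare such values between patients and controls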
  • Mickan, A., Slesareva, E., McQueen, J. M., & Lemhöfer, K. (2024). New in, old out: Does learning a new language make you forget previously learned foreign languages? Quarterly Journal of Experimental Psychology, 77(3), 530-550. doi:10.1177/17470218231181380.

    Abstract

    Anecdotal evidence suggests that learning a new foreign language (FL) makes you forget previously learned FLs. To seek empirical evidence for this claim, we tested whether learning words in a previously unknown L3 hampers subsequent retrieval of their L2 translation equivalents. In two experiments, Dutch native speakers with knowledge of English (L2), but not Spanish (L3), first completed an English vocabulary test, based on which 46 participant-specific, known English words were chosen. Half of those were then learned in Spanish. Finally, participants’ memory for all 46 English words was probed again in a picture naming task. In Experiment 1, all tests took place within one session. In Experiment 2, we separated the English pre-test from Spanish learning by a day and manipulated the timing of the English post-test (immediately after learning vs. 1 day later). By separating the post-test from Spanish learning, we asked whether consolidation of the new Spanish words would increase their interference strength. We found significant main effects of interference in naming latencies and accuracy: Participants sped up less and were less accurate when recalling words in English for which they had learned Spanish translations, compared with words for which they had not. Consolidation time did not significantly affect these interference effects. Thus, learning a new language indeed comes at the cost of subsequent retrieval ability in other FLs. Such interference effects set in immediately after learning and do not need time to emerge, even when the other FL has been known for a long time.

    Additional information

    supplementary material
  • Mooijman, S. (2024). Control of language in bilingual speakers with and without aphasia. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Mooijman, S., Schoonen, R., Roelofs, A., & Ruiter, M. B. (2024). Benefits of free language choice in bilingual individuals with aphasia. Aphasiology, 38(11), 1793-1831. doi:10.1080/02687038.2024.2326239.

    Abstract

    Background

    Forced switching between languages poses demands on control abilities, which may be difficult for bilinguals with aphasia to meet. Freely choosing languages has been shown to increase naming efficiency in healthy bilinguals, and lexical accessibility was found to be a predictor of language choice. The overlap between bilingual language switching and other types of switching remains unclear.

    Aims

    This study aimed to examine the benefits of free language choice for bilinguals with aphasia and to investigate the overlap of between- and within-language switching abilities.

    Methods & Procedures

    Seventeen bilinguals with aphasia completed a questionnaire and four web-based picture naming tasks: single-language naming in the first and second language separately; voluntary switching between languages; cued and predictable switching between languages; cued and predictable switching between phrase types in the first language. Accuracy and naming latencies were analysed using (generalised) linear mixed-effects models.

    Outcomes & Results

    The results showed higher accuracy and faster naming for the voluntary switching condition compared to single-language naming and cued switching. Both voluntary and cued language switching yielded switch costs, and voluntary switch costs were larger. Ease of lexical access was a reliable predictor for voluntary language choice. We obtained no statistical evidence for differences or associations between switch costs in between- and within-language switching.

    Conclusions

    Several results point to benefits of voluntary language switching for bilinguals with aphasia. Freely mixing languages improved naming accuracy and speed, and ease of lexical access affected language choice. There was no statistical evidence for overlap of between- and within-language switching abilities. This study highlights the benefits of free language choice for bilinguals with aphasia.
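
    The latency analyses described under Methods & Procedures are standard (generalised) linear mixed-effects models. Below is a sketch of the latency model in Python's statsmodels on a hypothetical long-format trial table; the column names are invented, and accuracy would require a generalised (e.g., logistic) mixed model instead.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical data: one row per naming trial, with condition in
        # {"single", "voluntary", "cued"}.
        df = pd.read_csv("naming_trials.csv")

        # Log naming latency with a random intercept per participant and a
        # variance component for items.
        model = smf.mixedlm(
            "log_rt ~ C(condition, Treatment('single'))",
            data=df,
            groups=df["participant"],
            vc_formula={"item": "0 + C(item)"},
        )
        print(model.fit().summary())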
  • Mooijman, S., Schoonen, R., Ruiter, M. B., & Roelofs, A. (2024). Voluntary and cued language switching in late bilingual speakers. Bilingualism: Language and Cognition, 27(4), 610-627. doi:10.1017/S1366728923000755.

    Abstract

    Previous research examining the factors that determine language choice and voluntary switching mainly involved early bilinguals. Here, using picture naming, we investigated language choice and switching in late Dutch–English bilinguals. We found that naming was overall slower in cued than in voluntary switching, but switch costs occurred in both types of switching. The magnitude of switch costs differed depending on the task and language, and was moderated by L2 proficiency. Self-rated rather than objectively assessed proficiency predicted voluntary switching and ease of lexical access was associated with language choice. Between-language and within-language switch costs were not correlated. These results highlight self-rated proficiency as a reliable predictor of voluntary switching, with language modulating switch costs. As in early bilinguals, ease of lexical access was related to word-level language choice of late bilinguals.
  • Papoutsi*, C., Zimianiti*, E., Bosker, H. R., & Frost, R. L. A. (2024). Statistical learning at a virtual cocktail party. Psychonomic Bulletin & Review, 31, 849-861. doi:10.3758/s13423-023-02384-1.

    Abstract

    * These two authors contributed equally to this study
    Statistical learning – the ability to extract distributional regularities from input – is suggested to be key to language acquisition. Yet, evidence for the human capacity for statistical learning comes mainly from studies conducted in carefully controlled settings without auditory distraction. While such conditions permit careful examination of learning, they do not reflect the naturalistic language learning experience, which is replete with auditory distraction – including competing talkers. Here, we examine how statistical language learning proceeds in a virtual cocktail party environment, where the to-be-learned input is presented alongside a competing speech stream with its own distributional regularities. During exposure, participants in the Dual Talker group concurrently heard two novel languages, one produced by a female talker and one by a male talker, with each talker virtually positioned at opposite sides of the listener (left/right) using binaural acoustic manipulations. Selective attention was manipulated by instructing participants to attend to only one of the two talkers. At test, participants were asked to distinguish words from part-words for both the attended and the unattended languages. Results indicated that participants’ accuracy was significantly higher for trials from the attended vs. unattended language. Further, the performance of this Dual Talker group was no different from that of a control group who heard only one language from a single talker (Single Talker group). We thus conclude that statistical learning is modulated by selective attention, being relatively robust against the additional cognitive load provided by competing speech, emphasizing its efficiency in naturalistic language learning situations.

    Additional information

    supplementary file
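
    The "distributional regularities" at stake are transitional probabilities between syllables: high within words, low across word boundaries. A self-contained toy version of such a language (syllables and words invented, in the Saffran tradition; not the study's stimuli):

        import random
        from collections import Counter

        # Trisyllabic nonsense words concatenated without pauses.
        words = [("bi", "da", "ku"), ("pa", "do", "ti"), ("go", "la", "bu")]
        random.seed(0)
        stream = [s for _ in range(300) for s in random.choice(words)]

        # Forward transitional probability TP(x -> y) = P(y | x).
        pair_counts = Counter(zip(stream, stream[1:]))
        syll_counts = Counter(stream[:-1])
        tp = lambda x, y: pair_counts[(x, y)] / syll_counts[x]

        print("within-word TP: ", tp("bi", "da"))  # high, close to 1.0
        print("across-word TP:", tp("ku", "pa"))   # low, around 1/3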
  • Peirolo, M., Meyer, A. S., & Frances, C. (2024). Investigating the causes of prosodic marking in self-repairs: An automatic process? In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 1080-1084). doi:10.21437/SpeechProsody.2024-218.

    Abstract

    Natural speech involves repair. These repairs are often highlighted through prosodic marking (Levelt & Cutler, 1983). Prosodic marking usually entails an increase in pitch, loudness, and/or duration that draws attention to the corrected word. While it is established that natural self-repairs typically elicit prosodic marking, the exact cause of this is unclear. This study investigates whether prosodic marking emerges from an automatic correction process or serves a communicative purpose. In the current study, we elicited corrections to test whether all self-corrections elicit prosodic marking. Participants carried out a picture-naming task in which they described two images presented on-screen. To prompt self-correction, the second image was altered in some cases, requiring participants to abandon their initial utterance and correct their description to match the new image. This manipulation was compared to a control condition in which only the orientation of the object changed, eliciting no self-correction while still presenting a visual change. We found that the replacement of the item did not elicit prosodic marking, regardless of the type of change. Theoretical implications and research directions are discussed, in particular with respect to theories of prosodic planning.
  • Quaresima, A. (2024). A Bridge not too far: Neurobiological causal models of word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • de Reus, K., Benítez-Burraco, A., Hersh, T. A., Groot, N., Lambert, M. L., Slocombe, K. E., Vernes, S. C., & Raviv, L. (2024). Self-domestication traits in vocal learning mammals. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 105-108). Nijmegen: The Evolution of Language Conferences.
  • Rohrer, P. L., Bujok, R., Van Maastricht, L., & Bosker, H. R. (2024). The timing of beat gestures affects lexical stress perception in Spanish. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings Speech Prosody 2024 (pp. 702-706). doi:10.21437/SpeechProsody.2024-142.

    Abstract

    It has been shown that when speakers produce hand gestures, addressees are attentive towards these gestures, using them to facilitate speech processing. Even relatively simple “beat” gestures are taken into account to help process aspects of speech such as prosodic prominence. In fact, recent evidence suggests that the timing of a beat gesture can influence spoken word recognition: in what has been termed the manual McGurk effect, Dutch participants presented with lexical stress minimal pair continua in Dutch were biased to hear lexical stress on the syllable that coincided with a beat gesture. However, little is known about how this manual McGurk effect would surface in languages other than Dutch, with different acoustic cues to prominence and variable gestures. Therefore, this study tests the effect in Spanish, where lexical stress is arguably even more important, being a contrastive cue in the regular verb conjugation system. Results from 24 participants corroborate the effect in Spanish: when given the same auditory stimulus, participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture. These findings extend the manual McGurk effect to a different language, emphasizing the impact of gestures' timing on prosody perception and spoken word recognition.
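
    Effects like this are typically read off a psychometric function: the proportion of "initial stress" responses along the auditory continuum, fit separately per gesture-timing condition, with the manual McGurk effect appearing as a boundary shift. A sketch with invented response proportions (not the study's data):

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical response proportions along a 7-step stress continuum,
        # with the beat gesture on syllable 1 vs syllable 2.
        steps = np.arange(1, 8)
        p_gesture_s1 = np.array([0.95, 0.90, 0.80, 0.65, 0.45, 0.30, 0.15])
        p_gesture_s2 = np.array([0.90, 0.80, 0.60, 0.40, 0.25, 0.15, 0.05])

        def logistic(x, x0, k):
            """Decreasing logistic with category boundary x0 and slope k."""
            return 1.0 / (1.0 + np.exp(k * (x - x0)))

        # A shift of the boundary (x0) between gesture conditions is the
        # signature of the manual McGurk effect.
        for label, y in (("gesture on syllable 1", p_gesture_s1),
                         ("gesture on syllable 2", p_gesture_s2)):
            (x0, k), _ = curve_fit(logistic, steps, y, p0=[4.0, 1.0])
            print(f"{label}: boundary at step {x0:.2f}")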
  • Roos, N. M., Chauvet, J., & Piai, V. (2024). The Concise Language Paradigm (CLaP), a framework for studying the intersection of comprehension and production: Electrophysiological properties. Brain Structure and Function, 229, 2097-2113. doi:10.1007/s00429-024-02801-8.

    Abstract

    Studies investigating language commonly isolate one modality or process, focusing on comprehension or production. Here, we present a framework for a paradigm that combines both: the Concise Language Paradigm (CLaP), tapping into comprehension and production within one trial. The trial structure is identical across conditions, presenting a sentence followed by a picture to be named. We tested 21 healthy speakers with EEG to examine three time periods during a trial (sentence, pre-picture interval, picture onset), yielding contrasts of sentence comprehension, contextually and visually guided word retrieval, object recognition, and naming. In the CLaP, sentences are presented auditorily (constrained, unconstrained, reversed), and pictures appear as normal (constrained, unconstrained, bare) or scrambled objects. Imaging results revealed different evoked responses after sentence onset for normal and time-reversed speech. Further, we replicated the context effect of alpha-beta power decreases before picture onset for constrained relative to unconstrained sentences, and could clarify that this effect arises from power decreases following constrained sentences. Brain responses locked to picture-onset differed as a function of sentence context and picture type (normal vs. scrambled), and naming times were fastest for pictures in constrained sentences, followed by scrambled picture naming, and equally fast for bare and unconstrained picture naming. Finally, we also discuss the potential of the CLaP to be adapted to different focuses, using different versions of the linguistic content and tasks, in combination with electrophysiology or other imaging methods. These first results of the CLaP indicate that this paradigm offers a promising framework to investigate the language system.
  • Sander, J., Çetinçelik, M., Zhang, Y., Rowland, C. F., & Harmon, Z. (2024). Why does joint attention predict vocabulary acquisition? The answer depends on what coding scheme you use. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1607-1613).

    Abstract

    Despite decades of study, we still know less than we would like about the association between joint attention (JA) and language acquisition. This is partly because of disagreements on how to operationalise JA. In this study, we examine the impact of applying two different, influential JA operationalisation schemes to the same dataset of child-caregiver interactions, to determine which yields a better fit to children's later vocabulary size. Two coding schemes—one defining JA in terms of gaze overlap and one in terms of social aspects of shared attention—were applied to video-recordings of dyadic naturalistic toy-play interactions (N=45). We found that JA was predictive of later production vocabulary when operationalised as shared focus (study 1), but also that its operationalisation as shared social awareness increased its predictive power (study 2). Our results emphasise the critical role of methodological choices in understanding how and why JA is associated with vocabulary size.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.

    Abstract

    Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers.
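
    The per-talker cue weights come from Linear Discriminant Analysis over acoustic cues. A minimal sketch for one talker with scikit-learn on simulated measurements (real cue values would be measured from the recordings):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)

        # Simulated tokens for one talker: three cues per token, and the
        # stress category (0 = initial, 1 = final), generated here with
        # known cue weights so that LDA can recover them.
        n = 120
        X = rng.standard_normal((n, 3))  # columns: mean F0, intensity, duration
        y = (X @ np.array([1.2, 0.8, 0.5]) + rng.standard_normal(n) > 0).astype(int)

        # Standardize cues so coefficients are comparable, then read the LDA
        # coefficients as this talker's cue-weighting profile.
        lda = LinearDiscriminantAnalysis().fit(StandardScaler().fit_transform(X), y)
        for cue, w in zip(["mean F0", "intensity", "duration"], lda.coef_[0]):
            print(f"{cue}: weight = {w:+.2f}")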
  • Severijnen, G. G. A., Gärtner, V. M., Walther, R. F. E., & McQueen, J. M. (2024). Talker-specific perceptual learning about lexical stress: stability over time. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 657-661). doi:10.21437/SpeechProsody.2024-133.

    Abstract

    Talkers vary in how they speak, resulting in acoustic variability in segments and prosody. Previous studies showed that listeners deal with segmental variability through perceptual learning and that these learning effects are stable over time. The present study examined whether this is also true for lexical stress variability. Listeners heard Dutch minimal pairs (e.g., VOORnaam vs. voorNAAM, ‘first name’ vs. ‘respectable’) spoken by two talkers. Half of the participants heard Talker 1 using only F0 to signal lexical stress and Talker 2 using only intensity. The other half heard the reverse. After a learning phase, participants were tested on words spoken by these talkers with conflicting stress cues (‘mixed items’; e.g., Talker 1 saying voornaam with F0 signaling initial stress and intensity signaling final stress). We found that, despite the conflicting cues, listeners perceived these items following what they had learned. For example, participants hearing the example mixed item described above who had learned that Talker 1 used F0 perceived initial stress (VOORnaam) but those who had learned that Talker 1 used intensity perceived final stress (voorNAAM). Crucially, this result was still present in a delayed test phase, showing that talker-specific learning about lexical stress is stable over time.
  • Slaats, S. (2024). On the interplay between lexical probability and syntactic structure in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Slaats, S., Meyer, A. S., & Martin, A. E. (2024). Lexical surprisal shapes the time course of syntactic structure building. Neurobiology of Language, 5(4), 942-980. doi:10.1162/nol_a_00155.

    Abstract

    When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data of 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high- and low-surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high surprisal values were associated with a response to the syntactic feature that was delayed by 150–190 ms. The delay was not affected by word duration, and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and highlight the role of time in this process.

    Additional information

    supplementary data
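    The temporal-response-function logic in the abstract above can be made concrete with a toy sketch (not the authors' code; the sampling rate, lag window, and feature time series are stand-ins): build a time-lagged design matrix per feature, fit a ridge-regularized TRF, and compare model fit with and without the syntactic predictor.

    import numpy as np

    def lagged_design(feature, lags):
        # Stack time-lagged copies of one feature column (a basic TRF design).
        X = np.zeros((feature.size, len(lags)))
        for j, lag in enumerate(lags):
            X[lag:, j] = feature[:feature.size - lag]
        return X

    fs = 120                                   # Hz, stand-in sampling rate
    n = fs * 60                                # one minute of toy data
    rng = np.random.default_rng(1)
    surprisal = rng.exponential(size=n)        # stand-in lexical feature
    node_count = rng.poisson(2, size=n).astype(float)  # stand-in syntactic feature

    lags = range(int(0.4 * fs))                # 0-400 ms of lags
    X_lex = lagged_design(surprisal, lags)
    X_full = np.hstack([X_lex, lagged_design(node_count, lags)])

    # Toy "MEG channel" that genuinely contains both features plus noise.
    y = X_full @ rng.normal(size=X_full.shape[1]) + rng.normal(size=n)

    def fit_ridge(X, y, alpha=1.0):
        # w = (X'X + aI)^(-1) X'y
        return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

    # Cumulative comparison: does adding the syntactic feature improve fit?
    for name, X in [("lexical only", X_lex), ("+ node count", X_full)]:
        r = np.corrcoef(X @ fit_ridge(X, y), y)[0, 1]
        print(f"{name}: r = {r:.3f}")

    In a real analysis the correlation would be computed on held-out data; both models are evaluated in-sample here purely to show the shape of the comparison.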
  • Sommers, R. P. (2024). Neurobiology of reference. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Stärk, K. (2024). The company language keeps: How distributional cues influence statistical learning for language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Titus, A., & Peeters, D. (2024). Multilingualism at the market: A pre-registered immersive virtual reality study of bilingual language switching. Journal of Cognition, 7(1), 24-35. doi:10.5334/joc.359.

    Abstract

    Bilinguals, by definition, are capable of expressing themselves in more than one language. But which cognitive mechanisms allow them to switch from one language to another? Previous experimental research using the cued language-switching paradigm supports theoretical models that assume that both transient, reactive and sustained, proactive inhibitory mechanisms underlie bilinguals’ capacity to flexibly and efficiently control which language they use. Here we used immersive virtual reality to test the extent to which these inhibitory mechanisms may be active when unbalanced Dutch-English bilinguals i) produce full sentences rather than individual words, ii) to a life-size addressee rather than only into a microphone, iii) using a message that is relevant to that addressee rather than communicatively irrelevant, iv) in a rich visual environment rather than in front of a computer screen. We observed a reversed language dominance paired with switch costs for the L2 but not for the L1 when participants were stand owners in a virtual marketplace and informed their monolingual customers in full sentences about the price of their fruits and vegetables. These findings strongly suggest that the subtle balance between the application of reactive and proactive inhibitory mechanisms that support bilingual language control may be different in the everyday life of a bilingual compared to in the (traditional) psycholinguistic laboratory.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).

    Abstract

    The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
  • Ünal, E., Mamus, E., & Özyürek, A. (2024). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition, 16(4), 785-804. doi:10.1017/langcog.2023.61.

    Abstract

    How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Zhao, J., Martin, A. E., & Coopmans, C. W. (2024). Structural and sequential regularities modulate phrase-rate neural tracking. Scientific Reports, 14: 16603. doi:10.1038/s41598-024-67153-z.

    Abstract

    Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech—so-called phrase-rate neural tracking. Current debate centers around whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of EEG responses. As expected, neural tracking at the phrase rate was stronger in grammatical sequences than in non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation in guiding rhythmic computation.
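    The two tracking measures named above have compact definitions; the sketch below (toy data and hypothetical parameters, not the authors' pipeline) computes both from frequency-domain trial spectra: evoked power from the trial-averaged spectrum, and inter-trial phase coherence (ITPC) as the length of the mean unit phase vector across trials.

    import numpy as np

    rng = np.random.default_rng(2)
    fs, n_trials = 250, 40
    n_samples = fs * 4                       # 4-s trials, hypothetical
    phrase_rate = 1.0                        # Hz: one two-word phrase per second

    t = np.arange(n_samples) / fs
    # Toy EEG: a phrase-rate oscillation with jittered phase plus noise.
    phase_jitter = rng.normal(0.0, 0.3, (n_trials, 1))
    trials = np.sin(2 * np.pi * phrase_rate * t + phase_jitter) \
             + rng.normal(0.0, 1.0, (n_trials, n_samples))

    spectra = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(n_samples, 1 / fs)

    # Evoked power: power of the trial-averaged spectrum (phase-locked activity).
    evoked_power = np.abs(spectra.mean(axis=0)) ** 2
    # ITPC: length of the mean unit phase vector across trials, in [0, 1].
    itpc = np.abs(np.exp(1j * np.angle(spectra)).mean(axis=0))

    f_idx = np.argmin(np.abs(freqs - phrase_rate))
    print(f"ITPC at {freqs[f_idx]:.2f} Hz: {itpc[f_idx]:.2f}")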
  • Byun, K.-S., De Vos, C., Bradford, A., Zeshan, U., & Levinson, S. C. (2018). First encounters: Repair sequences in cross-signing. Topics in Cognitive Science, 10(2), 314-334. doi:10.1111/tops.12303.

    Abstract

    Most human communication is between people who speak or sign the same languages. Nevertheless, communication is to some extent possible where there is no language in common, as every tourist knows. How this works is of some theoretical interest (Levinson 2006). A nice arena to explore this capacity is when deaf signers of different languages meet for the first time, and are able to use the iconic affordances of sign to begin communication. Here we focus on Other-Initiated Repair (OIR), that is, where one signer makes clear he or she does not understand, thus initiating repair of the prior conversational turn. OIR sequences are typically of a three-turn structure (Schegloff 2007) including the problem source turn (T-1), the initiation of repair (T0), and the turn offering a problem solution (T+1). These sequences seem to have a universal structure (Dingemanse et al. 2013). We find that in most cases where such OIRs occur, the signer of the troublesome turn (T-1) foresees potential difficulty, and marks the utterance with 'try markers' (Sacks & Schegloff 1979, Moerman 1988) which pause to invite recognition. The signers use repetition, gestural holds, prosodic lengthening and eye gaze at the addressee as such try-markers. Moreover, when T-1 is try-marked, this allows for faster response times of T+1 with respect to T0. This finding suggests that signers in these 'first encounter' situations actively anticipate potential trouble and, through try-marking, mobilize and facilitate OIRs. The suggestion is that heightened meta-linguistic awareness can be utilized to deal with these problems at the limits of our communicational ability.
  • Byun, K.-S., De Vos, C., Roberts, S. G., & Levinson, S. C. (2018). Interactive sequences modulate the selection of expressive forms in cross-signing. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 67-69). Toruń, Poland: NCU Press. doi:10.12775/3991-1.012.
  • Croijmans, I. (2018). Wine expertise shapes olfactory language and cognition. PhD Thesis, Radboud University, Nijmegen.
  • Dai, B., Chen, C., Long, Y., Zheng, L., Zhao, H., Bai, X., Liu, W., Zhang, Y., Liu, L., Guo, T., Ding, G., & Lu, C. (2018). Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nature Communications, 9: 2405. doi:10.1038/s41467-018-04819-z.

    Abstract

    The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.

    Additional information

    Dai_etal_2018_sup.pdf
  • Den Hoed, J., Sollis, E., Venselaar, H., Estruch, S. B., Derizioti, P., & Fisher, S. E. (2018). Functional characterization of TBR1 variants in neurodevelopmental disorder. Scientific Reports, 8: 14279. doi:10.1038/s41598-018-32053-6.

    Abstract

    Recurrent de novo variants in the TBR1 transcription factor are implicated in the etiology of sporadic autism spectrum disorders (ASD). Disruptions include missense variants located in the T-box DNA-binding domain and previous work has demonstrated that they disrupt TBR1 protein function. Recent screens of thousands of simplex families with sporadic ASD cases uncovered additional T-box variants in TBR1 but their etiological relevance is unclear. We performed detailed functional analyses of de novo missense TBR1 variants found in the T-box of ASD cases, assessing many aspects of protein function, including subcellular localization, transcriptional activity and protein-interactions. Only two of the three tested variants severely disrupted TBR1 protein function, despite in silico predictions that all would be deleterious. Furthermore, we characterized a putative interaction with BCL11A, a transcription factor that was recently implicated in a neurodevelopmental syndrome involving developmental delay and language deficits. Our findings enhance understanding of molecular functions of TBR1, as well as highlighting the importance of functional testing of variants that emerge from next-generation sequencing, to decipher their contributions to neurodevelopmental disorders like ASD.

    Additional information

    Electronic supplementary material
  • Devanna, P., Chen, X. S., Ho, J., Gajewski, D., Smith, S. D., Gialluisi, A., Francks, C., Fisher, S. E., Newbury, D. F., & Vernes, S. C. (2018). Next-gen sequencing identifies non-coding variation disrupting miRNA binding sites in neurological disorders. Molecular Psychiatry, 23(5), 1375-1384. doi:10.1038/mp.2017.30.

    Abstract

    Understanding the genetic factors underlying neurodevelopmental and neuropsychiatric disorders is a major challenge given their prevalence and potential severity for quality of life. While large-scale genomic screens have made major advances in this area, for many disorders the genetic underpinnings are complex and poorly understood. To date the field has focused predominantly on protein coding variation, but given the importance of tightly controlled gene expression for normal brain development and disorder, variation that affects non-coding regulatory regions of the genome is likely to play an important role in these phenotypes. Herein we show the importance of 3′ untranslated region (3′UTR) non-coding regulatory variants across neurodevelopmental and neuropsychiatric disorders. We devised a pipeline for identifying and functionally validating putatively pathogenic variants from next generation sequencing (NGS) data. We applied this pipeline to a cohort of children with severe specific language impairment (SLI) and identified a functional, SLI-associated variant affecting gene regulation in cells and post-mortem human brain. This variant and the affected gene (ARHGEF39) represent new putative risk factors for SLI. Furthermore, we identified 3′UTR regulatory variants across autism, schizophrenia and bipolar disorder NGS cohorts demonstrating their impact on neurodevelopmental and neuropsychiatric disorders. Our findings show the importance of investigating non-coding regulatory variants when determining risk factors contributing to neurodevelopmental and neuropsychiatric disorders. In the future, integration of such regulatory variation with protein coding changes will be essential for uncovering the genetic causes of complex neurological disorders and the fundamental mechanisms underlying health and disease.

    Additional information

    mp201730x1.docx
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2018). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. In I. Nikolaeva (Ed.), Linguistic Typology: Critical Concepts in Linguistics. Vol. 4 (pp. 322-357). London: Routledge.

    Abstract

    In conversation, people regularly deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
  • Drijvers, L., & Trujillo, J. P. (2018). Commentary: Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. Frontiers in Human Neuroscience, 12: 256. doi:10.3389/fnhum.2018.00256.

    Abstract

    A commentary on
    Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration

    by Zhao, W., Riggs, K., Schindler, I., and Holle, H. (2018). J. Neurosci. doi: 10.1523/JNEUROSCI.1748-17.2017
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Alpha and beta oscillations index semantic congruency between speech and gestures in clear and degraded speech. Journal of Cognitive Neuroscience, 30(8), 1086-1097. doi:10.1162/jocn_a_01301.

    Abstract

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Hearing and seeing meaning in noise: Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension. Human Brain Mapping, 39(5), 2075-2087. doi:10.1002/hbm.23987.

    Abstract

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.

    Additional information

    hbm23987-sup-0001-suppinfo01.docx
  • Drijvers, L., & Ozyurek, A. (2018). Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language, 177-178, 7-17. doi:10.1016/j.bandl.2018.01.003.

    Abstract

    Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
  • Drozdova, P. (2018). The effects of nativeness and background noise on the perceptual learning of voices and ambiguous sounds. PhD Thesis, Radboud University, Nijmegen.
  • Duarte, R., Uhlmann, M., Van den Broek, D., Fitz, H., Petersson, K. M., & Morrison, A. (2018). Encoding symbolic sequences with spiking neural reservoirs. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN). doi:10.1109/IJCNN.2018.8489114.

    Abstract

    Biologically inspired spiking networks are an important tool to study the nature of computation and cognition in neural systems. In this work, we investigate the representational capacity of spiking networks engaged in an identity mapping task. We compare two schemes for encoding symbolic input, one in which input is injected as a direct current and one where input is delivered as a spatio-temporal spike pattern. We test the ability of networks to discriminate their input as a function of the number of distinct input symbols. We also compare performance using either membrane potentials or filtered spike trains as the state variable. Furthermore, we investigate how the circuit behavior depends on the balance between excitation and inhibition, and the degree of synchrony and regularity in its internal dynamics. Finally, we compare different linear methods of decoding population activity onto desired target labels. Overall, our results suggest that even this simple mapping task is strongly influenced by design choices on input encoding, state variables, circuit characteristics and decoding methods, and these factors can interact in complex ways. This work highlights the importance of constraining computational network models of behavior by available neurobiological evidence.
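    The two input-encoding schemes compared in the abstract can be sketched as follows (a toy illustration under assumed parameters, not the paper's implementation): a symbol is delivered either as a constant current into a fixed random subset of afferents, or as a frozen spatio-temporal spike raster, so that symbol identity is carried by spike timing.

    import numpy as np

    rng = np.random.default_rng(3)
    n_neurons, n_symbols = 100, 8
    steps = 50                               # 50 time bins of 1 ms per symbol

    # Scheme 1: direct current -- each symbol drives a fixed random subset
    # of afferents with a constant current for the symbol's duration.
    dc_codes = (rng.random((n_symbols, n_neurons)) < 0.2).astype(float)

    def encode_dc(symbol):
        return np.tile(dc_codes[symbol], (steps, 1))           # (time, neurons)

    # Scheme 2: spatio-temporal spike pattern -- each symbol is a fixed
    # ("frozen") Poisson raster.
    rate_hz, dt_s = 40.0, 0.001
    spike_codes = rng.random((n_symbols, steps, n_neurons)) < rate_hz * dt_s

    def encode_spikes(symbol):
        return spike_codes[symbol].astype(float)               # (time, neurons)

    seq = rng.integers(0, n_symbols, size=5)                   # a toy symbol sequence
    dc_input = np.concatenate([encode_dc(s) for s in seq])
    spk_input = np.concatenate([encode_spikes(s) for s in seq])
    print(dc_input.shape, spk_input.shape, f"spike density: {spk_input.mean():.3f}")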
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Estruch, S. B., Graham, S. A., Quevedo, M., Vino, A., Dekkers, D. H. W., Deriziotis, P., Sollis, E., Demmers, J., Poot, R. A., & Fisher, S. E. (2018). Proteomic analysis of FOXP proteins reveals interactions between cortical transcription factors associated with neurodevelopmental disorders. Human Molecular Genetics, 27(7), 1212-1227. doi:10.1093/hmg/ddy035.

    Abstract

    FOXP transcription factors play important roles in neurodevelopment, but little is known about how their transcriptional activity is regulated. FOXP proteins cooperatively regulate gene expression by forming homo- and hetero-dimers with each other. Physical associations with other transcription factors might also modulate the functions of FOXP proteins. However, few FOXP-interacting transcription factors have been identified so far. Therefore, we sought to discover additional transcription factors that interact with the brain-expressed FOXP proteins, FOXP1, FOXP2 and FOXP4, through affinity-purifications of protein complexes followed by mass spectrometry. We identified seven novel FOXP-interacting transcription factors (NR2F1, NR2F2, SATB1, SATB2, SOX5, YY1 and ZMYM2), five of which have well-established roles in cortical development. Accordingly, we found that these transcription factors are co-expressed with FoxP2 in the deep layers of the cerebral cortex and also in the Purkinje cells of the cerebellum, suggesting that they may cooperate with the FoxPs to regulate neural gene expression in vivo. Moreover, we demonstrated that etiological mutations of FOXP1 and FOXP2, known to cause neurodevelopmental disorders, severely disrupted the interactions with FOXP-interacting transcription factors. Additionally, we pinpointed specific regions within FOXP2 sequence involved in mediating these interactions. Thus, by expanding the FOXP interactome we have uncovered part of a broader neural transcription factor network involved in cortical development, providing novel molecular insights into the transcriptional architecture underlying brain development and neurodevelopmental disorders.
  • Estruch, S. B. (2018). Characterization of transcription factors in monogenic disorders of speech and language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Frank, S. L., & Yang, J. (2018). Lexical representation explains cortical entrainment during speech comprehension. PLoS One, 13(5): e0197304. doi:10.1371/journal.pone.0197304.

    Abstract

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
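    The paper's core claim lends itself to a toy demonstration (an illustration of the general idea, not the authors' model): if word-level properties of an isochronous word stream vary periodically with position in the stream, the resulting signal shows spectral peaks at phrase and sentence rates even though no word-level representations are ever combined into larger units.

    import numpy as np

    rng = np.random.default_rng(4)
    fs, word_dur, n_words = 100, 0.25, 256     # 4 words/s, isochronous stream

    # A purely lexical signal: each word contributes one value for its whole
    # duration. Values depend only on position in a repeating 4-word cycle
    # (a stand-in for word-class-linked lexical properties), plus noise.
    class_means = np.array([3.0, 1.0, 2.0, 1.0])
    word_values = class_means[np.arange(n_words) % 4] + 0.3 * rng.normal(size=n_words)
    signal = np.repeat(word_values, int(word_dur * fs))

    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2

    # Peaks appear at the "sentence" (1 Hz) and "phrase" (2 Hz) rates
    # although the generating process knows nothing about syntax.
    for f_target in (1.0, 2.0):
        idx = np.argmin(np.abs(freqs - f_target))
        print(f"power near {f_target} Hz: {power[idx]:.0f}")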
  • Franken, M. K. (2018). Listening for speaking: Investigations of the relationship between speech perception and production. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    Speaking and listening are complex tasks that we perform on a daily basis, almost without conscious effort. Interestingly, speaking almost never occurs without listening: whenever we speak, we at least hear our own speech. The research in this thesis is concerned with how the perception of our own speech influences our speaking behavior. We show that unconsciously, we actively monitor this auditory feedback of our own speech. This way, we can efficiently take action and adapt articulation when an error occurs and auditory feedback does not correspond to our expectation. Processing the auditory feedback of our speech does not, however, automatically affect speech production. It is subject to a number of constraints. For example, we do not just track auditory feedback, but also its consistency. If auditory feedback is more consistent over time, it has a stronger influence on speech production. In addition, we investigated how auditory feedback during speech is processed in the brain, using magnetoencephalography (MEG). The results suggest the involvement of a broad cortical network including both auditory and motor-related regions. This is consistent with the view that the auditory center of the brain is involved in comparing auditory feedback to our expectation of auditory feedback. If this comparison yields a mismatch, motor-related regions of the brain can be recruited to alter the ongoing articulations.

    Additional information

    full text via Radboud Repository
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2018). Language balance and switching ability in children acquiring English as a second language. Journal of Experimental Child Psychology, 173, 168-186. doi:10.1016/j.jecp.2018.03.019.

    Abstract

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4–5, 8–9, and 11–12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch–English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, with around half of the infants however exhibiting a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.

    Abstract

    Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were equally good at both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
  • Hoey, E., & Kendrick, K. H. (2018). Conversation analysis. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 151-173). Hoboken: Wiley.

    Abstract

    Conversation Analysis (CA) is an inductive, micro-analytic, and predominantly qualitative method for studying human social interactions. This chapter describes and illustrates the basic methods of CA. We first situate the method by describing its sociological foundations, key areas of analysis, and particular approach in using naturally occurring data. The bulk of the chapter is devoted to practical explanations of the typical conversation analytic process for collecting data and producing an analysis. We analyze a candidate interactional practice – the assessment-implicative interrogative – using real data extracts as a demonstration of the method, explicitly laying out the relevant questions and considerations for every stage of an analysis. The chapter concludes with some discussion of quantitative approaches to conversational interaction, and links between CA and psycholinguistic concerns.
  • Hoey, E. (2018). How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51(3), 329-346. doi:10.1080/08351813.2018.1485234.

    Abstract

    How do conversational participants continue with turn-by-turn talk after a momentary lapse? If all participants forgo the option to speak at possible sequence completion, an extended silence may emerge that can indicate a lack of anything to talk about next. For the interaction to proceed recognizably as a conversation, the postlapse turn needs to implicate more talk. Using conversation analysis, I examine three practical alternatives regarding sequentially implicative postlapse turns: Participants may move to end the interaction, continue with some prior matter, or start something new. Participants are shown using resources grounded in the interaction’s overall structural organization, the materials from the interaction-so-far, the mentionables they bring to interaction, and the situated environment itself. Comparing these alternatives, there is suggestive quantitative evidence for a preference for continuation. The analysis of lapse resolution shows lapses as places for the management of multiple possible courses of action. Data are in U.S. and UK English.
  • Hömke, P., Holler, J., & Levinson, S. C. (2018). Eye blinks are perceived as communicative signals in human face-to-face interaction. PLoS One, 13(12): e0208030. doi:10.1371/journal.pone.0208030.

    Abstract

    In face-to-face communication, recurring intervals of mutual gaze allow listeners to provide speakers with visual feedback (e.g. nodding). Here, we investigate the potential feedback function of one of the subtlest of human movements—eye blinking. While blinking tends to be subliminal, the significance of mutual gaze in human interaction raises the question whether the interruption of mutual gaze through blinking may also be communicative. To answer this question, we developed a novel, virtual reality-based experimental paradigm, which enabled us to selectively manipulate blinking in a virtual listener, creating small differences in blink duration resulting in ‘short’ (208 ms) and ‘long’ (607 ms) blinks. We found that speakers unconsciously took into account the subtle differences in listeners’ blink duration, producing substantially shorter answers in response to long listener blinks. Our findings suggest that, in addition to physiological, perceptual and cognitive functions, listener blinks are also perceived as communicative signals, directly influencing speakers’ communicative behavior in face-to-face communication. More generally, these findings may be interpreted as shedding new light on the evolutionary origins of mental-state signaling, which is a crucial ingredient for achieving mutual understanding in everyday social interaction.

    Additional information

    Supporting information
  • Huisman, J. L. A., & Majid, A. (2018). Psycholinguistic variables matter in odor naming. Memory & Cognition, 46, 577-588. doi:10.3758/s13421-017-0785-1.

    Abstract

    People from Western societies generally find it difficult to name odors. In trying to explain this, the olfactory literature has proposed several theories that focus heavily on properties of the odor itself but rarely discuss properties of the label used to describe it. However, recent studies show speakers of languages with dedicated smell lexicons can name odors with relative ease. Has the role of the lexicon been overlooked in the olfactory literature? Word production studies show properties of the label, such as word frequency and semantic context, influence naming; but this field of research focuses heavily on the visual domain. The current study combines methods from both fields to investigate word production for olfaction in two experiments. In the first experiment, participants named odors whose veridical labels were either high-frequency or low-frequency words in Dutch, and we found that odors with high-frequency labels were named correctly more often. In the second experiment, edibility was used for manipulating semantic context in search of a semantic interference effect, presenting the odors in blocks of edible and inedible odor source objects to half of the participants. While no evidence was found for a semantic interference effect, an effect of word frequency was again present. Our results demonstrate psycholinguistic variables—such as word frequency—are relevant for olfactory naming, and may, in part, explain why it is difficult to name odors in certain languages. Olfactory researchers cannot afford to ignore properties of an odor’s label.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Agent model reveals the influence of vocal tract anatomy on speech during ontogeny and glossogeny. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 171-174). Toruń, Poland: NCU Press. doi:10.12775/3991-1.042.
  • Janssen, R., & Dediu, D. (2018). Genetic biases affecting language: What do computer models and experimental approaches suggest? In T. Poibeau, & A. Villavicencio (Eds.), Language, Cognition and Computational Models (pp. 256-288). Cambridge: Cambridge University Press.

    Abstract

    Computer models of cultural evolution have shown language properties emerging on interacting agents with a brain that lacks dedicated, nativist language modules. Notably, models using Bayesian agents provide a precise specification of (extra-)linguistic factors (e.g., genetic) that shape language through iterated learning (biases on language), and demonstrate that weak biases get expressed more strongly over time (bias amplification). Other models attempt to lessen assumptions about agents’ innate predispositions even further, and emphasize self-organization within agents, highlighting glossogenesis (the development of language from a nonlinguistic state). Ultimately, however, one also has to recognize that biology and culture are strongly interacting, forming a coevolving system. As such, computer models show that agents might (biologically) evolve to a state predisposed to language adaptability, where (culturally) stable language features might get assimilated into the genome via Baldwinian niche construction. In summary, while many questions about language evolution remain unanswered, it is clear that language is not to be completely understood from a purely biological, cognitivist perspective. Language should be regarded as (partially) emerging from the social interactions between large populations of speakers. In this context, agent models provide a sound approach to investigate the complex dynamics of genetic biasing on language and speech.
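    To make the bias-amplification idea concrete, here is a minimal iterated-learning sketch in the spirit of the Bayesian agent models discussed above (not the chapter's own code; the prior values and chain length are invented). Each learner infers the probability of one binary linguistic variant from the previous learner's output under a weakly biased Beta prior; over generations, production drifts toward the prior's bias.

    import random

    ALPHA, BETA = 1.2, 1.0  # weak Beta prior bias toward variant 1 (assumed values)
    N_UTTERANCES = 10       # data passed from one generation to the next
    GENERATIONS = 50

    def learn_and_produce(data):
        # Posterior-mean learner for a Beta-Binomial model; 'sampling' learners
        # are another standard choice in this literature.
        theta = (ALPHA + sum(data)) / (ALPHA + BETA + len(data))
        return [1 if random.random() < theta else 0 for _ in range(N_UTTERANCES)]

    data = [random.randint(0, 1) for _ in range(N_UTTERANCES)]  # unbiased starting data
    for _ in range(GENERATIONS):
        data = learn_and_produce(data)
    print("proportion of variant 1:", sum(data) / N_UTTERANCES)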
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Modelling human hard palate shape with Bézier curves. PLoS One, 13(2): e0191557. doi:10.1371/journal.pone.0191557.

    Abstract

    People vary at most levels, from the molecular to the cognitive, and the shape of the hard palate (the bony roof of the mouth) is no exception. The patterns of variation in the hard palate are important for the forensic sciences and (palaeo)anthropology, and might also play a role in speech production, both in pathological cases and normal variation. Here we describe a method based on Bézier curves, whose main aim is to generate possible shapes of the hard palate in humans for use in computer simulations of speech production and language evolution. Moreover, our method can also capture existing patterns of variation using few and easy-to-interpret parameters, and fits actual data obtained from MRI traces very well with as few as two or three free parameters. When compared to the widely-used Principal Component Analysis (PCA), our method fits actual data slightly worse for the same number of degrees of freedom. However, it is much better at generating new shapes without requiring a calibration sample, its parameters have clearer interpretations, and their ranges are grounded in geometrical considerations.
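    For readers unfamiliar with the curve family, a minimal sketch of evaluating a cubic Bézier curve with numpy follows; the control points are invented, palate-like values, not the paper's parameterisation.

    import numpy as np

    def cubic_bezier(control_points, n_samples=100):
        """Evaluate a cubic Bezier curve at n_samples points for t in [0, 1]."""
        P = np.asarray(control_points, dtype=float)   # shape (4, 2)
        t = np.linspace(0.0, 1.0, n_samples)[:, None]
        B = np.hstack([(1 - t) ** 3,                  # Bernstein basis, degree 3
                       3 * t * (1 - t) ** 2,
                       3 * t ** 2 * (1 - t),
                       t ** 3])
        return B @ P                                  # (n_samples, 2) curve points

    # Hypothetical palate-like cross-section: flat at the edges, domed in the middle.
    curve = cubic_bezier([(0.0, 0.0), (0.2, 1.0), (0.8, 1.0), (1.0, 0.0)])
    print(curve[:3])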
  • Janssen, R. (2018). Let the agents do the talking: On the influence of vocal tract anatomy on speech during ontogeny. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Kirsch, J. (2018). Listening for the WHAT and the HOW: Older adults' processing of semantic and affective information in speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Koch, X. (2018). Age and hearing loss effects on speech processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Kochari, A. R., & Ostarek, M. (2018). Introducing a replication-first rule for PhD projects (commentary on Zwaan et al., ‘Making replication mainstream’). Behavioral and Brain Sciences, 41: e138. doi:10.1017/S0140525X18000730.

    Abstract

    Zwaan et al. mention that young researchers should conduct replications as a small part of their portfolio. We extend this proposal and suggest that conducting and reporting replications should become an integral part of PhD projects and be taken into account in their assessment. We discuss how this would help not only scientific advancement, but also PhD candidates’ careers.
  • Kolipakam, V. (2018). A holistic approach to understanding pre-history. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Kolipakam, V., Jordan, F., Dunn, M., Greenhill, S. J., Bouckaert, R., Gray, R. D., & Verkerk, A. (2018). A Bayesian phylogenetic study of the Dravidian language family. Royal Society Open Science, 5: 171504. doi:10.1098/rsos.171504.

    Abstract

    The Dravidian language family consists of about 80 varieties (Hammarström H. 2016 Glottolog 2.7) spoken by 220 million people across southern and central India and surrounding countries (Steever SB. 1998 In The Dravidian languages (ed. SB Steever), pp. 1–39: 1). Neither the geographical origin of the Dravidian language homeland nor its exact dispersal through time are known. The history of these languages is crucial for understanding prehistory in Eurasia, because despite their current restricted range, these languages played a significant role in influencing other language groups including Indo-Aryan (Indo-European) and Munda (Austroasiatic) speakers. Here, we report the results of a Bayesian phylogenetic analysis of cognate-coded lexical data, elicited first hand from native speakers, to investigate the subgrouping of the Dravidian language family, and provide dates for the major points of diversification. Our results indicate that the Dravidian language family is approximately 4500 years old, a finding that corresponds well with earlier linguistic and archaeological studies. The main branches of the Dravidian language family (North, Central, South I, South II) are recovered, although the placement of languages within these main branches diverges from previous classifications. We find considerable uncertainty with regard to the relationships between the main branches.
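    As background to the method, cognate-coded lexical data of this kind is typically binarised before Bayesian phylogenetic analysis (e.g., in BEAST). The sketch below shows the standard recoding; the languages, meanings, and cognate classes are invented, not the study's data.

    # meaning -> {language: cognate class id}
    cognates = {"water": {"LangA": 1, "LangB": 1, "LangC": 2},
                "fire":  {"LangA": 1, "LangB": 2, "LangC": 2}}

    languages = ["LangA", "LangB", "LangC"]
    matrix = {lang: [] for lang in languages}
    for meaning, classes in sorted(cognates.items()):
        for cls in sorted(set(classes.values())):
            for lang in languages:
                # 1 if this language's word for the meaning belongs to the class
                matrix[lang].append(1 if classes.get(lang) == cls else 0)

    for lang, row in matrix.items():
        print(lang, "".join(map(str, row)))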
  • Kouwenhoven, H., Ernestus, M., & Van Mulken, M. (2018). Register variation by Spanish users of English. The Nijmegen Corpus of Spanish English. Corpus Linguistics and Linguistic Theory, 14(1), 35-63. doi:10.1515/cllt-2013-0054.

    Abstract

    English serves as a lingua franca in situations with varying degrees of formality. How formality affects non-native speech has rarely been studied. We investigated register variation by Spanish users of English by comparing formal and informal speech from the Nijmegen Corpus of Spanish English that we created. This corpus comprises speech from thirty-four Spanish speakers of English in interaction with Dutch confederates in two speech situations. Formality affected the amount of laughter and overlapping speech and the number of Spanish words. Moreover, formal speech had a more informational character than informal speech. We discuss how our findings relate to register variation in Spanish.

  • Kung, C. (2018). Speech comprehension in a tone language: The role of lexical tone, context, and intonation in Cantonese-Chinese. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Mammalian models for the study of vocal learning: A new paradigm in bats. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 235-237). Toruń, Poland: NCU Press. doi:10.12775/3991-1.056.
  • Lattenkamp, E. Z., & Vernes, S. C. (2018). Vocal learning: A language-relevant trait in need of a broad cross-species approach. Current Opinion in Behavioral Sciences, 21, 209-215. doi:10.1016/j.cobeha.2018.04.007.

    Abstract

    Although humans are unmatched in their capacity to produce speech and learn language, comparative approaches in diverse animal models are able to shed light on the biological underpinnings of language-relevant traits. In the study of vocal learning, a trait crucial for spoken language, passerine birds have been the dominant models, driving invaluable progress in understanding the neurobiology and genetics of vocal learning despite being only distantly related to humans. To date, there is sparse evidence that our closest relatives, nonhuman primates, have the capability to learn new vocalisations. However, a number of other mammals have shown the capacity for vocal learning, such as some cetaceans, pinnipeds, elephants, and bats, and we anticipate that with further study more species will gain membership to this (currently) select club. A broad, cross-species comparison of vocal learning, coupled with careful consideration of the components underlying this trait, is crucial to determine how human speech and spoken language is biologically encoded and how it evolved. We emphasise the need to draw on the pool of promising species that have thus far been understudied or neglected. This is by no means a call for fewer studies in songbirds, or an unfocused treasure-hunt, but rather an appeal for structured comparisons across a range of species, considering phylogenetic relationships, ecological and morphological constraints, developmental and social factors, and neurogenetic underpinnings. Herein, we promote a comparative approach highlighting the importance of studying vocal learning in a broad range of model species, and describe a common framework for targeted cross-taxon studies to shed light on the biology and evolution of vocal learning.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Volitional control of social vocalisations and vocal usage learning in bats. Journal of Experimental Biology, 221(14): jeb.180729. doi:10.1242/jeb.180729.

    Abstract

    Bats are gregarious, highly vocal animals that possess a broad repertoire of social vocalisations. For in-depth studies of their vocal behaviours, including vocal flexibility and vocal learning, it is necessary to gather repeatable evidence from controlled laboratory experiments on isolated individuals. However, such studies are rare for one simple reason: eliciting social calls in isolation and under operant control is challenging and has rarely been achieved. To overcome this limitation, we designed an automated setup that allows conditioning of social vocalisations in a new context, and tracks spectro-temporal changes in the recorded calls over time. Using this setup, we were able to reliably evoke social calls from temporarily isolated lesser spear-nosed bats (Phyllostomus discolor). When we adjusted the call criteria that could result in food reward, bats responded by adjusting temporal and spectral call parameters. This was achieved without the help of an auditory template or social context to direct the bats. Our results demonstrate vocal flexibility and vocal usage learning in bats. Our setup provides a new paradigm that allows the controlled study of the production and learning of social vocalisations in isolated bats, overcoming limitations that have, until now, prevented in-depth studies of these behaviours.

    Additional information

    JEB180729supp.pdf
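    A sketch of the kind of adaptive reward rule such an automated conditioning setup might use (invented for illustration; not the authors' software): a call is rewarded when its duration exceeds a criterion, and the criterion is raised once the animal reliably meets it, pushing calls further from baseline.

    def update_criterion(criterion, hits, step=5.0, window=10, target=8):
        """Raise the duration criterion (ms) after a reliable run of successes."""
        if len(hits) >= window and sum(hits[-window:]) >= target:
            return criterion + step
        return criterion

    criterion = 20.0  # ms; hypothetical starting duration criterion
    hits = []
    for duration in [18, 22, 25, 21, 26, 24, 27, 23, 28, 26, 25, 30]:  # simulated calls
        rewarded = duration >= criterion
        hits.append(rewarded)
        criterion = update_criterion(criterion, hits)
        print(f"call {duration} ms -> rewarded={rewarded}, criterion {criterion} ms")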
