Akamine, S., Ghaleb, E., Rasenberg, M., Fernandez, R., Meyer, A. S., & Özyürek, A. (2024). Speakers align both their gestures and words not only to establish but also to maintain reference to create shared labels for novel objects in interaction. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2435-2442).
Abstract
When we communicate with others, we often repeat aspects of each other's communicative behavior such as sentence structures and words. Such behavioral alignment has been mostly studied for speech or text. Yet, language use is mostly multimodal, flexibly using speech and gestures to convey messages. Here, we explore the use of alignment in speech (words) and co-speech gestures (iconic gestures) in a referential communication task aimed at finding labels for novel objects in interaction. In particular, we investigate how people flexibly use lexical and gestural alignment to create shared labels for novel objects and whether alignment in speech and gesture are related over time. The present study shows that interlocutors establish shared labels multimodally, and alignment in words and iconic gestures are used throughout the interaction. We also show that the amount of lexical alignment positively associates with the amount of gestural alignment over time, suggesting a close relationship between alignment in the vocal and manual modalities.
Additional information
link to eScholarship -
Alvarez van Tussenbroek, I., Knörnschild, M., Nagy, M., Ten Cate, C. J., & Vernes, S. C. (2024). Morphological diversity in the brains of 12 Neotropical Bat species. Acta Chiropterologica, 25(2), 323-338. doi:10.3161/15081109ACC2023.25.2.011.
Abstract
Comparative neurobiology allows us to investigate relationships between phylogeny and the brain and understand the evolution of traits. Bats constitute an attractive group of mammalian species for comparative studies, given their large diversity in behavioural phenotypes, brain morphology, and array of specialised traits. Currently, the order Chiroptera contains over 1,450 species within 21 families and spans ca. 65 million years of evolution. To date, 194 Neotropical bat species (ca. 13% of the total number of species around the world) have been recorded in Central America. This study includes qualitative and quantitative macromorphological descriptions of the brains of 12 species from six families of Neotropical bats. These analyses, which include histological neuronal staining of two species from different families (Phyllostomus hastatus and Saccopteryx bilineata), show substantial diversity in brain macromorphology including brain shape and size, exposure of mesencephalic regions, and cortical and cerebellar fissure depth. Brain macromorphology can in part be explained by phylogeny as species within the same family are more similar to each other. However, macromorphology cannot be explained by evolutionary time alone as brain differences between some phyllostomid bats are larger than between species from the family Emballonuridae despite being of comparable diverging distances in the phylogenetic tree. This suggests that faster evolutionary changes in brain morphology occurred in phyllostomids — although a larger number of species needs to be studied to confirm this. Our results show the rich diversity in brain morphology that bats provide for comparative and evolutionary studies. -
Alvarez van Tussenbroek, I., Knörnschild, M., Nagy, M., O'Toole, B. P., Formenti, G., Philge, P., Zhang, N., Abueg, L., Brajuka, N., Jarvis, E., Volkert, T. L., Gray, J. L., Pieri, M., Mai, M., Teeling, E. C., Vernes, S. C., The Bat Biology Foundation, & The Bat1K Consortium (2024). The genome sequence of Rhynchonycteris naso, Peters, 1867 (Chiroptera, Emballonuridae, Rhynchonycteris). Wellcome Open Research, 9: 361. doi:10.12688/wellcomeopenres.19959.1.
Abstract
We present a reference genome assembly from an individual male Rhynchonycteris naso (Chordata; Mammalia; Chiroptera; Emballonuridae). The genome sequence is 2.46 Gb in span. The majority of the assembly is scaffolded into 22 chromosomal pseudomolecules, with the Y sex chromosome assembled. -
Alvarez van Tussenbroek, I. (2024). Neotropical bat species: An exploration of brain morphology and genetics. PhD Thesis, Leiden University, Leiden.
-
Amelink, J., Postema, M., Kong, X., Schijven, D., Carrion Castillo, A., Soheili-Nezhad, S., Sha, Z., Molz, B., Joliot, M., Fisher, S. E., & Francks, C. (2024). Imaging genetics of language network functional connectivity reveals links with language-related abilities, dyslexia and handedness. Communications Biology, 7: 1209. doi:10.1038/s42003-024-06890-3.
Abstract
Language is supported by a distributed network of brain regions with a particular contribution from the left hemisphere. A multi-level understanding of this network requires studying the genetic architecture of its functional connectivity and hemispheric asymmetry. We used resting state functional imaging data from 29,681 participants from the UK Biobank to measure functional connectivity between 18 left-hemisphere regions implicated in multimodal sentence-level processing, as well as their homotopic regions in the right-hemisphere, and interhemispheric connections. Multivariate genome-wide association analysis of this total network, based on common genetic variants (with population frequencies above 1%), identified 14 loci associated with network functional connectivity. Three of these loci were also associated with hemispheric differences of intrahemispheric connectivity. Polygenic dispositions to lower language-related abilities, dyslexia and left-handedness were associated with generally reduced leftward asymmetry of functional connectivity, but with some trait- and connection-specific exceptions. Exome-wide association analysis based on rare, protein-altering variants (frequencies < 1%) suggested 7 additional genes. These findings shed new light on the genetic contributions to language network connectivity and its asymmetry based on both common and rare genetic variants, and reveal genetic links to language-related traits and hemispheric dominance for hand preference. -
Anijs, M. (2024). Networks within networks: Probing the neuronal and molecular underpinnings of language-related disorders using human cell models. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Bonandrini, R., Gornetti, E., & Paulesu, E. (2024). A meta-analytical account of the functional lateralization of the reading network. Cortex, 177, 363-384. doi:10.1016/j.cortex.2024.05.015.
Abstract
The observation that the neural correlates of reading are left-lateralized is ubiquitous in the cognitive neuroscience and neuropsychological literature. Still, reading is served by a constellation of neural units, and the extent to which these units are consistently left-lateralized is unclear. In this regard, the functional lateralization of the fusiform gyrus is of particular interest, by virtue of its hypothesized role as a “visual word form area”. A quantitative Activation Likelihood Estimation meta-analysis was conducted on activation foci from 35 experiments investigating silent reading, and both a whole-brain and a Bayesian ROI-based approach were used to assess the lateralization of the data submitted to meta-analysis. Perirolandic areas showed the highest level of left-lateralization, the fusiform cortex and the parietal cortex exhibited only a moderate pattern of left-lateralization, while the occipital and insular cortices and the cerebellum showed the lowest lateralization observed. The relatively limited functional lateralization of the fusiform gyrus was further explored in a regression analysis on the lateralization profile of each study. The functional lateralization of the fusiform gyrus during reading was positively associated with the lateralization of the precentral and inferior occipital gyri and negatively associated with the lateralization of the triangular portion of the inferior frontal gyrus and of the temporal pole. Overall, the present data highlight how lateralization patterns differ within the reading network. Furthermore, the present data highlight how the functional lateralization of the fusiform gyrus during reading is related to the degree of functional lateralization of other language brain areas. -
Çetinçelik, M. (2024). A look into language: The role of visual cues in early language acquisition in the infant brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Çetinçelik, M., Jordan‐Barros, A., Rowland, C. F., & Snijders, T. M. (2024). The effect of visual speech cues on neural tracking of speech in 10‐month‐old infants. European Journal of Neuroscience, 60(6), 5381-5399. doi:10.1111/ejn.16492.
Abstract
While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1–1.75 and 2.5–3.5 Hz respectively in our stimuli). First, overall, SBC was compared to surrogate data, and then, differences in SBC in the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.
Additional information
supplementary materials -
Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2024). Does the speaker’s eye gaze facilitate infants’ word segmentation from continuous speech? An ERP study. Developmental Science, 27(2): e13436. doi:10.1111/desc.13436.
Abstract
The environment in which infants learn language is multimodal and rich with social cues. Yet, the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants’ word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio-only. Infants’ familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words. -
Cheung, C.-Y., Kirby, S., & Raviv, L. (2024). The role of gender, social bias and personality traits in shaping linguistic accommodation: An experimental approach. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 80-82). Nijmegen: The Evolution of Language Conferences. doi:10.17617/2.3587960. -
Collins, J. (2024). Linguistic areas and prehistoric migrations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Ding, R., Ten Oever, S., & Martin, A. E. (2024). Delta-band activity underlies referential meaning representation during pronoun resolution. Journal of Cognitive Neuroscience, 36(7), 1472-1492. doi:10.1162/jocn_a_02163.
Abstract
Human language offers a variety of ways to create meaning, one of which is referring to entities, objects, or events in the world. One such meaning maker is understanding to whom or to what a pronoun in a discourse refers. To understand a pronoun, the brain must access matching entities or concepts that have been encoded in memory from previous linguistic context. Models of language processing propose that internally stored linguistic concepts, accessed via exogenous cues such as phonological input of a word, are represented as (a)synchronous activities across a population of neurons active at specific frequency bands. Converging evidence suggests that delta band activity (1–3 Hz) is involved in temporal and representational integration during sentence processing. Moreover, recent advances in the neurobiology of memory suggest that recollection engages neural dynamics similar to those which occurred during memory encoding. Integrating across these two research lines, we here tested the hypothesis that neural dynamic patterns, especially in the delta frequency range, underlying referential meaning representation would be reinstated during pronoun resolution. By leveraging neural decoding techniques (i.e., representational similarity analysis) on a magnetoencephalogram data set acquired during a naturalistic story-listening task, we provide evidence that delta-band activity underlies referential meaning representation. Our findings suggest that, during spoken language comprehension, endogenous linguistic representations such as referential concepts may be proactively retrieved and represented via activation of their underlying dynamic neural patterns. -
Dona, L., & Schouwstra, M. (2024). Balancing regularization and variation: The roles of priming and motivatedness. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 130-133). Nijmegen: The Evolution of Language Conferences. -
Duengen, D., Polotzek, M., O'Sullivan, E., & Ravignani, A. (2024). Anecdotal observations of socially learned vocalizations in harbor seals. Animal Behavior and Cognition, 11, 393-403. doi:10.26451/abc.11.03.04.2024.
Abstract
Harbor seals (Phoca vitulina) are more solitary than many other pinnipeds. Yet, they are capable of vocal learning, a form of social learning. Most extant literature examines social animals when investigating social learning, despite sociality not being a prerequisite. Here, we report two formerly silent harbor seals who initiated vocalizations, after having repeatedly observed a conspecific receiving food rewards for vocalizing. Our observations suggest both social and vocal learning in a group of captive harbor seals, a species that lives semi-solitarily in the wild. We propose that, in this case, social learning acted as a shortcut to acquiring food rewards compared to the comparatively costly asocial learning.
Additional information
Duengen_etal_2024_anecdotal oberservations of....pdf -
Düngen, D., Jadoul, Y., & Ravignani, A. (2024). Vocal usage learning and vocal comprehension learning in harbor seals. BMC Neuroscience, 25: 48. doi:10.1186/s12868-024-00899-4.
Abstract
Background
Which mammals show vocal learning abilities, e.g., can learn new sounds, or learn to use sounds in new contexts? Vocal usage and comprehension learning are submodules of vocal learning. Specifically, vocal usage learning is the ability to learn to use a vocalization in a new context; vocal comprehension learning is the ability to comprehend a vocalization in a new context. Among mammals, harbor seals (Phoca vitulina) are good candidates to investigate vocal learning. Here, we test whether harbor seals are capable of vocal usage and comprehension learning.
Results
We trained two harbor seals to (i) switch contexts from a visual to an auditory cue. In particular, the seals first produced two vocalization types in response to two hand signs; they then transitioned to producing these two vocalization types upon the presentation of two distinct sets of playbacks of their own vocalizations. We then (ii) exposed the seals to a combination of trained and novel vocalization stimuli. In a final experiment, (iii) we broadcasted only novel vocalizations of the two vocalization types to test whether seals could generalize from the trained set of stimuli to only novel items of a given vocal category. Both seals learned all tasks and took ≤ 16 sessions to succeed across all experiments. In particular, the seals showed contextual learning through switching the context from former visual to novel auditory cues, vocal matching and generalization. Finally, by responding to the played-back vocalizations with distinct vocalizations, the animals showed vocal comprehension learning.
Conclusions
It has been suggested that harbor seals are vocal learners; however, to date, these observations had not been confirmed in controlled experiments. Here, through three experiments, we could show that harbor seals are capable of both vocal usage and comprehension learning. -
Eekhof, L. S. (2024). Reading the mind: The relationship between social cognition and narrative processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Eekhof, L. S., & Mar, R. A. (2024). Does reading about fictional minds make us more curious about real ones? Language and Cognition, 16(1), 176-196. doi:10.1017/langcog.2023.30.
Abstract
Although there is a large body of research assessing whether exposure to narratives boosts social cognition immediately afterward, not much research has investigated the underlying mechanism of this putative effect. This experiment investigates the possibility that reading a narrative increases social curiosity directly afterward, which might explain the short-term boosts in social cognition reported by some others. We developed a novel measure of state social curiosity and collected data from participants (N = 222) who were randomly assigned to read an excerpt of narrative fiction or expository nonfiction. Contrary to our expectations, we found that those who read a narrative exhibited less social curiosity afterward than those who read an expository text. This result was not moderated by trait social curiosity. An exploratory analysis uncovered that the degree to which texts present readers with social targets predicted less social curiosity. Our experiment demonstrates that reading narratives, or possibly texts with social content in general, may engage and fatigue social-cognitive abilities, causing a temporary decrease in social curiosity. Such texts might also temporarily satisfy the need for social connection, temporarily reducing social curiosity. Both accounts are in line with theories describing how narratives result in better social cognition over the long term. -
He, J., Frances, C., Creemers, A., & Brehm, L. (2024). Effects of irrelevant unintelligible and intelligible background speech on spoken language production. Quarterly Journal of Experimental Psychology, 77(8), 1745-1769. doi:10.1177/17470218231219971.
Abstract
Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top–down attentional engagement. -
Goltermann*, O., Alagöz*, G., Molz, B., & Fisher, S. E. (2024). Neuroimaging genomics as a window into the evolution of human sulcal organization. Cerebral Cortex, 34(3): bhae078. doi:10.1093/cercor/bhae078.
Abstract
* Ole Goltermann and Gökberk Alagöz contributed equally.
Primate brain evolution has involved prominent expansions of the cerebral cortex, with largest effects observed in the human lineage. Such expansions were accompanied by fine-grained anatomical alterations, including increased cortical folding. However, the molecular bases of evolutionary alterations in human sulcal organization are not yet well understood. Here, we integrated data from recently completed large-scale neuroimaging genetic analyses with annotations of the human genome relevant to various periods and events in our evolutionary history. These analyses identified single-nucleotide polymorphism (SNP) heritability enrichments in fetal brain human-gained enhancer (HGE) elements for a number of sulcal structures, including the central sulcus, which is implicated in human hand dexterity. We zeroed in on a genomic region that harbors DNA variants associated with left central sulcus shape, an HGE element, and genetic loci involved in neurogenesis including ZIC4, to illustrate the value of this approach for probing the complex factors contributing to human sulcal evolution. -
Goncharova, M. V., Jadoul, Y., Reichmuth, C., Fitch, W. T., & Ravignani, A. (2024). Vocal tract dynamics shape the formant structure of conditioned vocalizations in a harbor seal. Annals of the New York Academy of Sciences, 1538(1), 107-116. doi:10.1111/nyas.15189.
Abstract
Formants, or resonance frequencies of the upper vocal tract, are an essential part of acoustic communication. Articulatory gestures—such as jaw, tongue, lip, and soft palate movements—shape formant structure in human vocalizations, but little is known about how nonhuman mammals use those gestures to modify formant frequencies. Here, we report a case study with an adult male harbor seal trained to produce an arbitrary vocalization composed of multiple repetitions of the sound wa. We analyzed jaw movements frame-by-frame and matched them to the tracked formant modulation in the corresponding vocalizations. We found that the jaw opening angle was strongly correlated with the first (F1) and, to a lesser degree, with the second formant (F2). F2 variation was better explained by the jaw angle opening when the seal was lying on his back rather than on the belly, which might derive from soft tissue displacement due to gravity. These results show that harbor seals share some common articulatory traits with humans, where the F1 depends more on the jaw position than F2. We propose further in vivo investigations of seals to further test the role of the tongue on formant modulation in mammalian sound production. -
De Hoyos, L., Barendse, M. T., Schlag, F., Van Donkelaar, M. M. J., Verhoef, E., Shapland, C. Y., Klassmann, A., Buitelaar, J., Verhulst, B., Fisher, S. E., Rai, D., & St Pourcain, B. (2024). Structural models of genome-wide covariance identify multiple common dimensions in autism. Nature Communications, 15: 1770. doi:10.1038/s41467-024-46128-8.
Abstract
Common genetic variation has been associated with multiple symptoms in Autism Spectrum Disorder (ASD). However, our knowledge of shared genetic factor structures contributing to this highly heterogeneous neurodevelopmental condition is limited. Here, we developed a structural equation modelling framework to directly model genome-wide covariance across core and non-core ASD phenotypes, studying autistic individuals of European descent using a case-only design. We identified three independent genetic factors most strongly linked to language/cognition, behaviour and motor development, respectively, when studying a population-representative sample (N=5,331). These analyses revealed novel associations. For example, developmental delay in acquiring personal-social skills was inversely related to language, while developmental motor delay was linked to self-injurious behaviour. We largely confirmed the three-factorial structure in independent ASD-simplex families (N=1,946), but uncovered simplex-specific genetic overlap between behaviour and language phenotypes. Thus, the common genetic architecture in ASD is multi-dimensional and contributes, in combination with ascertainment-specific patterns, to phenotypic heterogeneity. -
Jansen, M. G., Zwiers, M. P., Marques, J. P., Chan, K.-S., Amelink, J., Altgassen, M., Oosterman, J. M., & Norris, D. G. (2024). The Advanced BRain Imaging on ageing and Memory (ABRIM) data collection: Study protocol and rationale. PLOS ONE, 19(6): e0306006. doi:10.1371/journal.pone.0306006.
Abstract
To understand the neurocognitive mechanisms that underlie heterogeneity in cognitive ageing, recent scientific efforts have led to a growing public availability of imaging cohort data. The Advanced BRain Imaging on ageing and Memory (ABRIM) project aims to add to these existing datasets by taking an adult lifespan approach to provide a cross-sectional, normative database with a particular focus on connectivity, myelinization and iron content of the brain in concurrence with cognitive functioning, mechanisms of reserve, and sleep-wake rhythms. ABRIM freely shares MRI and behavioural data from 295 participants between 18–80 years, stratified by age decade and sex (median age 52, IQR 36–66, 53.20% females). The ABRIM MRI collection consists of both the raw and pre-processed structural and functional MRI data to facilitate data usage among both expert and non-expert users. The ABRIM behavioural collection includes measures of cognitive functioning (i.e., global cognition, processing speed, executive functions, and memory), proxy measures of cognitive reserve (e.g., educational attainment, verbal intelligence, and occupational complexity), and various self-reported questionnaires (e.g., on depressive symptoms, pain, and the use of memory strategies in daily life and during a memory task). In a sub-sample (n = 120), we recorded sleep-wake rhythms using an actigraphy device (Actiwatch 2, Philips Respironics) for a period of 7 consecutive days. Here, we provide an in-depth description of our study protocol, pre-processing pipelines, and data availability. ABRIM provides a cross-sectional database on healthy participants throughout the adult lifespan, including numerous parameters relevant to improve our understanding of cognitive ageing. Therefore, ABRIM enables researchers to model the advanced imaging parameters and cognitive topologies as a function of age, to identify the normal range of values of such parameters, and to further investigate the diverse mechanisms of reserve and resilience. -
Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2024). Morphosyntactic predictive processing in adult heritage speakers: Effects of cue availability and spoken and written language experience. Language, Cognition and Neuroscience, 39(1), 118-135. doi:10.1080/23273798.2023.2254424.
Abstract
We investigated prediction skills of adult heritage speakers and the role of written and spoken language experience on predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available) while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing. -
Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2024). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language, 51(4), 876-902. doi:10.1017/S0305000922000642.
Abstract
Expressing Left-Right relations is challenging for speaking-children. Yet, this challenge was absent for signing-children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking-children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish-Sign-Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech-only. This pattern was more prominent for children than adults. However, signing-adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking- and signing-children benefit from iconic expressions in visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development. -
Kocsis, K., Düngen, D., Jadoul, Y., & Ravignani, A. (2024). Harbour seals use rhythmic percussive signalling in interaction and display. Animal Behaviour, 207, 223-234. doi:10.1016/j.anbehav.2023.09.014.
Abstract
Multimodal rhythmic signalling abounds across animal taxa. Studying its mechanisms and functions can highlight adaptive components in highly complex rhythmic behaviours, like dance and music. Pinnipeds, such as the harbour seal, Phoca vitulina, are excellent comparative models to assess rhythmic capacities. Harbour seals engage in rhythmic percussive behaviours which, until now, have not been described in detail. In our study, eight zoo-housed harbour seals (two pups, two juveniles and four adults) were passively monitored by audio and video during their pupping/breeding season. All juvenile and adult animals performed percussive signalling with their fore flippers in agonistic conditions, both on land and in water. Flipper slap sequences produced on the ground or on the seals' bodies were often highly regular in their interval duration, that is, were quasi-isochronous, at a 200–600 beats/min pace. Three animals also showed significant lateralization in slapping. In contrast to slapping on land, display slapping in water, performed only by adult males, showed slower tempo by one order of magnitude, and a rather motivic temporal structure. Our work highlights that percussive communication is a significant part of harbour seals' behavioural repertoire. We hypothesize that its forms of rhythm production may reflect adaptive functions such as regulating internal states and advertising individual traits. -
Koutamanis, E. (2024). Spreading the word: Cross-linguistic influence in the bilingual child's lexicon. PhD Thesis, Radboud University, Nijmegen.
Additional information
full text via Radboud Repository -
Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2024). Cognate facilitation in single- and dual-language contexts in bilingual children’s word processing. Linguistic Approaches to Bilingualism, 14(4), 577-608. doi:10.1075/lab.23009.kou.
Abstract
We examined the extent to which cognate facilitation effects occurred in simultaneous bilingual children’s production and comprehension and how these were modulated by language dominance and language context. Bilingual Dutch-German children, ranging from Dutch-dominant to German-dominant, performed picture naming and auditory lexical decision tasks in single-language and dual-language contexts. Language context was manipulated with respect to the language of communication (with the experimenter and in instructional videos) and by means of proficiency tasks. Cognate facilitation effects emerged in both production and comprehension and interacted with both dominance and context. In a single-language context, stronger cognate facilitation effects were found for picture naming in children’s less dominant language, in line with previous studies on individual differences in lexical activation. In the dual-language context, this pattern was reversed, suggesting inhibition of the dominant language at the decision level. Similar effects were observed in lexical decision. These findings provide evidence for an integrated bilingual lexicon in simultaneous bilingual children and shed more light on the complex interplay between lexicon-internal and lexicon-external factors modulating the extent of lexical cross-linguistic influence more generally. -
Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2024). Cross-linguistic influence in the simultaneous bilingual child's lexicon: An eye-tracking and primed picture selection study. Bilingualism: Language and Cognition, 27(3), 377-387. doi:10.1017/S136672892300055X.
Abstract
In a between-language lexical priming study, we examined to what extent the two languages in a simultaneous bilingual child's lexicon interact, while taking individual differences in language exposure into account. Primary-school-aged Dutch–Greek bilinguals performed a primed picture selection task combined with eye-tracking. They matched pictures to auditorily presented Dutch target words preceded by Greek prime words. Their reaction times and eye movements were recorded. We tested for effects of between-language phonological priming, translation priming, and phonological priming through translation. Priming effects emerged in reaction times and eye movements in all three conditions, at different stages of processing, and unaffected by language exposure. These results extend previous findings for bilingual toddlers and bilingual adults. Processing similarities between these populations indicate that, across different stages of development, bilinguals have an integrated lexicon that is accessed in a language-nonselective way and is susceptible to interactions within and between different types of lexical representation.
-
Mamus, E. (2024). Perceptual experience shapes how blind and sighted people express concepts in multimodal language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Mazzini, S., Yadnik, S., Timmers, I., Rubio-Gozalbo, E., & Jansma, B. M. (2024). Altered neural oscillations in classical galactosaemia during sentence production. Journal of Inherited Metabolic Disease, 47(4), 575-833. doi:10.1002/jimd.12740.
Abstract
Classical galactosaemia (CG) is a hereditary disease in galactose metabolism that despite dietary treatment is characterized by a wide range of cognitive deficits, among which is language production. CG brain functioning has been studied with several neuroimaging techniques, which revealed both structural and functional atypicalities. In the present study, for the first time, we compared the oscillatory dynamics, especially the power spectrum and time–frequency representations (TFR), in the electroencephalography (EEG) of CG patients and healthy controls while they were performing a language production task. Twenty-one CG patients and 19 healthy controls described animated scenes, either in full sentences or in words, indicating two levels of complexity in syntactic planning. Based on previous work on the P300 event related potential (ERP) and its relation with theta frequency, we hypothesized that the oscillatory activity of patients and controls would differ in theta power and TFR. With regard to behavior, reaction times showed that patients are slower, reflecting the language deficit. In the power spectrum, we observed significantly higher power in patients in delta (1–3 Hz), theta (4–7 Hz), beta (15–30 Hz) and gamma (30–70 Hz) frequencies, but not in alpha (8–12 Hz), suggesting an atypical oscillatory profile. The time-frequency analysis revealed significantly weaker event-related theta synchronization (ERS) and alpha desynchronization (ERD) in patients in the sentence condition. The data support the hypothesis that CG language difficulties relate to theta–alpha brain oscillations.
Additional information
table S1 and S2 -
Mickan, A., Slesareva, E., McQueen, J. M., & Lemhöfer, K. (2024). New in, old out: Does learning a new language make you forget previously learned foreign languages? Quarterly Journal of Experimental Psychology, 77(3), 530-550. doi:10.1177/17470218231181380.
Abstract
Anecdotal evidence suggests that learning a new foreign language (FL) makes you forget previously learned FLs. To seek empirical evidence for this claim, we tested whether learning words in a previously unknown L3 hampers subsequent retrieval of their L2 translation equivalents. In two experiments, Dutch native speakers with knowledge of English (L2), but not Spanish (L3), first completed an English vocabulary test, based on which 46 participant-specific, known English words were chosen. Half of those were then learned in Spanish. Finally, participants’ memory for all 46 English words was probed again in a picture naming task. In Experiment 1, all tests took place within one session. In Experiment 2, we separated the English pre-test from Spanish learning by a day and manipulated the timing of the English post-test (immediately after learning vs. 1 day later). By separating the post-test from Spanish learning, we asked whether consolidation of the new Spanish words would increase their interference strength. We found significant main effects of interference in naming latencies and accuracy: Participants sped up less and were less accurate when recalling words in English for which they had learned Spanish translations, compared with words for which they had not. Consolidation time did not significantly affect these interference effects. Thus, learning a new language indeed comes at the cost of subsequent retrieval ability in other FLs. Such interference effects set in immediately after learning and do not need time to emerge, even when the other FL has been known for a long time.
Additional information
supplementary material -
Mooijman, S. (2024). Control of language in bilingual speakers with and without aphasia. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Mooijman, S., Schoonen, R., Roelofs, A., & Ruiter, M. B. (2024). Benefits of free language choice in bilingual individuals with aphasia. Aphasiology, 38(11), 1793-1831. doi:10.1080/02687038.2024.2326239.
Abstract
Background
Forced switching between languages poses demands on control abilities, which may be difficult to meet for bilinguals with aphasia. Freely choosing languages has been shown to increase naming efficiency in healthy bilinguals, and lexical accessibility was found to be a predictor for language choice. The overlap between bilingual language switching and other types of switching is yet unclear.
Aims
This study aimed to examine the benefits of free language choice for bilinguals with aphasia and to investigate the overlap of between- and within-language switching abilities.
Methods & Procedures
Seventeen bilinguals with aphasia completed a questionnaire and four web-based picture naming tasks: single-language naming in the first and second language separately; voluntary switching between languages; cued and predictable switching between languages; cued and predictable switching between phrase types in the first language. Accuracy and naming latencies were analysed using (generalised) linear mixed-effects models.
Outcomes & Results
The results showed higher accuracy and faster naming for the voluntary switching condition compared to single-language naming and cued switching. Both voluntary and cued language switching yielded switch costs, and voluntary switch costs were larger. Ease of lexical access was a reliable predictor for voluntary language choice. We obtained no statistical evidence for differences or associations between switch costs in between- and within-language switching.
Conclusions
Several results point to benefits of voluntary language switching for bilinguals with aphasia. Freely mixing languages improved naming accuracy and speed, and ease of lexical access affected language choice. There was no statistical evidence for overlap of between- and within-language switching abilities. This study highlights the benefits of free language choice for bilinguals with aphasia. -
Mooijman, S., Schoonen, R., Ruiter, M. B., & Roelofs, A. (2024). Voluntary and cued language switching in late bilingual speakers. Bilingualism: Language and Cognition, 27(4), 610-627. doi:10.1017/S1366728923000755.
Abstract
Previous research examining the factors that determine language choice and voluntary switching mainly involved early bilinguals. Here, using picture naming, we investigated language choice and switching in late Dutch–English bilinguals. We found that naming was overall slower in cued than in voluntary switching, but switch costs occurred in both types of switching. The magnitude of switch costs differed depending on the task and language, and was moderated by L2 proficiency. Self-rated rather than objectively assessed proficiency predicted voluntary switching and ease of lexical access was associated with language choice. Between-language and within-language switch costs were not correlated. These results highlight self-rated proficiency as a reliable predictor of voluntary switching, with language modulating switch costs. As in early bilinguals, ease of lexical access was related to word-level language choice of late bilinguals. -
Papoutsi*, C., Zimianiti*, E., Bosker, H. R., & Frost, R. L. A. (2024). Statistical learning at a virtual cocktail party. Psychonomic Bulletin & Review, 31, 849-861. doi:10.3758/s13423-023-02384-1.
Abstract
* These two authors contributed equally to this study
Statistical learning – the ability to extract distributional regularities from input – is suggested to be key to language acquisition. Yet, evidence for the human capacity for statistical learning comes mainly from studies conducted in carefully controlled settings without auditory distraction. While such conditions permit careful examination of learning, they do not reflect the naturalistic language learning experience, which is replete with auditory distraction – including competing talkers. Here, we examine how statistical language learning proceeds in a virtual cocktail party environment, where the to-be-learned input is presented alongside a competing speech stream with its own distributional regularities. During exposure, participants in the Dual Talker group concurrently heard two novel languages, one produced by a female talker and one by a male talker, with each talker virtually positioned at opposite sides of the listener (left/right) using binaural acoustic manipulations. Selective attention was manipulated by instructing participants to attend to only one of the two talkers. At test, participants were asked to distinguish words from part-words for both the attended and the unattended languages. Results indicated that participants’ accuracy was significantly higher for trials from the attended vs. unattended language. Further, the performance of this Dual Talker group was no different compared to a control group who heard only one language from a single talker (Single Talker group). We thus conclude that statistical learning is modulated by selective attention, being relatively robust against the additional cognitive load provided by competing speech, emphasizing its efficiency in naturalistic language learning situations.
Additional information
supplementary file -
Peirolo, M., Meyer, A. S., & Frances, C. (2024). Investigating the causes of prosodic marking in self-repairs: An automatic process? In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 1080-1084). doi:10.21437/SpeechProsody.2024-218.
Abstract
Natural speech involves repair. These repairs are often highlighted through prosodic marking (Levelt & Cutler, 1983). Prosodic marking usually entails an increase in pitch, loudness, and/or duration that draws attention to the corrected word. While it is established that natural self-repairs typically elicit prosodic marking, the exact cause of this is unclear. This study investigates whether producing a prosodic marking emerges from an automatic correction process or has a communicative purpose. In the current study, we elicit corrections to test whether all self-corrections elicit prosodic marking. Participants carried out a picture-naming task in which they described two images presented on-screen. To prompt self-correction, the second image was altered in some cases, requiring participants to abandon their initial utterance and correct their description to match the new image. This manipulation was compared to a control condition in which only the orientation of the object would change, eliciting no self-correction while still presenting a visual change. We found that the replacement of the item did not elicit a prosodic marking, regardless of the type of change. Theoretical implications and research directions are discussed, in particular theories of prosodic planning. -
Quaresima, A. (2024). A Bridge not too far: Neurobiological causal models of word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
de Reus, K., Benítez-Burraco, A., Hersh, T. A., Groot, N., Lambert, M. L., Slocombe, K. E., Vernes, S. C., & Raviv, L. (2024). Self-domestication traits in vocal learning mammals. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 105-108). Nijmegen: The Evolution of Language Conferences. -
Rohrer, P. L., Bujok, R., Van Maastricht, L., & Bosker, H. R. (2024). The timing of beat gestures affects lexical stress perception in Spanish. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 702-706). doi:10.21437/SpeechProsody.2024-142.
Abstract
It has been shown that when speakers produce hand gestures, addressees are attentive towards these gestures, using them to facilitate speech processing. Even relatively simple “beat” gestures are taken into account to help process aspects of speech such as prosodic prominence. In fact, recent evidence suggests that the timing of a beat gesture can influence spoken word recognition. Termed the manual McGurk Effect, Dutch participants, when presented with lexical stress minimal pair continua in Dutch, were biased to hear lexical stress on the syllable that coincided with a beat gesture. However, little is known about how this manual McGurk effect would surface in languages other than Dutch, with different acoustic cues to prominence, and variable gestures. Therefore, this study tests the effect in Spanish where lexical stress is arguably even more important, being a contrastive cue in the regular verb conjugation system. Results from 24 participants corroborate the effect in Spanish, namely that when given the same auditory stimulus, participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture. These findings extend the manual McGurk effect to a different language, emphasizing the impact of gestures' timing on prosody perception and spoken word recognition. -
Roos, N. M., Chauvet, J., & Piai, V. (2024). The Concise Language Paradigm (CLaP), a framework for studying the intersection of comprehension and production: Electrophysiological properties. Brain Structure and Function, 229, 2097-2113. doi:10.1007/s00429-024-02801-8.
Abstract
Studies investigating language commonly isolate one modality or process, focusing on comprehension or production. Here, we present a framework for a paradigm that combines both: the Concise Language Paradigm (CLaP), tapping into comprehension and production within one trial. The trial structure is identical across conditions, presenting a sentence followed by a picture to be named. We tested 21 healthy speakers with EEG to examine three time periods during a trial (sentence, pre-picture interval, picture onset), yielding contrasts of sentence comprehension, contextually and visually guided word retrieval, object recognition, and naming. In the CLaP, sentences are presented auditorily (constrained, unconstrained, reversed), and pictures appear as normal (constrained, unconstrained, bare) or scrambled objects. Imaging results revealed different evoked responses after sentence onset for normal and time-reversed speech. Further, we replicated the context effect of alpha-beta power decreases before picture onset for constrained relative to unconstrained sentences, and could clarify that this effect arises from power decreases following constrained sentences. Brain responses locked to picture-onset differed as a function of sentence context and picture type (normal vs. scrambled), and naming times were fastest for pictures in constrained sentences, followed by scrambled picture naming, and equally fast for bare and unconstrained picture naming. Finally, we also discuss the potential of the CLaP to be adapted to different focuses, using different versions of the linguistic content and tasks, in combination with electrophysiology or other imaging methods. These first results of the CLaP indicate that this paradigm offers a promising framework to investigate the language system. -
Sander, J., Çetinçelik, M., Zhang, Y., Rowland, C. F., & Harmon, Z. (2024). Why does joint attention predict vocabulary acquisition? The answer depends on what coding scheme you use. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1607-1613).
Abstract
Despite decades of study, we still know less than we would like about the association between joint attention (JA) and language acquisition. This is partly because of disagreements on how to operationalise JA. In this study, we examine the impact of applying two different, influential JA operationalisation schemes to the same dataset of child-caregiver interactions, to determine which yields a better fit to children's later vocabulary size. Two coding schemes— one defining JA in terms of gaze overlap and one in terms of social aspects of shared attention—were applied to video-recordings of dyadic naturalistic toy-play interactions (N=45). We found that JA was predictive of later production vocabulary when operationalised as shared focus (study 1), but also that its operationalisation as shared social awareness increased its predictive power (study 2). Our results emphasise the critical role of methodological choices in understanding how and why JA is associated with vocabulary size. -
Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.
Abstract
Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers. -
Severijnen, G. G. A., Gärtner, V. M., Walther, R. F. E., & McQueen, J. M. (2024). Talker-specific perceptual learning about lexical stress: Stability over time. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 657-661). doi:10.21437/SpeechProsody.2024-133.
Abstract
Talkers vary in how they speak, resulting in acoustic variability in segments and prosody. Previous studies showed that listeners deal with segmental variability through perceptual learning and that these learning effects are stable over time. The present study examined whether this is also true for lexical stress variability. Listeners heard Dutch minimal pairs (e.g., VOORnaam vs. voorNAAM, ‘first name’ vs. ‘respectable’) spoken by two talkers. Half of the participants heard Talker 1 using only F0 to signal lexical stress and Talker 2 using only intensity. The other half heard the reverse. After a learning phase, participants were tested on words spoken by these talkers with conflicting stress cues (‘mixed items’; e.g., Talker 1 saying voornaam with F0 signaling initial stress and intensity signaling final stress). We found that, despite the conflicting cues, listeners perceived these items following what they had learned. For example, participants hearing the example mixed item described above who had learned that Talker 1 used F0 perceived initial stress (VOORnaam) but those who had learned that Talker 1 used intensity perceived final stress (voorNAAM). Crucially, this result was still present in a delayed test phase, showing that talker-specific learning about lexical stress is stable over time. -
Slaats, S. (2024). On the interplay between lexical probability and syntactic structure in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Slaats, S., Meyer, A. S., & Martin, A. E. (2024). Lexical surprisal shapes the time course of syntactic structure building. Neurobiology of Language, 5(4), 942-980. doi:10.1162/nol_a_00155.
Abstract
When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data of 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high- and low-surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high-surprisal values were associated with a delayed response to the syntactic feature by 150–190 ms. The delay was not affected by word duration, and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and highlight the importance of time in this process.
Additional information
supplementary data -
Sommers, R. P. (2024). Neurobiology of reference. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Stärk, K. (2024). The company language keeps: How distributional cues influence statistical learning for language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.
Abstract
During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation. -
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.
Abstract
Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation. -
Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.
Abstract
The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction. -
Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.
Abstract
Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production. -
Titus, A., & Peeters, D. (2024). Multilingualism at the market: A pre-registered immersive virtual reality study of bilingual language switching. Journal of Cognition, 7(1), 24-35. doi:10.5334/joc.359.
Abstract
Bilinguals, by definition, are capable of expressing themselves in more than one language. But which cognitive mechanisms allow them to switch from one language to another? Previous experimental research using the cued language-switching paradigm supports theoretical models that assume that both transient, reactive and sustained, proactive inhibitory mechanisms underlie bilinguals’ capacity to flexibly and efficiently control which language they use. Here we used immersive virtual reality to test the extent to which these inhibitory mechanisms may be active when unbalanced Dutch-English bilinguals i) produce full sentences rather than individual words, ii) to a life-size addressee rather than only into a microphone, iii) using a message that is relevant to that addressee rather than communicatively irrelevant, iv) in a rich visual environment rather than in front of a computer screen. We observed a reversed language dominance paired with switch costs for the L2 but not for the L1 when participants were stand owners in a virtual marketplace and informed their monolingual customers in full sentences about the price of their fruits and vegetables. These findings strongly suggest that the subtle balance between the application of reactive and proactive inhibitory mechanisms that support bilingual language control may be different in the everyday life of a bilingual compared to in the (traditional) psycholinguistic laboratory. -
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).
Abstract
The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception. -
Ünal, E., Mamus, E., & Özyürek, A. (2024). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition, 16(4), 785-804. doi:10.1017/langcog.2023.61.
Abstract
How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner. -
Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.
Abstract
Background
The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).
Methods
We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.
Results
Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.
Conclusions
The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits. -
Zhao, J., Martin, A. E., & Coopmans, C. W. (2024). Structural and sequential regularities modulate phrase-rate neural tracking. Scientific Reports, 14: 16603. doi:10.1038/s41598-024-67153-z.
Abstract
Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech—so-called phrase-rate neural tracking. Current debate centers around whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of EEG responses. As expected, neural tracking at the phrase rate was stronger in grammatical sequences than in non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation in guiding rhythmic computation.
Additional information
full stimulus list, the raw EEG data, and the analysis scripts -
Arana, S., Marquand, A., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2020). Sensory modality-independent activation of the brain network for language. The Journal of Neuroscience, 40(14), 2914-2924. doi:10.1523/JNEUROSCI.2271-19.2020.
Abstract
The meaning of a sentence can be understood, whether presented in written or spoken form. Therefore it is highly probable that brain processes supporting language comprehension are at least partly independent of sensory modality. To identify where and when in the brain language processing is independent of sensory modality, we directly compared neuromagnetic brain signals of 200 human subjects (102 males) either reading or listening to sentences. We used multiset canonical correlation analysis to align individual subject data in a way that boosts those aspects of the signal that are common to all, allowing us to capture word-by-word signal variations, consistent across subjects and at a fine temporal scale. Quantifying this consistency in activation across both reading and listening tasks revealed a mostly left hemispheric cortical network. Areas showing consistent activity patterns include not only areas previously implicated in higher-level language processing, such as left prefrontal, superior & middle temporal areas and anterior temporal lobe, but also parts of the control-network as well as subcentral and more posterior temporal-parietal areas. Activity in this supramodal sentence processing network starts in temporal areas and rapidly spreads to the other regions involved. The findings do not only indicate the involvement of a large network of brain areas in supramodal language processing, but also indicate that the linguistic information contained in the unfolding sentences modulates brain activity in a word-specific manner across subjects. -
Azar, Z. (2020). Effect of language contact on speech and gesture: The case of Turkish-Dutch bilinguals in the Netherlands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Baranova, J. (2020). Reasons for every-day activities. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Barthel, M. (2020). Speech planning in dialogue: Psycholinguistic studies of the timing of turn taking. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Barthel, M., & Levinson, S. C. (2020). Next speakers plan word forms in overlap with the incoming turn: Evidence from gaze-contingent switch task performance. Language, Cognition and Neuroscience, 35(9), 1183-1202. doi:10.1080/23273798.2020.1716030.
Abstract
To ensure short gaps between turns in conversation, next speakers regularly start planning their utterance in overlap with the incoming turn. Three experiments investigate which stages of utterance planning are executed in overlap. E1 establishes effects of associative and phonological relatedness of pictures and words in a switch-task from picture naming to lexical decision. E2 focuses on effects of phonological relatedness and investigates potential shifts in the time-course of production planning during background speech. E3 required participants to verbally answer questions as a base task. In critical trials, however, participants switched to visual lexical decision just after they began planning their answer. The task-switch was time-locked to participants' gaze for response planning. Results show that word form encoding is done as early as possible and not postponed until the end of the incoming turn. Hence, planning a response during the incoming turn is executed at least until word form activation.
Additional information
Supplemental material -
Bosma, E., & Nota, N. (2020). Cognate facilitation in Frisian-Dutch bilingual children’s sentence reading: An eye-tracking study. Journal of Experimental Child Psychology, 189: 104699. doi:10.1016/j.jecp.2019.104699.
-
Bouhali, F., Mongelli, V., Thiebaut de Schotten, M., & Cohen, L. (2020). Reading music and words: The anatomical connectivity of musicians’ visual cortex. NeuroImage, 212: 116666. doi:10.1016/j.neuroimage.2020.116666.
Abstract
Musical score reading and word reading have much in common, from their historical origins to their cognitive foundations and neural correlates. In the ventral occipitotemporal cortex (VOT), the specialization of the so-called Visual Word Form Area for word reading has been linked to its privileged structural connectivity to distant language regions. Here we investigated how anatomical connectivity relates to the segregation of regions specialized for musical notation or words in the VOT. In a cohort of professional musicians and non-musicians, we used probabilistic tractography combined with task-related functional MRI to identify the connections of individually defined word- and music-selective left VOT regions. Despite their close proximity, these regions differed significantly in their structural connectivity, irrespective of musical expertise. The music-selective region was significantly more connected to posterior lateral temporal regions than the word-selective region, which, conversely, was significantly more connected to anterior ventral temporal cortex. Furthermore, musical expertise had a double impact on the connectivity of the music region. First, music tracts were significantly larger in musicians than in non-musicians, associated with marginally higher connectivity to perisylvian music-related areas. Second, the spatial similarity between music and word tracts was significantly increased in musicians, consistently with the increased overlap of language and music functional activations in musicians, as compared to non-musicians. These results support the view that, for music as for words, very specific anatomical connections influence the specialization of distinct VOT areas, and that reciprocally those connections are selectively enhanced by the expertise for word or music reading.
Additional information
Supplementary data -
Connaughton, D. M., Dai, R., Owen, D. J., Marquez, J., Mann, N., Graham-Paquin, A. L., Nakayama, M., Coyaud, E., Laurent, E. M. N., St-Germain, J. R., Snijders Blok, L., Vino, A., Klämbt, V., Deutsch, K., Wu, C.-H.-W., Kolvenbach, C. M., Kause, F., Ottlewski, I., Schneider, R., Kitzler, T. M., Majmundar, A. J., Buerger, F., Onuchic-Whitford, A. C., Youying, M., Kolb, A., Salmanullah, D., Chen, E., Van der Ven, A. T., Rao, J., Ityel, H., Seltzsam, S., Rieke, J. M., Chen, J., Vivante, A., Hwang, D.-Y., Kohl, S., Dworschak, G. C., Hermle, T., Alders, M., Bartolomaeus, T., Bauer, S. B., Baum, M. A., Brilstra, E. H., Challman, T. D., Zyskind, J., Costin, C. E., Dipple, K. M., Duijkers, F. A., Ferguson, M., Fitzpatrick, D. R., Fick, R., Glass, I. A., Hulick, P. J., Kline, A. D., Krey, I., Kumar, S., Lu, W., Marco, E. J., Wentzensen, I. M., Mefford, H. C., Platzer, K., Povolotskaya, I. S., Savatt, J. M., Shcherbakova, N. V., Senguttuvan, P., Squire, A. E., Stein, D. R., Thiffault, I., Voinova, V. Y., Somers, M. J. G., Ferguson, M. A., Traum, A. Z., Daouk, G. H., Daga, A., Rodig, N. M., Terhal, P. A., Van Binsbergen, E., Eid, L. A., Tasic, V., Rasouly, H. M., Lim, T. Y., Ahram, D. F., Gharavi, A. G., Reutter, H. M., Rehm, H. L., MacArthur, D. G., Lek, M., Laricchia, K. M., Lifton, R. P., Xu, H., Mane, S. M., Sanna-Cherchi, S., Sharrocks, A. D., Raught, B., Fisher, S. E., Bouchard, M., Khokha, M. K., Shril, S., & Hildebrandt, F. (2020). Mutations of the transcriptional corepressor ZMYM2 cause syndromic urinary tract malformations. The American Journal of Human Genetics, 107(4), 727-742. doi:10.1016/j.ajhg.2020.08.013.
Abstract
Congenital anomalies of the kidney and urinary tract (CAKUT) constitute one of the most frequent birth defects and represent the most common cause of chronic kidney disease in the first three decades of life. Despite the discovery of dozens of monogenic causes of CAKUT, most pathogenic pathways remain elusive. We performed whole-exome sequencing (WES) in 551 individuals with CAKUT and identified a heterozygous de novo stop-gain variant in ZMYM2 in two different families with CAKUT. Through collaboration, we identified in total 14 different heterozygous loss-of-function mutations in ZMYM2 in 15 unrelated families. Most mutations occurred de novo, indicating possible interference with reproductive function. Human disease features are replicated in X. tropicalis larvae with morpholino knockdowns, in which expression of truncated ZMYM2 proteins, based on individual mutations, failed to rescue renal and craniofacial defects. Moreover, heterozygous Zmym2-deficient mice recapitulated features of CAKUT with high penetrance. The ZMYM2 protein is a component of a transcriptional corepressor complex recently linked to the silencing of developmentally regulated endogenous retrovirus elements. Using protein-protein interaction assays, we show that ZMYM2 interacts with additional epigenetic silencing complexes, as well as confirming that it binds to FOXP1, a transcription factor that has also been linked to CAKUT. In summary, our findings establish that loss-of-function mutations of ZMYM2, and potentially that of other proteins in its interactome, as causes of human CAKUT, offering new routes for studying the pathogenesis of the disorder. -
Coopmans, C. W., & Schoenmakers, G.-J. (2020). Incremental structure building of preverbal PPs in Dutch. Linguistics in the Netherlands, 37(1), 38-52. doi:10.1075/avt.00036.coo.
Abstract
Incremental comprehension of head-final constructions can reveal structural attachment preferences for ambiguous phrases. This study investigates how temporarily ambiguous PPs are processed in Dutch verb-final constructions. In De aannemer heeft op het dakterras bespaard/gewerkt ‘The contractor has on the roof terrace saved/worked’, the PP is locally ambiguous between attachment as argument and as adjunct. This ambiguity is resolved by the sentence-final verb. In a self-paced reading task, we manipulated the argument/adjunct status of the PP, and its position relative to the verb. While we found no reading-time differences between argument and adjunct PPs, we did find that transitive verbs, for which the PP is an argument, were read more slowly than intransitive verbs, for which the PP is an adjunct. We suggest that Dutch parsers have a preference for adjunct attachment of preverbal PPs, and discuss our findings in terms of incremental parsing models that aim to minimize costly reanalysis.
-
Coopmans, C. W., & Nieuwland, M. S. (2020). Dissociating activation and integration of discourse referents: Evidence from ERPs and oscillations. Cortex, 126, 83-106. doi:10.1016/j.cortex.2019.12.028.
Abstract
A key challenge in understanding stories and conversations is the comprehension of ‘anaphora’, words that refer back to previously mentioned words or concepts (‘antecedents’). In psycholinguistic theories, anaphor comprehension involves the initial activation of the antecedent and its subsequent integration into the unfolding representation of the narrated event. A recent proposal suggests that these processes draw upon the brain’s recognition memory and language networks, respectively, and may be dissociable in patterns of neural oscillatory synchronization (Nieuwland & Martin, 2017). We addressed this proposal in an electroencephalogram (EEG) study with pre-registered data acquisition and analyses, using event-related potentials (ERPs) and neural oscillations. Dutch participants read two-sentence mini stories containing proper names, which were repeated or new (ease of activation) and semantically coherent or incoherent with the preceding discourse (ease of integration). Repeated names elicited lower N400 and Late Positive Component amplitude than new names, and also an increase in theta-band (4-7 Hz) synchronization, which was largest around 240-450 ms after name onset. Discourse-coherent names elicited an increase in gamma-band (60-80 Hz) synchronization compared to discourse-incoherent names. This effect was largest around 690-1000 ms after name onset and exploratory beamformer analysis suggested a left frontal source. We argue that the initial activation and subsequent discourse-level integration of referents can be dissociated with event-related EEG activity, and are associated with respectively theta- and gamma-band activity. These findings further establish the link between memory and language through neural oscillations.
Additional information
materials, data, and analysis scripts -
Den Hoed, J., & Fisher, S. E. (2020). Genetic pathways involved in human speech disorders. Current Opinion in Genetics & Development, 65, 103-111. doi:10.1016/j.gde.2020.05.012.
-
Drijvers, L., & Özyürek, A. (2020). Non-native listeners benefit less from gestures and visible speech than native listeners during degraded speech comprehension. Language and Speech, 63(2), 209-220. doi:10.1177/0023830919831311.
Abstract
Native listeners benefit from both visible speech and iconic gestures to enhance degraded speech comprehension (Drijvers & Ozyürek, 2017). We tested how highly proficient non-native listeners benefit from these visual articulators compared to native listeners. We presented videos of an actress uttering a verb in clear, moderately, or severely degraded speech, while her lips were blurred, visible, or visible and accompanied by a gesture. Our results revealed that unlike native listeners, non-native listeners were less likely to benefit from the combined enhancement of visible speech and gestures, especially since the benefit from visible speech was minimal when the signal quality was not sufficient. -
Eekhof, L. S., Van Krieken, K., & Sanders, J. (2020). VPIP: A lexical identification procedure for perceptual, cognitive, and emotional viewpoint in narrative discourse. Open Library of Humanities, 6(1): 18. doi:10.16995/olh.483.
Abstract
Although previous work on viewpoint techniques has shown that viewpoint is ubiquitous in narrative discourse, approaches to identify and analyze the linguistic manifestations of viewpoint are currently scattered over different disciplines and dominated by qualitative methods. This article presents the ViewPoint Identification Procedure (VPIP), the first systematic method for the lexical identification of markers of perceptual, cognitive and emotional viewpoint in narrative discourse. Use of this step-wise procedure is facilitated by a large appendix of Dutch viewpoint markers. After the introduction of the procedure and discussion of some special cases, we demonstrate its application by discussing three types of narrative excerpts: a literary narrative, a news narrative, and an oral narrative. Applying the identification procedure to the full news narrative, we show that the VPIP can be reliably used to detect viewpoint markers in long stretches of narrative discourse. As such, the systematic identification of viewpoint has the potential to benefit both established viewpoint scholars and researchers from other fields interested in the analytical and experimental study of narrative and viewpoint. Such experimental studies could complement qualitative studies, ultimately advancing our theoretical understanding of the relation between the linguistic presentation and cognitive processing of viewpoint. Suggestions for elaboration of the VPIP, particularly in the realm of pragmatic viewpoint marking, are formulated in the final part of the paper.
Additional information
appendix -
Egger, J., Rowland, C. F., & Bergmann, C. (2020). Improving the robustness of infant lexical processing speed measures. Behavior Research Methods, 52, 2188-2201. doi:10.3758/s13428-020-01385-5.
Abstract
Visual reaction times to target pictures after naming events are an informative measurement in language acquisition research, because gaze shifts measured in looking-while-listening paradigms are an indicator of infants’ lexical speed of processing. This measure is very useful, as it can be applied from a young age onwards and has been linked to later language development. However, to obtain valid reaction times, the infant is required to switch the fixation of their eyes from a distractor to a target object. This means that usually at least half the trials have to be discarded—those where the participant is already fixating the target at the onset of the target word—so that no reaction time can be measured. With few trials, reliability suffers, which is especially problematic when studying individual differences. In order to solve this problem, we developed a gaze-triggered looking-while-listening paradigm. The trials do not differ from the original paradigm apart from the fact that the target object is chosen depending on the infant’s eye fixation before naming. The object the infant is looking at becomes the distractor and the other object is used as the target, requiring a fixation switch, and thus providing a reaction time. We tested our paradigm with forty-three 18-month-old infants, comparing the results to those from the original paradigm. The Gaze-triggered paradigm yielded more valid reaction time trials, as anticipated. The results of a ranked correlation between the conditions confirmed that the manipulated paradigm measures the same concept as the original paradigm. -
Eijk, L., Fletcher, A., McAuliffe, M., & Janse, E. (2020). The effects of word frequency and word probability on speech rhythm in dysarthria. Journal of Speech, Language, and Hearing Research, 63, 2833-2845. doi:10.1044/2020_JSLHR-19-00389.
Abstract
Purpose
In healthy speakers, the more frequent and probable a word is in its context, the shorter the word tends to be. This study investigated whether these probabilistic effects were similarly sized for speakers with dysarthria of different severities.
Method
Fifty-six speakers of New Zealand English (42 speakers with dysarthria and 14 healthy speakers) were recorded reading the Grandfather Passage. Measurements of word duration, frequency, and transitional word probability were taken.
Results
As hypothesized, words with a higher frequency and probability tended to be shorter in duration. There was also a significant interaction between word frequency and speech severity. This indicated that the more severe the dysarthria, the smaller the effects of word frequency on speakers' word durations. Transitional word probability also interacted with speech severity, but did not account for significant unique variance in the full model.
Conclusions
These results suggest that, as the severity of dysarthria increases, the duration of words is less affected by probabilistic variables. These findings may be due to reductions in the control and execution of muscle movement exhibited by speakers with dysarthria. -
Ergin, R., Raviv, L., Senghas, A., Padden, C., & Sandler, W. (2020). Community structure affects convergence on uniform word orders: Evidence from emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 84-86). Nijmegen: The Evolution of Language Conferences. -
Faber, M., Mak, M., & Willems, R. M. (2020). Word skipping as an indicator of individual reading style during literary reading. Journal of Eye Movement Research, 13(3): 2. doi:10.16910/jemr.13.3.2.
Abstract
Decades of research have established that the content of language (e.g. lexical characteristics of words) predicts eye movements during reading. Here we investigate whether there exist individual differences in ‘stable’ eye movement patterns during narrative reading. We computed Euclidean distances from correlations between gaze durations time courses (word level) across 102 participants who each read three literary narratives in Dutch. The resulting distance matrices were compared between narratives using a Mantel test. The results show that correlations between the scaling matrices of different narratives are relatively weak (r ≤ .11) when missing data points are ignored. However, when including these data points as zero durations (i.e. skipped words), we found significant correlations between stories (r > .51). Word skipping was significantly positively associated with print exposure but not with self-rated attention and story-world absorption, suggesting that more experienced readers are more likely to skip words, and do so in a comparable fashion. We interpret this finding as suggesting that word skipping might be a stable individual eye movement pattern. -
Favier, S. (2020). Individual differences in syntactic knowledge and processing: Exploring the role of literacy experience. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Gerakaki, S. (2020). The moment in between: Planning speech while listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Gilbers, S., Hoeksema, N., De Bot, K., & Lowie, W. (2020). Regional variation in West and East Coast African-American English prosody and rap flows. Language and Speech, 63(4), 713-745. doi:10.1177/0023830919881479.
Abstract
Regional variation in African-American English (AAE) is especially salient to its speakers involved with hip-hop culture, as hip-hop assigns great importance to regional identity and regional accents are a key means of expressing regional identity. However, little is known about AAE regional variation regarding prosodic rhythm and melody. In hip-hop music, regional variation can also be observed, with different regions’ rap performances being characterized by distinct “flows” (i.e., rhythmic and melodic delivery), an observation which has not been quantitatively investigated yet. This study concerns regional variation in AAE speech and rap, specifically regarding the United States’ East and West Coasts. It investigates how East Coast and West Coast AAE prosody are distinct, how East Coast and West Coast rap flows differ, and whether the two domains follow a similar pattern: more rhythmic and melodic variation on the West Coast compared to the East Coast for both speech and rap. To this end, free speech and rap recordings of 16 prominent African-American members of the East Coast and West Coast hip-hop communities were phonetically analyzed regarding rhythm (e.g., syllable isochrony and musical timing) and melody (i.e., pitch fluctuation) using a combination of existing and novel methodological approaches. The results mostly confirm the hypotheses that East Coast AAE speech and rap are less rhythmically diverse and more monotone than West Coast AAE speech and rap, respectively. They also show that regional variation in AAE prosody and rap flows pattern in similar ways, suggesting a connection between rhythm and melody in language and music. -
González Alonso, J., Alemán Bañón, J., DeLuca, V., Miller, D., Pereira Soares, S. M., Puig-Mayenco, E., Slaats, S., & Rothman, J. (2020). Event related potentials at initial exposure in third language acquisition: Implications from an artificial mini-grammar study. Journal of Neurolinguistics, 56: 100939. doi:10.1016/j.jneuroling.2020.100939.
Abstract
The present article examines the proposal that typology is a major factor guiding transfer selectivity in L3/Ln acquisition. We tested first exposure in L3/Ln using two artificial languages (ALs) lexically based in English and Spanish, focusing on gender agreement between determiners and nouns, and between nouns and adjectives. 50 L1 Spanish-L2 English speakers took part in the experiment. After receiving implicit training in one of the ALs (Mini-Spanish, N = 26; Mini-English, N = 24), gender violations elicited a fronto-lateral negativity in Mini-English in the earliest time window (200–500 ms), although this was not followed by any other differences in subsequent periods. This effect was highly localized, surfacing only in electrodes of the right-anterior region. In contrast, gender violations in Mini-Spanish elicited a broadly distributed positivity in the 300–600 ms time window. While we do not find typical indices of grammatical processing such as the P600 component, we believe that the between-groups differential appearance of the positivity for gender violations in the 300–600 ms time window reflects differential allocation of attentional resources as a function of the ALs’ lexical similarity to English or Spanish. We take these differences in attention to be precursors of the processes involved in transfer source selection in L3/Ln. -
Goriot, C., McQueen, J. M., Unsworth, S., & Van Hout, R. (2020). Perception of English phonetic contrasts by Dutch children: How bilingual are early-English learners? PLoS One, 15(3): e0229902. doi:10.1371/journal.pone.0229902.
Abstract
The aim of this study was to investigate whether early-English education benefits the perception of English phonetic contrasts that are known to be perceptually confusable for Dutch native speakers, comparing Dutch pupils who were enrolled in an early-English programme at school from the age of four with pupils in a mainstream programme with English instruction from the age of 11, and English-Dutch early bilingual children. Children were 4-5-year-olds (start of primary school), 8-9-year-olds, or 11-12-year-olds (end of primary school). Children were tested on four contrasts that varied in difficulty: /b/-/s/ (easy), /k/-/ɡ/ (intermediate), /f/-/θ/ (difficult), /ε/-/æ/ (very difficult). Bilingual children outperformed the two other groups on all contrasts except /b/-/s/. Early-English pupils did not outperform mainstream pupils on any of the contrasts. This shows that early-English education as it is currently implemented is not beneficial for pupils’ perception of non-native contrasts.
Additional information
Supporting information -
Grasby, K. L., Jahanshad, N., Painter, J. N., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Pizzagalli, F., Ching, C. R. K., McMahon, M. A. B., Shatokhina, N., Zsembik, L. C. P., Thomopoulos, S. I., Zhu, A. H., Strike, L. T., Agartz, I., Alhusaini, S., Almeida, M. A. A., Alnæs, D., Amlien, I. K., Andersson, M., Ard, T., Armstrong, N. J., Ashley-Koch, A., Atkins, J. R., Bernard, M., Brouwer, R. M., Buimer, E. E. L., Bülow, R., Bürger, C., Cannon, D. M., Chakravarty, M., Chen, Q., Cheung, J. W., Couvy-Duchesne, B., Dale, A. M., Dalvie, S., De Araujo, T. K., De Zubicaray, G. I., De Zwarte, S. M. C., Den Braber, A., Doan, N. T., Dohm, K., Ehrlich, S., Engelbrecht, H.-R., Erk, S., Fan, C. C., Fedko, I. O., Foley, S. F., Ford, J. M., Fukunaga, M., Garrett, M. E., Ge, T., Giddaluru, S., Goldman, A. L., Green, M. J., Groenewold, N. A., Grotegerd, D., Gurholt, T. P., Gutman, B. A., Hansell, N. K., Harris, M. A., Harrison, M. B., Haswell, C. C., Hauser, M., Herms, S., Heslenfeld, D. J., Ho, N. F., Hoehn, D., Hoffmann, P., Holleran, L., Hoogman, M., Hottenga, J.-J., Ikeda, M., Janowitz, D., Jansen, I. E., Jia, T., Jockwitz, C., Kanai, R., Karama, S., Kasperaviciute, D., Kaufmann, T., Kelly, S., Kikuchi, M., Klein, M., Knapp, M., Knodt, A. R., Krämer, B., Lam, M., Lancaster, T. M., Lee, P. H., Lett, T. A., Lewis, L. B., Lopes-Cendes, I., Luciano, M., Macciardi, F., Marquand, A. F., Mathias, S. R., Melzer, T. R., Milaneschi, Y., Mirza-Schreiber, N., Moreira, J. C. V., Mühleisen, T. W., Müller-Myhsok, B., Najt, P., Nakahara, S., Nho, K., Olde Loohuis, L. M., Orfanos, D. P., Pearson, J. F., Pitcher, T. 
L., Pütz, B., Quidé, Y., Ragothaman, A., Rashid, F. M., Reay, W. R., Redlich, R., Reinbold, C. S., Repple, J., Richard, G., Riedel, B. C., Risacher, S. L., Rocha, C. S., Mota, N. R., Salminen, L., Saremi, A., Saykin, A. J., Schlag, F., Schmaal, L., Schofield, P. R., Secolin, R., Shapland, C. Y., Shen, L., Shin, J., Shumskaya, E., Sønderby, I. E., Sprooten, E., Tansey, K. E., Teumer, A., Thalamuthu, A., Tordesillas-Gutiérrez, D., Turner, J. A., Uhlmann, A., Vallerga, C. L., Van der Meer, D., Van Donkelaar, M. M. J., Van Eijk, L., Van Erp, T. G. M., Van Haren, N. E. M., Van Rooij, D., Van Tol, M.-J., Veldink, J. H., Verhoef, E., Walton, E., Wang, M., Wang, Y., Wardlaw, J. M., Wen, W., Westlye, L. T., Whelan, C. D., Witt, S. H., Wittfeld, K., Wolf, C., Wolfers, T., Wu, J. Q., Yasuda, C. L., Zaremba, D., Zhang, Z., Zwiers, M. P., Artiges, E., Assareh, A. A., Ayesa-Arriola, R., Belger, A., Brandt, C. L., Brown, G. G., Cichon, S., Curran, J. E., Davies, G. E., Degenhardt, F., Dennis, M. F., Dietsche, B., Djurovic, S., Doherty, C. P., Espiritu, R., Garijo, D., Gil, Y., Gowland, P. A., Green, R. C., Häusler, A. N., Heindel, W., Ho, B.-C., Hoffmann, W. U., Holsboer, F., Homuth, G., Hosten, N., Jack Jr., C. R., Jang, M., Jansen, A., Kimbrel, N. A., Kolskår, K., Koops, S., Krug, A., Lim, K. O., Luykx, J. J., Mathalon, D. H., Mather, K. A., Mattay, V. S., Matthews, S., Mayoral Van Son, J., McEwen, S. C., Melle, I., Morris, D. W., Mueller, B. A., Nauck, M., Nordvik, J. E., Nöthen, M. M., O’Leary, D. S., Opel, N., Paillère Martinot, M.-L., Pike, G. B., Preda, A., Quinlan, E. B., Rasser, P. E., Ratnakar, V., Reppermund, S., Steen, V. M., Tooney, P. A., Torres, F. R., Veltman, D. J., Voyvodic, J. T., Whelan, R., White, T., Yamamori, H., Adams, H. H. H., Bis, J. C., Debette, S., Decarli, C., Fornage, M., Gudnason, V., Hofer, E., Ikram, M. A., Launer, L., Longstreth, W. T., Lopez, O. L., Mazoyer, B., Mosley, T. H., Roshchupkin, G. V., Satizabal, C. 
L., Schmidt, R., Seshadri, S., Yang, Q., Alzheimer’s Disease Neuroimaging Initiative, CHARGE Consortium, EPIGEN Consortium, IMAGEN Consortium, SYS Consortium, Parkinson’s Progression Markers Initiative, Alvim, M. K. M., Ames, D., Anderson, T. J., Andreassen, O. A., Arias-Vasquez, A., Bastin, M. E., Baune, B. T., Beckham, J. C., Blangero, J., Boomsma, D. I., Brodaty, H., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Bustillo, J. R., Cahn, W., Cairns, M. J., Calhoun, V., Carr, V. J., Caseras, X., Caspers, S., Cavalleri, G. L., Cendes, F., Corvin, A., Crespo-Facorro, B., Dalrymple-Alford, J. C., Dannlowski, U., De Geus, E. J. C., Deary, I. J., Delanty, N., Depondt, C., Desrivières, S., Donohoe, G., Espeseth, T., Fernández, G., Fisher, S. E., Flor, H., Forstner, A. J., Francks, C., Franke, B., Glahn, D. C., Gollub, R. L., Grabe, H. J., Gruber, O., Håberg, A. K., Hariri, A. R., Hartman, C. A., Hashimoto, R., Heinz, A., Henskens, F. A., Hillegers, M. H. J., Hoekstra, P. J., Holmes, A. J., Hong, L. E., Hopkins, W. D., Hulshoff Pol, H. E., Jernigan, T. L., Jönsson, E. G., Kahn, R. S., Kennedy, M. A., Kircher, T. T. J., Kochunov, P., Kwok, J. B. J., Le Hellard, S., Loughland, C. M., Martin, N. G., Martinot, J.-L., McDonald, C., McMahon, K. L., Meyer-Lindenberg, A., Michie, P. T., Morey, R. A., Mowry, B., Nyberg, L., Oosterlaan, J., Ophoff, R. A., Pantelis, C., Paus, T., Pausova, Z., Penninx, B. W. J. H., Polderman, T. J. C., Posthuma, D., Rietschel, M., Roffman, J. L., Rowland, L. M., Sachdev, P. S., Sämann, P. G., Schall, U., Schumann, G., Scott, R. J., Sim, K., Sisodiya, S. M., Smoller, J. W., Sommer, I. E., St Pourcain, B., Stein, D. J., Toga, A. W., Trollor, J. N., Van der Wee, N. J. A., van 't Ent, D., Völzke, H., Walter, H., Weber, B., Weinberger, D. R., Wright, M. J., Zhou, J., Stein, J. L., Thompson, P. M., & Medland, S. E. (2020). The genetic architecture of the human cerebral cortex. Science, 367(6484): eaay6690. doi:10.1126/science.aay6690.
Abstract
The cerebral cortex underlies our complex cognitive capabilities, yet little is known about the specific genetic loci that influence human cortical structure. To identify genetic variants that affect cortical structure, we conducted a genome-wide association meta-analysis of brain magnetic resonance imaging data from 51,665 individuals. We analyzed the surface area and average thickness of the whole cortex and 34 regions with known functional specializations. We identified 199 significant loci and found significant enrichment for loci influencing total surface area within regulatory elements that are active during prenatal cortical development, supporting the radial unit hypothesis. Loci that affect regional surface area cluster near genes in Wnt signaling pathways, which influence progenitor expansion and areal identity. Variation in cortical structure is genetically correlated with cognitive function, Parkinson’s disease, insomnia, depression, neuroticism, and attention deficit hyperactivity disorder. -
Hahn, L. E., Ten Buuren, M., Snijders, T. M., & Fikkert, P. (2020). Learning words in a second language while cycling and listening to children’s songs: The Noplica Energy Center. International Journal of Music in Early Childhood, 15(1), 95-108. doi:10.1386/ijmec_00014_1.
Abstract
Children’s songs are a great source for linguistic learning. Here we explore whether children can acquire novel words in a second language by playing a game featuring children’s songs in a playhouse. The playhouse is designed by the Noplica foundation (www.noplica.nl) to advance language learning through unsupervised play. We present data from three experiments that serve to scientifically prove the functionality of one game of the playhouse: the Energy Center. For this game, children move three hand-bikes mounted on a panel within the playhouse. Once the children cycle, a song starts playing that is accompanied by musical instruments. In our experiments, children executed a picture-selection task to evaluate whether they acquired new vocabulary from the songs presented during the game. Two of our experiments were run in the field, one at a Dutch and one at an Indian pre-school. The third experiment features data from a more controlled laboratory setting. Our results partly confirm that the Energy Center is a successful means to support vocabulary acquisition in a second language. More research with larger sample sizes and longer access to the Energy Center is needed to evaluate the overall functionality of the game. Based on informal observations at our test sites, however, we are certain that children do pick up linguistic content from the songs during play, as many of the children repeat words and phrases from the songs they heard. We will follow up on these promising observations in future studies. -
Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2020). Six-month-old infants recognize phrases in song and speech. Infancy, 25(5), 699-718. doi:10.1111/infa.12357.
Abstract
Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well‐attested and is a cornerstone to the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six‐month‐old Dutch infants (n = 80) were tested in the song or speech modality in the head‐turn preference procedure. First, infants were familiarized to two versions of the same word sequence: One version represented a well‐formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well‐formed sequence, but only in a more fine‐grained analysis. The preference for well‐formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues while also providing a possible explanation for differences in effect sizes.
Additional information
infa12357-sup-0001-supinfo.zip -
Hashemzadeh, M., Kaufeld, G., White, M., Martin, A. E., & Fyshe, A. (2020). From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli? In T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 645-655). Association for Computational Linguistics.
Abstract
The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate to brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly. -
Hoeksema, N., Villanueva, S., Mengede, J., Salazar-Casals, A., Rubio-García, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2020). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 162-164). Nijmegen: The Evolution of Language Conferences. -
Hoeksema, N., Wiesmann, M., Kiliaan, A., Hagoort, P., & Vernes, S. C. (2020). Bats and the comparative neurobiology of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 165-167). Nijmegen: The Evolution of Language Conferences. -
Hubers, F., Redl, T., De Vos, H., Reinarz, L., & De Hoop, H. (2020). Processing prescriptively incorrect comparative particles: Evidence from sentence-matching and eye-tracking. Frontiers in Psychology, 11: 186. doi:10.3389/fpsyg.2020.00186.
Abstract
Speakers of a language sometimes use particular constructions which violate prescriptive grammar rules. Despite their prescriptive ungrammaticality, they can occur rather frequently. One such example is the comparative construction in Dutch and similarly in German, where the equative particle is used in comparative constructions instead of the prescriptively correct comparative particle (Dutch beter als Jan and German besser wie Jan ‘lit. better as John’). From a theoretical linguist’s point of view, these so-called grammatical norm violations are perfectly grammatical, even though they are not part of the language’s prescriptive grammar. In a series of three experiments using sentence-matching and eye-tracking methodology, we investigated whether grammatical norm violations are processed as truly grammatical, as truly ungrammatical, or whether they fall in between these two. We hypothesized that the latter would be the case. We analyzed our data using linear mixed effects models in order to capture possible individual differences. The results of the sentence-matching experiments, which were conducted in both Dutch and German, showed that the grammatical norm violation patterns with ungrammatical sentences in both languages. Our hypothesis was therefore not borne out. However, using the more sensitive eye-tracking method on Dutch speakers only, we found that the ungrammatical alternative leads to higher reading times than the grammatical norm violation. We also found significant individual variation regarding this very effect. We furthermore replicated the processing difference between the grammatical norm violation and the prescriptively correct variant. 
In summary, we conclude that while the results of the more sensitive eye-tracking experiment suggest that grammatical norm violations are not processed on a par with ungrammatical sentences, the results of all three experiments clearly show that grammatical norm violations cannot be considered grammatical, either.
Additional information
Supplementary Material -
Hubers, F., Trompenaars, T., Collin, S., De Schepper, K., & De Hoop, H. (2020). Hypercorrection as a by-product of education. Applied Linguistics, 41(4), 552-574. doi:10.1093/applin/amz001.
Abstract
Prescriptive grammar rules are taught in education, generally to ban the use of certain frequently encountered constructions in everyday language. This may lead to hypercorrection, meaning that the prescribed form in one construction is extended to another one in which it is in fact prohibited by prescriptive grammar. We discuss two such cases in Dutch: the hypercorrect use of the comparative particle dan ‘than’ in equative constructions, and the hypercorrect use of the accusative pronoun hen ‘them’ for a dative object. In two experiments, high school students of three educational levels were tested on their use of these hypercorrect forms (nexp1 = 162, nexp2 = 159). Our results indicate an overall large amount of hypercorrection across all levels of education, including pre-university level students who otherwise perform better in constructions targeted by prescriptive grammar rules. We conclude that while teaching prescriptive grammar rules to high school students seems to increase their use of correct forms in certain constructions, this comes at a cost of hypercorrection in others. -
Hubers, F. (2020). Two of a kind: Idiomatic expressions by native speakers and second language learners. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2020). How in-group bias influences the level of detail of speaker-specific information encoded in novel lexical representations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(5), 894-906. doi:10.1037/xlm0000765.
Abstract
An important issue in theories of word learning is how abstract or context-specific representations of novel words are. One aspect of this broad issue is how well learners maintain information about the source of novel words. We investigated whether listeners’ source memory was better for words learned from members of their in-group (students of their own university) than it is for words learned from members of an out-group (students from another institution). In the first session, participants saw 6 faces and learned which of the depicted students attended either their own or a different university. In the second session, they learned competing labels (e.g., citrus-peller and citrus-schiller; in English, lemon peeler and lemon stripper) for novel gadgets, produced by the in-group and out-group speakers. Participants were then tested for source memory of these labels and for the strength of their in-group bias, that is, for how much they preferentially process in-group over out-group information. Analyses of source memory accuracy demonstrated an interaction between speaker group membership status and participants’ in-group bias: Stronger in-group bias was associated with less accurate source memory for out-group labels than in-group labels. These results add to the growing body of evidence on the importance of social variables for adult word learning. -
Iacozza, S. (2020). Exploring social biases in language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Kaufeld, G., Naumann, W., Meyer, A. S., Bosker, H. R., & Martin, A. E. (2020). Contextual speech rate influences morphosyntactic prediction and integration. Language, Cognition and Neuroscience, 35(7), 933-948. doi:10.1080/23273798.2019.1701691.
Abstract
Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these demonstrations for theories of language processing. -
Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.
Abstract
During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects. -
Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. (2020). Linguistic structure and meaning organize neural oscillations into a content-specific hierarchy. The Journal of Neuroscience, 40(49), 9467-9475. doi:10.1523/JNEUROSCI.0302-20.2020.
Abstract
Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Doelling et al., 2014; Zoefel & VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally-spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical timescale (1.9-2.8 Hz), suggesting that the delta-band is modulated by lexically-driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information. -
Khoe, Y. H., Tsoukala, C., Kootstra, G. J., & Frank, S. L. (2020). Modeling cross-language structural priming in sentence production. In T. C. Stewart (Ed.), Proceedings of the 18th Annual Meeting of the International Conference on Cognitive Modeling (pp. 131-137). University Park, PA, USA: The Penn State Applied Cognitive Science Lab.
Abstract
A central question in the psycholinguistic study of multilingualism is how syntax is shared across languages. We implement a model to investigate whether error-based implicit learning can provide an account of cross-language structural priming. The model is based on the Dual-path model of sentence production (Chang, 2002). We implement our model using the Bilingual version of Dual-path (Tsoukala, Frank, & Broersma, 2017). We answer two main questions: (1) Can structural priming of active and passive constructions occur between English and Spanish in a bilingual version of the Dual-path model? (2) Does cross-language priming differ quantitatively from within-language priming in this model? Our results show that cross-language priming does occur in the model. This finding adds to the viability of implicit learning as an account of structural priming in general and cross-language structural priming specifically. Furthermore, we find that the within-language priming effect is somewhat stronger than the cross-language effect. In the context of mixed results from behavioral studies, we interpret the latter finding as an indication that the difference between cross-language and within-language priming is small and difficult to detect statistically. -
Lattenkamp, E. Z., Linnenschmidt, M., Mardus, E., Vernes, S. C., Wiegrebe, L., & Schutte, M. (2020). Impact of auditory feedback on bat vocal development. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 249-251). Nijmegen: The Evolution of Language Conferences. -
Lattenkamp, E. Z. (2020). Vocal learning in the pale spear-nosed bat, Phyllostomus discolor. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2020). Vocal production learning in the pale spear-nosed bat, Phyllostomus discolor. Biology Letters, 16: 20190928. doi:10.1098/rsbl.2019.0928.
Abstract
Vocal production learning (VPL), or the ability to modify vocalizations through the imitation of sounds, is a rare trait in the animal kingdom. While humans are exceptional vocal learners, few other mammalian species share this trait. Owing to their singular ecology and lifestyle, bats are highly specialized for the precise emission and reception of acoustic signals. This specialization makes them ideal candidates for the study of vocal learning, and several bat species have previously shown evidence supportive of vocal learning. Here we use a sophisticated automated set-up and a contingency training paradigm to explore the vocal learning capacity of pale spear-nosed bats. We show that these bats are capable of directional change of the fundamental frequency of their calls according to an auditory target. With this study, we further highlight the importance of bats for the study of vocal learning and provide evidence for the VPL capacity of the pale spear-nosed bat. -
Lei, L., Raviv, L., & Alday, P. M. (2020). Using spatial visualizations and real-world social networks to understand language evolution and change. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 252-254). Nijmegen: The Evolution of Language Conferences. -
Mak, M., De Vries, C., & Willems, R. M. (2020). The influence of mental imagery instructions and personality characteristics on reading experiences. Collabra: Psychology, 6(1): 43. doi:10.1525/collabra.281.
Abstract
It is well established that readers form mental images when reading a narrative. However, the consequences of mental imagery (i.e. the influence of mental imagery on the way people experience stories) are still unclear. Here we manipulated the amount of mental imagery that participants engaged in while reading short literary stories in two experiments. Participants received pre-reading instructions aimed at encouraging or discouraging mental imagery. After reading, participants answered questions about their reading experiences. We also measured individual trait differences that are relevant for literary reading experiences. The results from the first experiment suggest an important role of mental imagery in determining reading experiences. However, the results from the second experiment show that mental imagery is only a weak predictor of reading experiences compared to individual (trait) differences in how imaginative participants were. Moreover, the influence of mental imagery instructions did not extend to reading experiences unrelated to mental imagery. The implications of these results for the relationship between mental imagery and reading experiences are discussed. -
Manhardt, F., Ozyurek, A., Sumer, B., Mulder, K., Karadöller, D. Z., & Brouwer, S. (2020). Iconicity in spatial language guides visual attention: A comparison between signers’ and speakers’ eye gaze during message preparation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(9), 1735-1753. doi:10.1037/xlm0000843.
Abstract
To talk about space, spoken languages rely on arbitrary and categorical forms (e.g., left, right). In sign languages, however, the visual–spatial modality allows for iconic encodings (motivated form-meaning mappings) of space in which form and location of the hands bear resemblance to the objects and spatial relations depicted. We assessed whether the iconic encodings in sign languages guide visual attention to spatial relations differently than spatial encodings in spoken languages during message preparation at the sentence level. Using a visual world production eye-tracking paradigm, we compared 20 deaf native signers of Sign-Language-of-the-Netherlands and 20 Dutch speakers’ visual attention to describe left versus right configurations of objects (e.g., “pen is to the left/right of cup”). Participants viewed 4-picture displays in which each picture contained the same 2 objects but in different spatial relations (lateral [left/right], sagittal [front/behind], topological [in/on]) to each other. They described the target picture (left/right) highlighted by an arrow. During message preparation, signers, but not speakers, experienced increasing eye-gaze competition from other spatial configurations. This effect was absent during picture viewing prior to message preparation of relational encoding. Moreover, signers’ visual attention to lateral and/or sagittal relations was predicted by the type of iconicity (i.e., object and space resemblance vs. space resemblance only) in their spatial descriptions. Findings are discussed in relation to how “thinking for speaking” differs from “thinking for signing” and how iconicity can mediate the link between language and human experience and guides signers’ but not speakers’ attention to visual aspects of the world.
Additional information
Supplementary materials