-
Ozker, M., Yu, L., Dugan, P., Doyle, W., Friedman, D., Devinsky, O., & Flinker, A. (2024). Speech-induced suppression and vocal feedback sensitivity in human cortex. eLife, 13: RP94198. doi:10.7554/eLife.94198.1.
Abstract
Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has previously been confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across the superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting that suppression might be a key mechanism underlying speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity. -
Pacella, V., Nozais, V., Talozzi, L., Abdallah, M., Wassermann, D., Forkel, S. J., & Thiebaut de Schotten, M. (2024). The morphospace of the brain-cognition organisation. Nature Communications, 15: 8452. doi:10.1038/s41467-024-52186-9.
Abstract
Over the past three decades, functional neuroimaging has amassed abundant evidence of the intricate interplay between brain structure and function. However, the potential anatomical and experimental overlap, independence, granularity, and gaps between functions remain poorly understood. Here, we show the latent structure of current brain-cognition knowledge and its organisation. Our approach utilises the most comprehensive meta-analytic fMRI database (Neurosynth) to compute a three-dimensional embedding space – the morphospace – capturing the relationship between brain functions as we currently understand them. The structure of this space enables us to statistically test the relationship between functions, expressed as the degree to which the characteristics of each functional map can be anticipated from its similarities with others – the predictability index. The morphospace can also predict the activation pattern of new, unseen functions and decode thoughts and inner states during movie watching. The framework defined by the morphospace will spur the investigation of novel functions and guide the exploration of the fabric of human cognition. -
Papoutsi*, C., Zimianiti*, E., Bosker, H. R., & Frost, R. L. A. (2024). Statistical learning at a virtual cocktail party. Psychonomic Bulletin & Review, 31, 849-861. doi:10.3758/s13423-023-02384-1.
Abstract
* These two authors contributed equally to this study
Statistical learning – the ability to extract distributional regularities from input – is suggested to be key to language acquisition. Yet, evidence for the human capacity for statistical learning comes mainly from studies conducted in carefully controlled settings without auditory distraction. While such conditions permit careful examination of learning, they do not reflect the naturalistic language learning experience, which is replete with auditory distraction – including competing talkers. Here, we examine how statistical language learning proceeds in a virtual cocktail party environment, where the to-be-learned input is presented alongside a competing speech stream with its own distributional regularities. During exposure, participants in the Dual Talker group concurrently heard two novel languages, one produced by a female talker and one by a male talker, with each talker virtually positioned at opposite sides of the listener (left/right) using binaural acoustic manipulations. Selective attention was manipulated by instructing participants to attend to only one of the two talkers. At test, participants were asked to distinguish words from part-words for both the attended and the unattended languages. Results indicated that participants’ accuracy was significantly higher for trials from the attended vs. unattended language. Further, the performance of this Dual Talker group was no different compared to a control group who heard only one language from a single talker (Single Talker group). We thus conclude that statistical learning is modulated by selective attention, being relatively robust against the additional cognitive load provided by competing speech, emphasizing its efficiency in naturalistic language learning situations.
-
Pereira Soares, S. M., Prystauka, Y., DeLuca, V., Poch Pérez Botija, C., & Rothman, J. (2024). Brain correlates of attentional load processing reflect degree of bilingual engagement: Evidence from EEG. NeuroImage, 298: 120786. doi:10.1016/j.neuroimage.2024.120786.
Abstract
The present study uses electroencephalography (EEG) with an N-back task (0-, 1-, and 2-back) to investigate if and how individual bilingual experiences modulate brain activity and cognitive processes. The N-back is an especially appropriate task given recent proposals situating bilingual effects on neurocognition within the broader attentional control system (Bialystok & Craik, 2022). Beyond its working memory component, the N-back task builds in complexity incrementally, progressively taxing the attentional system. EEG, behavioral, and language/social background data were collected from 60 bilinguals. Two cognitive loads were calculated: low (1-back minus 0-back) and high (2-back minus 0-back). Behavioral performance and brain recruitment were modeled as a function of individual differences in bilingual engagement. We predicted that task performance as modulated by bilingual engagement would reflect the cognitive demands of increased complexity: slower reaction times, lower accuracy, an increase in theta power, a decrease in alpha power, and modulated N2/P3 amplitudes. The data show no modulation of the expected behavioral effects by degree of bilingual engagement. However, individual differences analyses reveal significant correlations between non-societal language use in Social contexts and alpha in the low cognitive load condition, and between age of acquisition of the L2/2L1 and theta in the high cognitive load condition. These findings lend some initial support to Bialystok & Craik (2022), showing how certain adaptations at the brain level take place in order to deal with the cognitive demands associated with variations in bilingual language experience and increases in attentional load. Furthermore, the present data highlight how these effects can play out differentially depending on cognitive testing modalities – that is, effects were found at the TFR level but not behaviorally or in the ERPs, showing how the choice of analysis can be decisive when investigating bilingual effects. -
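The load contrasts described in the abstract above are simple difference scores (low = 1-back minus 0-back; high = 2-back minus 0-back). A minimal sketch, assuming per-participant band-power values; the numbers below are hypothetical, not from the study:

```python
import numpy as np

def load_contrasts(power_0back, power_1back, power_2back):
    """Compute low and high cognitive-load contrasts as difference scores.

    Each argument is an array of band power (e.g. theta) per participant.
    """
    low = np.asarray(power_1back) - np.asarray(power_0back)   # 1-back minus 0-back
    high = np.asarray(power_2back) - np.asarray(power_0back)  # 2-back minus 0-back
    return low, high

# Hypothetical theta-power values for three participants
low, high = load_contrasts([1.0, 1.2, 0.9], [1.3, 1.5, 1.1], [1.8, 1.9, 1.4])
```

The same subtraction applies per electrode and frequency band; the study then related these contrasts to individual-difference measures of bilingual engagement.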
Perugini, A., Fontanillas, P., Gordon, S. D., Fisher, S. E., Martin, N. G., Bates, T. C., & Luciano, M. (2024). Dyslexia polygenic scores show heightened prediction of verbal working memory and arithmetic. Scientific Studies of Reading, 28(5), 549-563. doi:10.1080/10888438.2024.2365697.
Abstract
Purpose
The aim of this study is to establish which specific cognitive abilities are phenotypically related to reading skill in adolescence and to determine whether this phenotypic correlation is explained by polygenic overlap.
Method
In an Australian population sample of twins and non-twin siblings of European ancestry (734 ≤ N ≤ 1542 [50.7% < F < 66%], mean age = 16.7, range = 11–28 years) from the Brisbane Adolescent Twin Study, mixed-effects models were used to test the association between a dyslexia polygenic score (based on genome-wide association results from a study of 51,800 dyslexics versus >1 million controls) and quantitative cognitive measures. The variance in the cognitive measure explained by the polygenic score was compared to that explained by a reading difficulties phenotype (scores that were lower than 1.5 SD below the mean reading skill) to derive the proportion of the association due to genetic influences.
Results
The strongest phenotypic correlations were between poor reading and verbal tests (R2 up to 6.2%); visuo-spatial working memory was the only measure that did not show association with poor reading. Dyslexia polygenic scores could completely explain the phenotypic covariance between poor reading and most working memory tasks and were most predictive of performance on a test of arithmetic (R2=2.9%).
Conclusion
Shared genetic pathways are thus highlighted for the commonly found association between reading and mathematics abilities, and for the verbal short-term/working memory deficits often observed in dyslexia. -
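The variance explained by a polygenic score, as reported in the abstract above (e.g., R2 = 2.9% for arithmetic), is an incremental R² over covariates. A minimal sketch on synthetic data; the effect size, covariate, and plain least-squares fit are assumptions for illustration, not the study's mixed-effects model:

```python
import numpy as np

def r2(y, X):
    """R-squared of an ordinary least-squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(16.7, 3.0, n)                         # covariate (hypothetical)
pgs = rng.normal(0, 1, n)                              # standardized polygenic score
trait = 0.17 * pgs + 0.1 * age + rng.normal(0, 1, n)   # synthetic cognitive measure

# Incremental R^2: full model (covariate + PGS) minus covariate-only model
delta_r2 = r2(trait, np.column_stack([age, pgs])) - r2(trait, age[:, None])
```

Because adding a regressor can never reduce in-sample R², the increment is non-negative; its size is what the study compares across cognitive measures.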
Picciulin, M., Bolgan, M., & Burchardt, L. (2024). Rhythmic properties of Sciaena umbra calls across space and time in the Mediterranean Sea. PLOS ONE, 19(2): e0295589. doi:10.1371/journal.pone.0295589.
Abstract
In animals, the rhythmical properties of calls are known to be shaped by physical constraints and the necessity of conveying information. As a consequence, investigating rhythmical properties in relation to different environmental conditions can help to shed light on the relationship between environment and species behavior from an evolutionary perspective. Sciaena umbra (fam. Sciaenidae) male fish emit reproductive calls characterized by a simple isochronous, i.e., metronome-like rhythm (the so-called R-pattern). Here, S. umbra R-pattern rhythm properties were assessed and compared between four different sites located along the Mediterranean basin (Mallorca, Venice, Trieste, Crete); furthermore, for one location, two datasets collected 10 years apart were available. Recording sites differed in habitat types, vessel density and acoustic richness; despite this, S. umbra R-calls were isochronous across all locations. A degree of variability was found only when considering the beat frequency, which was temporally stable, but spatially variable, with the beat frequency being faster in one of the sites (Venice). Statistically, the beat frequency was found to be dependent on the season (i.e. month of recording) and potentially influenced by the presence of soniferous competitors and human-generated underwater noise. Overall, the general consistency in the measured rhythmical properties (isochrony and beat frequency) suggests their nature as a fitness-related trait in the context of the S. umbra reproductive behavior and calls for further evaluation as a communicative cue. -
Di Pisa, G., Pereira Soares, S. M., Rothman, J., & Marinis, T. (2024). Being a heritage speaker matters: the role of markedness in subject-verb person agreement in Italian. Frontiers in Psychology, 15: 1321614. doi:10.3389/fpsyg.2024.1321614.
Abstract
This study examines online processing and offline judgments of subject-verb person agreement with a focus on how this is impacted by markedness in heritage speakers (HSs) of Italian. To this end, 54 adult HSs living in Germany and 40 homeland Italian speakers completed a self-paced reading task (SPRT) and a grammaticality judgment task (GJT). Markedness was manipulated by probing agreement with both first-person (marked) and third-person (unmarked) subjects. Agreement was manipulated by crossing first-person marked subjects with third-person unmarked verbs and vice versa. Crucially, person violations with first-person subjects (e.g., io *suona la chitarra “I plays-3rd-person the guitar”) yielded significantly shorter RTs in the SPRT and higher accuracy in the GJT than the opposite error type (e.g., il giornalista *esco spesso “the journalist go-1st-person out often”). This effect is consistent with the claim that when the first element in the dependency is marked (first person), the parser generates stronger predictions regarding upcoming agreeing elements. These results align with work on the same populations investigating the impact of morphological markedness on grammatical gender agreement, suggesting that markedness impacts agreement similarly in two distinct grammatical domains and that sensitivity to markedness is more prevalent for HSs. -
Pizarro-Guevara, J. S., & Garcia, R. (2024). Philippine Psycholinguistics. Annual Review of Linguistics, 10, 145-167. doi:10.1146/annurev-linguistics-031522-102844.
Abstract
Over the last decade, there has been a slow but steady accumulation of psycholinguistic research focusing on typologically diverse languages. In this review, we provide an overview of the psycholinguistic research on Philippine languages at the sentence level. We first discuss the grammatical features of these languages that figure prominently in existing research. We identify four linguistic domains that have received attention from language researchers and summarize the empirical terrain. We advance two claims that emerge across these different domains: (a) The agent-first pressure plays a central role in many of the findings, and (b) the generalization that the patient argument is the syntactically privileged argument cannot be reduced to frequency, but instead is an emergent phenomenon caused by the alignment of competing pressures toward an optimal candidate. We connect these language-specific claims to language-general theories of sentence processing. -
Punselie, S., McLean, B., & Dingemanse, M. (2024). The anatomy of iconicity: Cumulative structural analogies underlie objective and subjective measures of iconicity. Open Mind, 8, 1191-1212. doi:10.1162/opmi_a_00162.
Abstract
The vocabularies of natural languages harbour many instances of iconicity, where words show a perceived resemblance between aspects of form and meaning. An open challenge in this domain is how to reconcile different operationalizations of iconicity and link them to an empirically grounded theory. Here we combine three ways of looking at iconicity using a set of 239 iconic words from 5 spoken languages (Japanese, Korean, Semai, Siwu and Ewe). Data on guessing accuracy serves as a baseline measure of probable iconicity and provides variation that we seek to explain and predict using structure-mapping theory and iconicity ratings. We systematically trace a range of cross-linguistically attested form-meaning correspondences in the dataset, yielding a word-level measure of cumulative iconicity that we find to be highly predictive of guessing accuracy. In a rating study, we collect iconicity judgments for all words from 78 participants. The ratings are well-predicted by our measure of cumulative iconicity and also correlate strongly with guessing accuracy, showing that rating tasks offer a scalable method to measure iconicity. Triangulating the measures reveals how structure-mapping can help open the black box of experimental measures of iconicity. While none of the methods is perfect, taken together they provide a well-rounded way to approach the meaning and measurement of iconicity in natural language vocabulary. -
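The cumulative iconicity measure described above is, in essence, a word-level count of attested form-meaning correspondences. A toy sketch, with hypothetical feature names; the paper's actual coding scheme is richer:

```python
# Toy sketch: cumulative iconicity as the number of form-meaning
# correspondences coded for a word (feature names are hypothetical).
def cumulative_iconicity(features: dict) -> int:
    """Sum binary codings of cross-linguistically attested correspondences."""
    return sum(bool(v) for v in features.values())

word = {
    "reduplication_for_repetition": True,   # form repeats, meaning iterates
    "vowel_quality_for_size": True,
    "closure_for_abrupt_offset": False,
}
score = cumulative_iconicity(word)
```

In the study, higher cumulative scores of this kind predicted both guessing accuracy and iconicity ratings.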
Rapado-Tamarit, B., Méndez-Aróstegui, M., de Reus, K., Sarraude, T., Pen, I., & Groothuis, T. G. G. (2024). Age estimation and growth patterns in young harbor seals (Phoca vitulina vitulina) during rehabilitation. Journal of Mammalogy. Advance online publication. doi:10.1093/jmammal/gyae128.
Abstract
To study patterns in behavior, fitness, and population dynamics, estimating the age of individuals is often a necessity. Specifically, age estimation of young animals is very important for animal rehabilitation centers because it may determine if the animal should be taken in and, if so, what care is optimal for its rehabilitation. Accurate age estimation is also important to determine the growth pattern of an individual, and it is needed to correctly interpret the influence of early body condition on its growth trajectories. The purpose of our study was to find body measurements that function as good age estimators in young (up to 3 months old) harbor seals (Phoca vitulina vitulina), placing emphasis on noninvasive techniques that can be used in the field. To meet this goal, body mass (BM), dorsal standard length (DSL), upper canine length (CL), body condition (BC), and sex were determined from 45 harbor seal pups of known age. Generalized additive mixed models were fitted to find how well these morphometric measures predicted age, and the results from the selected model were used to compute growth curves and to create a practical table for determining the age of young animals in the field. We found that both DSL and CL—and to some extent sex—were useful predictors for estimating age in young harbor seals and that the growth rate of pups raised in captivity is significantly lower than that of those raised in the wild. In addition, we found no evidence for compensatory growth, given that animals that arrived at the center with a poor BM or BC continued to show lower BM or BC throughout almost the entire rehabilitation period. -
Rasenberg, M., & Dingemanse, M. (2024). Drifting in a sea of semiosis. Current Anthropology, 65(3), 14-15.
Abstract
We welcome Enfield and Zuckerman’s (E&Z’s) rich exposition on how people congregate around shared representations. Moorings are a useful addition to our tools for thinking about signs and their uses. As public fixtures to which actions, statuses, and experiences may be tied, moorings evoke Geertz’s (1973) webs of significance, Millikan’s (2005) public conventions, and Clark’s (2015) common ground, but they add to these accounts a focus on the sign and the promise of understanding in more detail how people come to share and calibrate experiences. -
Rasing, N. B., Van de Geest-Buit, W., Chan, O. Y. A., Mul, K., Lanser, A., Erasmus, C. E., Groothuis, J. T., Holler, J., Ingels, K. J. A. O., Post, B., Siemann, I., & Voermans, N. C. (2024). Psychosocial functioning in patients with altered facial expression: A scoping review in five neurological diseases. Disability and Rehabilitation, 46(17), 3772-3791. doi:10.1080/09638288.2023.2259310.
Abstract
Purpose
To perform a scoping review to investigate the psychosocial impact of having an altered facial expression in five neurological diseases.
Methods
A systematic literature search was performed. Studies were on Bell’s palsy, facioscapulohumeral muscular dystrophy (FSHD), Moebius syndrome, myotonic dystrophy type 1, or Parkinson’s disease patients; had a focus on altered facial expression; and had any form of psychosocial outcome measure. Data extraction focused on psychosocial outcomes.
Results
Bell’s palsy, myotonic dystrophy type 1, and Parkinson’s disease patients more often experienced some degree of psychosocial distress than healthy controls. In FSHD, facial weakness negatively influenced communication and was experienced as a burden. The psychosocial distress applied especially to women (Bell’s palsy and Parkinson’s disease) and to patients with more severely altered facial expression (Bell’s palsy), but not to Moebius syndrome patients. Furthermore, Parkinson’s disease patients with more pronounced hypomimia were perceived more negatively by observers. Various strategies were reported to compensate for altered facial expression.
Conclusions
This review showed that patients with altered facial expression in four of five included neurological diseases had reduced psychosocial functioning. Future research recommendations include studies on observers’ judgements of patients during social interactions and on the effectiveness of compensation strategies in enhancing psychosocial functioning.
Implications for rehabilitation
Negative effects of altered facial expression on psychosocial functioning are common and more abundant in women and in more severely affected patients with various neurological disorders.
Health care professionals should be alert to psychosocial distress in patients with altered facial expression.
Learning of compensatory strategies could be a beneficial therapy for patients with psychosocial distress due to an altered facial expression. -
Ronderos, C. R., Aparicio, H., Long, M., Shukla, V., Jara-Ettinger, J., & Rubio-Fernandez, P. (2024). Perceptual, semantic, and pragmatic factors affect the derivation of contrastive inferences. Open mind: discoveries in cognitive science, 8, 1213-1227. doi:10.1162/opmi_a_00165.
Abstract
People derive contrastive inferences when interpreting adjectives (e.g., inferring that ‘the short pencil’ is being contrasted with a longer one). However, classic eye-tracking studies revealed contrastive inferences with scalar and material adjectives, but not with color adjectives. This was explained as a difference in listeners’ informativity expectations, since color adjectives are often used descriptively (hence not warranting a contrastive interpretation). Here we hypothesized that, beyond these pragmatic factors, perceptual factors (i.e., the relative perceptibility of color, material and scalar contrast) and semantic factors (i.e., the difference between gradable and non-gradable properties) also affect the real-time derivation of contrastive inferences. We tested these predictions in three languages with prenominal modification (English, Hindi, and Hungarian) and found that people derive contrastive inferences for color and scalar adjectives, but not for material adjectives. In addition, the processing of scalar adjectives was more context dependent than that of color and material adjectives, confirming that pragmatic, perceptual and semantic factors affect the derivation of contrastive inferences.
-
Roos, N. M., Chauvet, J., & Piai, V. (2024). The Concise Language Paradigm (CLaP), a framework for studying the intersection of comprehension and production: Electrophysiological properties. Brain Structure and Function, 229, 2097-2113. doi:10.1007/s00429-024-02801-8.
Abstract
Studies investigating language commonly isolate one modality or process, focusing on comprehension or production. Here, we present a framework for a paradigm that combines both: the Concise Language Paradigm (CLaP), tapping into comprehension and production within one trial. The trial structure is identical across conditions, presenting a sentence followed by a picture to be named. We tested 21 healthy speakers with EEG to examine three time periods during a trial (sentence, pre-picture interval, picture onset), yielding contrasts of sentence comprehension, contextually and visually guided word retrieval, object recognition, and naming. In the CLaP, sentences are presented auditorily (constrained, unconstrained, reversed), and pictures appear as normal (constrained, unconstrained, bare) or scrambled objects. Imaging results revealed different evoked responses after sentence onset for normal and time-reversed speech. Further, we replicated the context effect of alpha-beta power decreases before picture onset for constrained relative to unconstrained sentences, and could clarify that this effect arises from power decreases following constrained sentences. Brain responses locked to picture-onset differed as a function of sentence context and picture type (normal vs. scrambled), and naming times were fastest for pictures in constrained sentences, followed by scrambled picture naming, and equally fast for bare and unconstrained picture naming. Finally, we also discuss the potential of the CLaP to be adapted to different focuses, using different versions of the linguistic content and tasks, in combination with electrophysiology or other imaging methods. These first results of the CLaP indicate that this paradigm offers a promising framework to investigate the language system. -
Rowland, C. F., Bidgood, A., Jones, G., Jessop, A., Stinson, P., Pine, J. M., Durrant, S., & Peter, M. S. (2024). Simulating the relationship between nonword repetition performance and vocabulary growth in 2-Year-olds: Evidence from the language 0–5 project. Language Learning. Advance online publication. doi:10.1111/lang.12671.
Abstract
A strong predictor of children's language is performance on nonword repetition (NWR) tasks. However, the basis of this relationship remains unknown. Some suggest that NWR tasks measure phonological working memory, which then affects language growth. Others argue that children's knowledge of language/language experience affects NWR performance. A complicating factor is that most studies focus on school-aged children, who have already mastered key language skills. Here, we present a new NWR task for English-learning 2-year-olds, use it to assess the effect of NWR performance on concurrent and later vocabulary development, and compare the children's performance with that of an experience-based computational model (CLASSIC). The new NWR task produced reliable results, replicating the wordlikeness effects, word-length effects, and the relationship with concurrent and later language ability seen in older children. The model also simulated all effects, suggesting that the relationship between vocabulary and NWR performance can be explained by language experience-/knowledge-based theories. -
Rubianes, M., Drijvers, L., Muñoz, F., Jiménez-Ortega, L., Almeida-Rivera, T., Sánchez-García, J., Fondevila, S., Casado, P., & Martín-Loeches, M. (2024). The self-reference effect can modulate language syntactic processing even without explicit awareness: An electroencephalography study. Journal of Cognitive Neuroscience, 36(3), 460-474. doi:10.1162/jocn_a_02104.
Abstract
Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while the masked face identity (self, friend, or unknown faces) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant to early syntactic operations (around 150–550 msec). Our data also provide further evidence of self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing. -
Rubio-Fernández, P. (2024). Cultural evolutionary pragmatics: Investigating the codevelopment and coevolution of language and social cognition. Psychological Review, 131(1), 18-35. doi:10.1037/rev0000423.
Abstract
Language and social cognition come together in communication, but their relation has been intensely contested. Here, I argue that these two distinctively human abilities are connected in a positive feedback loop, whereby the development of one cognitive skill boosts the development of the other. More specifically, I hypothesize that language and social cognition codevelop in ontogeny and coevolve in diachrony through the acquisition, mature use, and cultural evolution of reference systems (e.g., demonstratives: “this” vs. “that”; articles: “a” vs. “the”; pronouns: “I” vs. “you”). I propose to study the connection between reference systems and communicative social cognition across three parallel timescales—language acquisition, language use, and language change, as a new research program for cultural evolutionary pragmatics. Within that framework, I discuss the coevolution of language and communicative social cognition as cognitive gadgets, and introduce a new methodological approach to study how universals and cross-linguistic differences in reference systems may result in different developmental pathways to human social cognition. -
Sánchez-de la Vega, G., Gasca-Pineda, J., Martínez-Cárdenas, A., Vernes, S. C., Teeling, E. C., Mai, M., Aguirre-Planter, E., Eguiarte, L. E., Phillips, C. D., & Ortega, J. (2024). The genome sequence of the endemic Mexican common mustached bat, Pteronotus mexicanus Miller, 1902 [Mormoopidae; Pteronotus]. Gene, 929: 148821. doi:10.1016/j.gene.2024.148821.
Abstract
We describe here the first characterization of the genome of the bat Pteronotus mexicanus, an endemic species of Mexico, as part of the Mexican Bat Genome Project, which focuses on the characterization and assembly of the genomes of bats endemic to Mexico. The genome was assembled from a liver tissue sample of an adult male from Jalisco, Mexico, provided by the Texas Tech University Museum tissue collection. The assembled genome size was 1.9 Gb, comprising 110,533 scaffolds and 1,659,535 contigs. The ecological importance of bats such as P. mexicanus, and their diverse ecological roles, underscores the value of complete genomes in addressing information gaps and in facing challenges regarding their function in ecosystems and their conservation. -
Schijven, D., Soheili-Nezhad, S., Fisher, S. E., & Francks, C. (2024). Exome-wide analysis implicates rare protein-altering variants in human handedness. Nature Communications, 15: 2632. doi:10.1038/s41467-024-46277-w.
Abstract
Handedness is a manifestation of brain hemispheric specialization. Left-handedness occurs at increased rates in neurodevelopmental disorders. Genome-wide association studies have identified common genetic effects on handedness or brain asymmetry, which mostly involve variants outside protein-coding regions and may affect gene expression. Implicated genes include several that encode tubulins (microtubule components) or microtubule-associated proteins. Here we examine whether left-handedness is also influenced by rare coding variants (frequencies ≤ 1%), using exome data from 38,043 left-handed and 313,271 right-handed individuals from the UK Biobank. The beta-tubulin gene TUBB4B shows exome-wide significant association, with a rate of rare coding variants 2.7 times higher in left-handers than right-handers. The TUBB4B variants are mostly heterozygous missense changes, but include two frameshifts found only in left-handers. Other TUBB4B variants have been linked to sensorineural and/or ciliopathic disorders, but not the variants found here. Among genes previously implicated in autism or schizophrenia by exome screening, DSCAM and FOXP1 show evidence for rare coding variant association with left-handedness. The exome-wide heritability of left-handedness due to rare coding variants was 0.91%. This study reveals a role for rare, protein-altering variants in left-handedness, providing further evidence for the involvement of microtubules and disorder-relevant genes. -
Schreiner, M. S., Zettersten, M., Bergmann, C., Frank, M. C., Fritzsche, T., Gonzalez-Gomez, N., Hamlin, K., Kartushina, N., Kellier, D. J., Mani, N., Mayor, J., Saffran, J., Shukla, M., Silverstein, P., Soderstrom, M., & Lippold, M. (2024). Limited evidence of test-retest reliability in infant-directed speech preference in a large pre-registered infant experiment. Developmental Science, 27(6): e13551. doi:10.1111/desc.13551.
Abstract
Test-retest reliability—establishing that measurements remain consistent across multiple testing sessions—is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants’ preference for infant-directed speech over adult-directed speech in a large sample (N = 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring in participating infants for a second appointment retesting infants on their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants’ speech preference (overall r = 0.09, 95% CI [−0.06,0.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis revealed a numeric growth in test-retest reliability, it also considerably reduced the study’s effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants. -
Seidlmayer, E., Melnychuk, T., Galke, L., Kühnel, L., Tochtermann, K., Schultz, C., & Förstner, K. U. (2024). Research topic displacement and the lack of interdisciplinarity: Lessons from the scientific response to COVID-19. Scientometrics, 129, 5141-5179. doi:10.1007/s11192-024-05132-x.
Abstract
Based on a large-scale computational analysis of scholarly articles, this study investigates the dynamics of interdisciplinary research in the first year of the COVID-19 pandemic. The study also analyses reorientation effects away from other topics that received less attention due to the strong focus on the COVID-19 pandemic. The study aims to examine what can be learned from the (failing) interdisciplinarity of coronavirus research and its displacing effects for managing potential similar crises at the scientific level. To explore our research questions, we run several analyses using the COVID-19++ dataset, which contains scholarly publications, preprints from the field of life sciences, and their referenced literature, including publications from a broad scientific spectrum. Our results show the high impact and topic-wise adoption of research related to the COVID-19 crisis. Based on the similarity analysis of scientific topics, which is grounded in concept embedding learning on the graph-structured bibliographic data, we measured the degree of interdisciplinarity of COVID-19 research in 2020. Our findings reveal a low degree of research interdisciplinarity. The publications’ reference analysis indicates the major role of clinical medicine, but also the growing importance of psychiatry and social sciences in COVID-19 research. A social network analysis shows that an author’s high degree of centrality significantly increases his or her degree of interdisciplinarity. -
Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2024). Attention drives visual processing and audiovisual integration during multimodal communication. The Journal of Neuroscience, 44(10): e0870232023. doi:10.1523/JNEUROSCI.0870-23.2023.
Abstract
During communication in real-life settings, our brain often needs to integrate auditory and visual information, and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging (RIFT) and magnetoencephalography (MEG) to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing non-linear signal interactions, was enhanced in left frontotemporal and frontal regions. Focusing on LIFG (Left Inferior Frontal Gyrus), this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.Additional information
link to preprint -
Sekine, K., & Özyürek, A. (2024). Children benefit from gestures to understand degraded speech but to a lesser extent than adults. Frontiers in Psychology, 14: 1305562. doi:10.3389/fpsyg.2023.1305562.
Abstract
The present study investigated to what extent children, compared to adults, benefit from gestures to disambiguate degraded speech by manipulating speech signals and manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a comparable level of accuracy to that of adults in the degraded-speech-only condition. Furthermore, for adults, the enhancement of gestures was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures help children to disambiguate degraded speech, but children need more phonological information than adults to benefit from use of gestures. Children’s multimodal language integration needs to further develop to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard with environmental noise or through a face mask.Additional information
supplemental material -
Senft, G. (2024). Die IPrA, Helmut und ich. Wiener Linguistische Gazette, 97, 35-49.
Abstract
This contribution describes the beginning and the development of the professional and personal relationship between Helmut and the author which has been highly influenced by our joint membership in the International Pragmatics Association and by our activities in and for the IPrA. -
Serio, B., Hettwer, M. D., Wiersch, L., Bignardi, G., Sacher, J., Weis, S., Eickhoff, S. B., & Valk, S. L. (2024). Sex differences in functional cortical organization reflect differences in network topology rather than cortical morphometry. Nature Communications, 15: 7714. doi:10.1038/s41467-024-51942-1.
Abstract
Differences in brain size between the sexes are consistently reported. However, the consequences of this anatomical difference on sex differences in intrinsic brain function remain unclear. In the current study, we investigate whether sex differences in intrinsic cortical functional organization may be associated with differences in cortical morphometry, namely different measures of brain size, microstructure, and the geodesic distance of connectivity profiles. For this, we compute a low dimensional representation of functional cortical organization, the sensory-association axis, and identify widespread sex differences. Contrary to our expectations, sex differences in functional organization do not appear to be systematically associated with differences in total surface area, microstructural organization, or geodesic distance, despite these morphometric properties being per se associated with functional organization and differing between sexes. Instead, functional sex differences in the sensory-association axis are associated with differences in functional connectivity profiles and network topology. Collectively, our findings suggest that sex differences in functional cortical organization extend beyond sex differences in cortical morphometry.Additional information
41467_2024_51942_MOESM1_ESM.pdf -
Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.
Abstract
Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers. -
Shan, W., Zhang, Y., Zhao, J., Wu, S., Zhao, L., Ip, P., Tucker, J. D., & Jiang, F. (2024). Positive parent–child interactions moderate certain maltreatment effects on psychosocial well-being in 6-year-old children. Pediatric Research, 95, 802-808. doi:10.1038/s41390-023-02842-5.
Abstract
Background: Positive parental interactions may buffer maltreated children from poor psychosocial outcomes. The study aims to evaluate the associations between various types of maltreatment and psychosocial outcomes in early childhood, and examine the moderating effect of positive parent-child interactions on them.
Methods: Data were from a representative Chinese 6-year-old children sample (n = 17,088). Caregivers reported the history of child maltreatment perpetrated by any individuals, completed the Strengths and Difficulties Questionnaire as a proxy for psychosocial well-being, and reported the frequency of their interactions with children by the Chinese Parent-Child Interaction Scale.
Results: Physical abuse, emotional abuse, neglect, and sexual abuse were all associated with higher odds of psychosocial problems (aOR = 1.90 [95% CI: 1.57-2.29], aOR = 1.92 [95% CI: 1.75-2.10], aOR = 1.64 [95% CI: 1.17-2.30], aOR = 2.03 [95% CI: 1.30-3.17]). Positive parent-child interactions were associated with lower odds of psychosocial problems after accounting for different types of maltreatment. The moderating effect of frequent parent-child interactions was found only in the association between occasional only physical abuse and psychosocial outcomes (interaction term: aOR = 0.34, 95% CI: 0.15-0.77).
Conclusions: Maltreatment and positive parent-child interactions have impacts on psychosocial well-being in early childhood. Positive parent-child interactions could only buffer the adverse effect of occasional physical abuse on psychosocial outcomes. More frequent parent-child interactions may be an important intervention opportunity among some children.
Impact: It provides the first data on the prevalence of different single types and combinations of maltreatment in early childhood in Shanghai, China by drawing on a city-level population-representative sample. It adds to evidence that different forms and degrees of maltreatment were all associated with a higher risk of psychosocial problems in early childhood. Among them, sexual abuse posed the highest risk, followed by emotional abuse. It innovatively found that higher frequencies of parent-child interactions may provide buffering effects only to children who are exposed to occasional physical abuse. It provides a potential intervention opportunity, especially for physically abused children. -
Silva-Nasser, C. G. A. d. (2024). An analysis on the notion of perspective of the conceptual metaphor MULHER É PIRANHA in song lyrics. Antares Letras e Humanidades, 16(37).
Abstract
This article discusses the specific metaphor MULHER É PIRANHA, which is part of the general metaphor SER HUMANO É ANIMAL, and the role of perspective in its usage in Brazilian Portuguese song lyrics, within the framework of Conceptual Metaphor Theory. For this purpose, we discuss metaphor and perspective following Lakoff and Johnson (1980) and the cultural model of the Great Chain of Being (LAKOFF; TURNER, 1989). We next discuss animal metaphors and their use, highlighting the notion of perspective, especially in the treatment given to animal metaphors applied to women. We then construct a metaphoric mapping and illustrate its points with popular song lyrics. When the metaphor "piranha" is used to describe a man, it carries positive features, and we discuss the cognitive reasons for this within CMT. We argue that the conceptual metaphor MULHER É PIRANHA is modulated by perspective. -
Silverstein, P., Bergmann, C., & Syed, M. (2024). Open science and metascience in developmental psychology: Introduction to the special issue. Infant and Child Development, 33(1): e2495. doi:10.1002/icd.2495.
-
Slaats, S., Meyer, A. S., & Martin, A. E. (2024). Lexical surprisal shapes the time course of syntactic structure building. Neurobiology of Language, 5(4), 942-980. doi:10.1162/nol_a_00155.
Abstract
When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data of 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature) positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high- and low-surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high-surprisal values were associated with a delayed response to the syntactic feature by 150–190 ms. The delay was not affected by word duration, and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and highlight an importance for the role of time in this process.Additional information
supplementary data -
Slim, M. S., Kandel, M., Yacovone, A., & Snedeker, J. (2024). Webcams as windows to the mind? A direct comparison between in-lab and web-based eye-tracking methods. Open Mind: Discoveries in Cognitive Science, 8, 1369-1424. doi:10.1162/opmi_a_00171.
Abstract
There is a growing interest in the use of webcams to conduct eye-tracking experiments over the internet. We assessed the performance of two webcam-based eye-tracking techniques for behavioral research: manual annotation of webcam videos (manual eye-tracking) and the automated WebGazer eye-tracking algorithm. We compared these methods to a traditional infrared eye-tracker and assessed their performance in both lab and web-based settings. In both lab and web experiments, participants completed the same battery of five tasks, selected to trigger effects of various sizes: two visual fixation tasks and three visual world tasks testing real-time (psycholinguistic) processing effects. In the lab experiment, we simultaneously collected infrared eye-tracking, manual eye-tracking, and WebGazer data; in the web experiment, we simultaneously collected manual eye-tracking and WebGazer data. We found that the two webcam-based methods are suited to capture different types of eye-movement patterns. Manual eye-tracking, similar to infrared eye-tracking, detected both large and small effects. WebGazer, however, showed less accuracy in detecting short, subtle effects. There was no notable effect of setting for either method. We discuss the trade-offs researchers face when choosing eye-tracking methods and offer advice for conducting eye-tracking experiments over the internet.Additional information
Data, analysis code, and Supplementary Materials -
Slonimska, A. (2024). The role of iconicity and simultaneity in efficient communication in the visual modality: Evidence from LIS (Italian Sign Language) [Dissertation Abstract]. Sign Language & Linguistics, 27(1), 116-124. doi:10.1075/sll.00084.slo.
-
Soderstrom, M., Rocha-Hidalgo, J., Munoz, L. E., Bochynska, A., Werker, J. F., Skarabela, B., Seidl, A., Ryjova, Y., Rennels, J. L., Potter, C. E., Paulus, M., Ota, M., Olesen, N. M., Nave, K. M., Mayor, J., Martin, A., Machon, L. C., Lew-Williams, C., Ko, E.-S., Kim, H., Kartushina, N., Kammermeier, M., Jessop, A., Hay, J. F., Hannon, E. E., Hamlin, J. K., Havron, N., Gonzalez-Gomez, N., Gampe, A., Fritzsche, T., Frank, M. C., Durrant, S., Davies, C., Cashon, C., Byers-Heinlein, K., Black, A. K., Bergmann, C., Anderson, L., Alshakhori, M. K., Al-Hoorie, A. H., & Tsui, A. S. M. (2024). Testing the relationship between preferences for infant-directed speech and vocabulary development: A multi-lab study. Journal of Child Language. Advance online publication. doi:10.1017/S0305000924000254.
Abstract
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither preregistered analyses with North American and UK English, nor exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.Additional information
supplementary material -
Soheili-Nezhad, S., Ibáñez-Solé, O., Izeta, A., Hoeijmakers, J. H. J., & Stoeger, T. (2024). Time is ticking faster for long genes in aging. Trends in Genetics, 40(4), 299-312. doi:10.1016/j.tig.2024.01.009.
Abstract
Recent studies of aging organisms have identified a systematic phenomenon, characterized by a negative correlation between gene length and their expression in various cell types, species, and diseases. We term this phenomenon gene-length-dependent transcription decline (GLTD) and suggest that it may represent a bottleneck in the transcription machinery and thereby significantly contribute to aging as an etiological factor. We review potential links between GLTD and key aging processes such as DNA damage and explore their potential in identifying disease modification targets. Notably, in Alzheimer’s disease, GLTD spotlights extremely long synaptic genes at chromosomal fragile sites (CFSs) and their vulnerability to postmitotic DNA damage. We suggest that GLTD is an integral element of biological aging. -
Soheili-Nezhad, S., Schijven, D., Mars, R. B., Fisher, S. E., & Francks, C. (2024). Distinct impact modes of polygenic disposition to dyslexia in the adult brain. Science Advances, 10(51): eadq2754. doi:10.1126/sciadv.adq2754.
Abstract
Dyslexia is a common condition that impacts reading ability. Identifying affected brain networks has been hampered by limited sample sizes of imaging case-control studies. We focused instead on brain structural correlates of genetic disposition to dyslexia in large-scale population data. In over 30,000 adults (UK Biobank), higher polygenic disposition to dyslexia was associated with lower head and brain size, and especially reduced volume and/or altered fiber density in networks involved in motor control, language and vision. However, individual genetic variants disposing to dyslexia often had quite distinct patterns of association with brain structural features. Independent component analysis applied to brain-wide association maps for thousands of dyslexia-disposing genetic variants revealed multiple impact modes on the brain, that corresponded to anatomically distinct areas with their own genomic profiles of association. Polygenic scores for dyslexia-related cognitive and educational measures, as well as attention-deficit/hyperactivity disorder, showed similarities to dyslexia polygenic disposition in terms of brain-wide associations, with microstructure of the internal capsule consistently implicated. In contrast, lower volume of the primary motor cortex was only associated with higher dyslexia polygenic disposition among all traits. These findings robustly reveal heterogeneous neurobiological aspects of dyslexia genetic disposition, and whether they are shared or unique with respect to other genetically correlated traits.Additional information
link to preprint -
Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.
Abstract
Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves and a morality that focuses on the positioning of actors as they keep their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning. -
Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.
Abstract
When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself have to be named. -
Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.
Abstract
This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.Additional information
appendix 1-3 -
Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.
Abstract
From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation. -
Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.
Abstract
Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations. -
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.
Abstract
During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation. -
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.
Abstract
Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation. -
Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.
Abstract
The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction. -
Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.
Abstract
Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity. -
Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.
Abstract
Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension. -
Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.
Abstract
Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production. -
Titus, A., & Peeters, D. (2024). Multilingualism at the market: A pre-registered immersive virtual reality study of bilingual language switching. Journal of Cognition, 7(1), 24-35. doi:10.5334/joc.359.
Abstract
Bilinguals, by definition, are capable of expressing themselves in more than one language. But which cognitive mechanisms allow them to switch from one language to another? Previous experimental research using the cued language-switching paradigm supports theoretical models that assume that both transient, reactive and sustained, proactive inhibitory mechanisms underlie bilinguals’ capacity to flexibly and efficiently control which language they use. Here we used immersive virtual reality to test the extent to which these inhibitory mechanisms may be active when unbalanced Dutch-English bilinguals i) produce full sentences rather than individual words, ii) to a life-size addressee rather than only into a microphone, iii) using a message that is relevant to that addressee rather than communicatively irrelevant, iv) in a rich visual environment rather than in front of a computer screen. We observed a reversed language dominance paired with switch costs for the L2 but not for the L1 when participants were stand owners in a virtual marketplace and informed their monolingual customers in full sentences about the price of their fruits and vegetables. These findings strongly suggest that the subtle balance between the application of reactive and proactive inhibitory mechanisms that support bilingual language control may be different in the everyday life of a bilingual compared to in the (traditional) psycholinguistic laboratory. -
Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.
Abstract
Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language. -
Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review, 31, 1723-1734. doi:10.3758/s13423-024-02452-0.
Abstract
The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns were similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns). -
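The information measure in the study above is surprisal, the negative log probability of a word. As a minimal illustration, here is a toy unigram model in Python; the counts and words are made up for illustration (the study itself used corpus-based language models), so none of this reflects the CallHome data:

```python
# Toy illustration of surprisal: -log2 P(word). A unigram model stands in
# for the corpus language models used in the study; counts are invented.
import math
from collections import Counter

def surprisal(word, counts, total):
    """Surprisal in bits of a word under a unigram model: -log2 P(word)."""
    return -math.log2(counts[word] / total)

counts = Counter({"the": 50, "cat": 5, "aardvark": 1})
total = sum(counts.values())  # 56 tokens in the toy corpus

# Frequent words carry little information; rare words carry much more.
print(round(surprisal("the", counts, total), 2))
print(round(surprisal("aardvark", counts, total), 2))
```

Front- versus back-loading then amounts to comparing mean surprisal over the first and last halves of each utterance.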
Ullman, M. T., Bulut, T., & Walenski, M. (2024). Hijacking limitations of working memory load to test for composition in language. Cognition, 251: 105875. doi:10.1016/j.cognition.2024.105875.
Abstract
Although language depends on storage and composition, just what is stored or (de)composed remains unclear. We leveraged working memory load limitations to test for composition, hypothesizing that decomposed forms should particularly tax working memory. We focused on a well-studied paradigm, English inflectional morphology. We predicted that (compositional) regulars should be harder to maintain in working memory than (non-compositional) irregulars, using a 3-back production task. Frequency, phonology, orthography, and other potentially confounding factors were controlled for. Compared to irregulars, regulars and their accompanying −s/−ing-affixed filler items yielded more errors. Underscoring the decomposition of only regulars, regulars yielded more bare-stem (e.g., walk) and stem affixation errors (walks/walking) than irregulars, whereas irregulars yielded more past-tense-form affixation errors (broughts/tolded). In line with previous evidence that regulars can be stored under certain conditions, the regular-irregular difference held specifically for phonologically consistent (not inconsistent) regulars, in particular for both low and high frequency consistent regulars in males, but only for low frequency consistent regulars in females. Sensitivity analyses suggested the findings were robust. The study further elucidates the computation of inflected forms, and introduces a simple diagnostic for linguistic composition. -
Ünal, E., Wilson, F., Trueswell, J., & Papafragou, A. (2024). Asymmetries in encoding event roles: Evidence from language and cognition. Cognition, 250: 105868. doi:10.1016/j.cognition.2024.105868.
Abstract
It has long been hypothesized that the linguistic structure of events, including event participants and their relative prominence, draws on the non-linguistic nature of events and the roles that these events license. However, the precise relation between the prominence of event participants in language and cognition has not been tested experimentally in a systematic way. Here we address this gap. In four experiments, we investigate the relative prominence of (animate) Agents, Patients, Goals and Instruments in the linguistic encoding of complex events and the prominence of these event roles in cognition as measured by visual search and change blindness tasks. The relative prominence of these event roles was largely similar—though not identical—across linguistic and non-linguistic measures. Across linguistic and non-linguistic tasks, Patients were more salient than Goals, which were more salient than Instruments. (Animate) Agents were more salient than Patients in linguistic descriptions and visual search; however, this asymmetrical pattern did not emerge in change detection. Overall, our results reveal homologies between the linguistic and non-linguistic prominence of individual event participants, thereby lending support to the claim that the linguistic structure of events builds on underlying conceptual event representations. We discuss implications of these findings for linguistic theory and theories of event cognition. -
Ünal, E., Mamus, E., & Özyürek, A. (2024). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition, 16(4), 785-804. doi:10.1017/langcog.2023.61.
Abstract
How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner. -
Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.
Abstract
thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study. -
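One of the rhythmic measures the abstract above mentions, interval ratios, can be sketched in a few lines of plain Python. This is a generic illustration under the common definition r_k = IOI_k / (IOI_k + IOI_{k+1}) over inter-onset intervals (IOIs); it is not thebeat's actual API, and the function name is hypothetical:

```python
# Hypothetical sketch (NOT thebeat's API) of interval ratios:
# r_k = IOI_k / (IOI_k + IOI_{k+1}), where IOI_k is the k-th
# inter-onset interval, e.g., in milliseconds.

def interval_ratios(iois):
    """Ratio of each inter-onset interval to the sum of itself and its successor."""
    if len(iois) < 2:
        raise ValueError("need at least two inter-onset intervals")
    return [iois[k] / (iois[k] + iois[k + 1]) for k in range(len(iois) - 1)]

# An isochronous sequence (equal IOIs) yields ratios of exactly 0.5;
# deviations from 0.5 quantify local tempo change.
print(interval_ratios([500, 500, 500]))  # [0.5, 0.5]
```

thebeat itself packages such measures, alongside stimulus generation and plotting, in tested, reusable form.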
van der Burght, C. L., & Meyer, A. S. (2024). Semantic interference across word classes during lexical selection in Dutch. Cognition, 254: 105999. doi:10.1016/j.cognition.2024.105999.
Abstract
Using a novel version of the picture-word interference paradigm, Momma, Buffinton, Slevc, and Phillips (2020, Cognition) showed that word class constrained which words competed with each other for lexical selection. Specifically, in speakers of American English, action verbs (as in she’s singing) competed with semantically related action verbs (as in she’s whistling), but not with semantically related action nouns (as in her whistling). Similarly, action nouns only competed with semantically related action nouns, but not with action verbs. As this pattern has important implications for models of lexical access and sentence generation, we conducted a conceptual replication in Dutch. We found a semantic interference effect, however, contrary to the original study, no evidence for a word class constraint. Together, the results of the two studies argue for graded rather than categorical word class constraints on lexical selection. -
Verdonschot, R. G., Van der Wal, J., Lewis, A. G., Knudsen, B., Von Grebmer zu Wolfsthurn, S., Schiller, N. O., & Hagoort, P. (2024). Information structure in Makhuwa: Electrophysiological evidence for a universal processing account. Proceedings of the National Academy of Sciences of the United States of America, 121(30): e2315438121. doi:10.1073/pnas.2315438121.
Abstract
There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can up-regulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. In order to test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as by means of intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. The participants were presented with sentences that consisted of either a semantically anomalous constituent or a semantically nonanomalous constituent. Moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: Focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to result in an upregulation of processing of information that is in focus. -
Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.
Abstract
Background
The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).
Methods
We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.
Results
Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.
Conclusions
The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits. -
Wagner, M. A., Broersma, M., McQueen, J. M., Van Hout, R., & Lemhöfer, K. (2024). The case for a quantitative approach to the study of nonnative accent features. Language and Speech. Advance online publication. doi:10.1177/00238309241256653.
Abstract
Research with nonnative speech spans many different linguistic branches and topics. Most studies include one or a few well-known features of a particular accent. However, due to a lack of empirical studies, little is known about how common these features are among nonnative speakers or how uncommon they are among native speakers. Moreover, it remains to be seen whether findings from such studies generalize to lesser-known features. Here, we demonstrate a quantitative approach to study nonnative accent features using Dutch-accented English as an example. By analyzing the phonetic distances between transcriptions of speech samples, this approach can identify the features that best distinguish nonnative from native speech. In addition, we describe a method to test hypotheses about accent features by checking whether the prevalence of the features overall varies between native and nonnative speakers. Furthermore, we include English speakers from the United States and United Kingdom and native Dutch speakers from Belgium and The Netherlands to address the issue of regional accent variability in both the native and target language. We discuss the results concerning three observed features. Overall, the results provide empirical support for some well-known features of Dutch-accented English, but suggest that others may be infrequent among nonnatives or in fact frequent among natives. In addition, the findings reveal potentially new accent features, and factors that may modulate the expression of known features. Our study demonstrates a fruitful approach to study nonnative accent features that has the potential to expand our understanding of the phenomenon of accent. -
Wang, X., Jahagirdar, S., Bakker, W., Lute, C., Kemp, B., Knegsel, A. v., & Saccenti, E. (2024). Discrimination of lipogenic or glucogenic diet effects in early-lactation dairy cows using plasma metabolite abundances and ratios in combination with machine learning. Metabolites, 14(4): 230. doi:10.3390/metabo14040230.
Abstract
During early lactation, dairy cows have a negative energy balance since their energy demands exceed their energy intake. In this study, we aimed to investigate the association between diet and plasma metabolomic profiles and how these relate to energy imbalance over the course of the early-lactation stage. Holstein-Friesian cows were randomly assigned to a glucogenic (n = 15) or lipogenic (n = 15) diet in early lactation. Blood was collected in week 2 and week 4 after calving. Plasma metabolite profiles were detected using liquid chromatography–mass spectrometry (LC-MS), and a total of 39 metabolites were identified. Two plasma metabolomic profiles were available each week for each cow. Metabolite abundances and metabolite ratios were used for analysis with the XGBoost algorithm to discriminate between diet treatments and lactation weeks. Using metabolite ratios resulted in better discrimination performance than using metabolite abundances when assigning cows to a lipogenic or a glucogenic diet. The discrimination performance for lipogenic versus glucogenic diet effects improved from 0.606 to 0.753 in week 2 and from 0.696 to 0.842 in week 4 (as measured by the area under the curve, AUC) when metabolite abundance ratios were used instead of abundances. The top discriminating ratios for diet were the ratio of arginine to tyrosine in week 2 and the ratio of aspartic acid to valine in week 4. For cows fed the lipogenic diet, choline and the ratio of creatinine to tryptophan were the top features discriminating week 2 from week 4. For cows fed the glucogenic diet, methionine and the ratio of 4-hydroxyproline to choline were the top features discriminating week 2 from week 4. This study shows the added value of using metabolite abundance ratios to discriminate between lipogenic and glucogenic diets and between lactation weeks in early-lactation cows when using metabolomics data. 
The application of this research will help to accurately regulate the nutrition of lactating dairy cows and promote sustainable agricultural development. -
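The ratio-based feature engineering described in the abstract above can be sketched as expanding a profile of metabolite abundances into all pairwise abundance ratios before classification. The sketch below is hypothetical: the metabolite names and values are made up, the function name is not from the study, and the XGBoost classification step itself is omitted:

```python
# Hypothetical sketch of expanding metabolite abundances into pairwise
# abundance ratios, the feature set fed to the classifier in the study.
# Metabolite names and values here are illustrative only.
from itertools import combinations

def ratio_features(abundances):
    """Map {metabolite: abundance} to {"a/b": ratio} for every metabolite pair."""
    feats = {}
    for a, b in combinations(sorted(abundances), 2):
        feats[f"{a}/{b}"] = abundances[a] / abundances[b]
    return feats

profile = {"arginine": 2.0, "tyrosine": 0.5, "valine": 1.0}
print(ratio_features(profile))
# {'arginine/tyrosine': 4.0, 'arginine/valine': 2.0, 'tyrosine/valine': 0.5}
```

With the study's 39 metabolites, this expansion yields 39 × 38 / 2 = 741 candidate ratio features.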
Wang, M.-Y., Korbmacher, M., Eikeland, R., Craven, A. R., & Specht, K. (2024). The intra‐individual reliability of 1H‐MRS measurement in the anterior cingulate cortex across 1 year. Human Brain Mapping, 45(1): e26531. doi:10.1002/hbm.26531.
Abstract
Magnetic resonance spectroscopy (MRS) is the primary method that can measure the levels of metabolites in the brain in vivo. To achieve its potential in clinical usage, the reliability of the measurement requires further articulation. Although there are many studies that investigate the reliability of gamma-aminobutyric acid (GABA), comparatively few studies have investigated the reliability of other brain metabolites, such as glutamate (Glu), N-acetyl-aspartate (NAA), creatine (Cr), phosphocreatine (PCr), or myo-inositol (mI), which all play a significant role in brain development and functions. In addition, previous studies, which predominantly used only two measurements (two data points), failed to provide the details of time effects (e.g., time-of-day) on MRS measurement within subjects. Therefore, in this study, MRS data located in the anterior cingulate cortex (ACC) were repeatedly recorded across 1 year, leading to at least 25 sessions for each subject, with the aim of exploring the variability of other metabolites using the coefficient of variation (CV) as an index; the smaller the CV, the more reliable the measurements. We found that the metabolites of NAA, tNAA, and tCr showed the smallest CVs (between 1.43% and 4.90%), and the metabolites of Glu, Glx, mI, and tCho showed modest CVs (between 4.26% and 7.89%). Furthermore, we found that referencing concentrations as a ratio to water results in smaller CVs than referencing them as a ratio to tCr. In addition, we did not find any time-of-day effect on the MRS measurements. Collectively, the results of this study indicate that the MRS measurement is reasonably reliable in quantifying the levels of metabolites. -
Wang, J., Schiller, N. O., & Verdonschot, R. G. (2024). Morphological encoding in language production: Electrophysiological evidence from Mandarin Chinese compound words. PLOS ONE, 19(10): e0310816. doi:10.1371/journal.pone.0310816.
Abstract
This study investigates the role of morphology during speech planning in Mandarin Chinese. In a long-lag priming experiment, thirty-two Mandarin Chinese native speakers were asked to name target pictures (e.g., “山” /shan1/ "mountain"). The design involved pictures referring to morpheme-related compound words (e.g., “山羊” /shan1yang2/ "goat") sharing a morpheme with the first (e.g., “山” /shan1/ "mountain") or the second position of the targets (e.g., 脑 /nao3/ “brain” with the prime 电脑 /dian4nao3/ “computer”), as well as unrelated control items. Behavioral and electrophysiological data were collected. Interestingly, the behavioral results went against earlier findings in Indo-European languages, showing that target picture naming was not facilitated by morphologically related primes. This suggests no morphological priming for individual constituents in producing Mandarin Chinese disyllabic compound words. However, in the ERP analyses, targets in the morpheme-related word condition did elicit a reduced N400 compared with targets in the morpheme-unrelated condition for the first position overlap but not for the second, suggesting automatic activation of the first individual constituent in noun compound production. Implications of these findings are discussed. -
Wang, J., Schiller, N. O., & Verdonschot, R. G. (2024). Word and morpheme frequency effects in naming Mandarin Chinese compounds: More than a replication. Brain and Language, 259: 105496. doi:10.1016/j.bandl.2024.105496.
Abstract
The question of whether compound words are stored in our mental lexicon in a decomposed or full-listing way prompted Janssen and colleagues (2008) to investigate the representation of compounds using word and morpheme frequency manipulations. Our study replicated theirs using a new set of stimuli from a spoken corpus and incorporating EEG data for a more detailed investigation. In the current study, while ERP analyses revealed no word frequency or morpheme frequency effects across conditions, behavioral outcomes indicated that Mandarin compounds are not sensitive to word frequency. Instead, the response times highlighted a morpheme frequency effect in naming Mandarin compounds, which contrasts with the findings of Janssen and colleagues. These findings challenge the full-listing model and instead support the decompositional model. -
Weissbart, H., & Martin, A. E. (2024). The structure and statistics of language jointly shape cross-frequency neural dynamics during spoken language comprehension. Nature Communications, 15: 8850. doi:10.1038/s41467-024-53128-1.
Abstract
Humans excel at extracting structurally-determined meaning from speech despite inherent physical variability. This study explores the brain’s ability to predict and understand spoken language robustly. It investigates the relationship between structural and statistical language knowledge in brain dynamics, focusing on phase and amplitude modulation. Using syntactic features from constituent hierarchies and surface statistics from a transformer model as predictors of forward encoding models, we reconstructed cross-frequency neural dynamics from MEG data during audiobook listening. Our findings challenge a strict separation of linguistic structure and statistics in the brain, with both aiding neural signal reconstruction. Syntactic features have a more temporally spread impact, and both word entropy and the number of closing syntactic constituents are linked to the phase-amplitude coupling of neural dynamics, implying a role in temporal prediction and cortical oscillation alignment during speech processing. Our results indicate that structured and statistical information jointly shape neural dynamics during spoken language comprehension and suggest an integration process via a cross-frequency coupling mechanism. -
Wesseldijk, L. W., Henechowicz, T. L., Baker, D. J., Bignardi, G., Karlsson, R., Gordon, R. L., Mosing, M. A., Ullén, F., & Fisher, S. E. (2024). Notes from Beethoven’s genome. Current Biology, 34(6), R233-R234. doi:10.1016/j.cub.2024.01.025.
Abstract
Rapid advances over the last decade in DNA sequencing and statistical genetics enable us to investigate the genomic makeup of individuals throughout history. In a recent notable study, Begg et al. used Ludwig van Beethoven’s hair strands for genome sequencing and explored genetic predispositions for some of his documented medical issues. Given that it was arguably Beethoven’s skills as a musician and composer that made him an iconic figure in Western culture, we here extend the approach and apply it to musicality. We use this as an example to illustrate the broader challenges of individual-level genetic predictions.
Additional information
supplemental information -
Winter, B., Lupyan, G., Perry, L. K., Dingemanse, M., & Perlman, M. (2024). Iconicity ratings for 14,000+ English words. Behavior Research Methods, 56, 1640-1655. doi:10.3758/s13428-023-02112-6.
Abstract
Iconic words and signs are characterized by a perceived resemblance between aspects of their form and aspects of their meaning. For example, in English, iconic words include peep and crash, which mimic the sounds they denote, and wiggle and zigzag, which mimic motion. As a semiotic property of words and signs, iconicity has been demonstrated to play a role in word learning, language processing, and language evolution. This paper presents the results of a large-scale norming study for more than 14,000 English words conducted with over 1400 American English speakers. We demonstrate the utility of these ratings by replicating a number of existing findings showing that iconicity ratings are related to age of acquisition, sensory modality, semantic neighborhood density, structural markedness, and playfulness. We discuss possible use cases and limitations of the rating dataset, which is made publicly available. -
Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Domain-general and language-specific contributions to speech production in a second language: An fMRI study using functional localizers. Scientific Reports, 14: 57. doi:10.1038/s41598-023-49375-9.
Abstract
For bilinguals, speaking in a second language (L2) compared to the native language (L1) is usually more difficult. In this study we asked whether the difficulty in L2 production reflects increased demands imposed on domain-general or core language mechanisms. We compared the brain response to speech production in L1 and L2 within two functionally-defined networks in the brain: the Multiple Demand (MD) network and the language network. We found that speech production in L2 was linked to a widespread increase of brain activity in the domain-general MD network. The language network did not show similarly robust differences in processing speech in the two languages; however, we found an increased response to L2 production in the language-specific portion of the left inferior frontal gyrus (IFG). To further explore our results, we looked at the domain-general and language-specific response within the brain structures postulated to form a Bilingual Language Control (BLC) network. Within this network, we found a robust increase in response to L2 in the domain-general, but also in some language-specific voxels, including in the left IFG. Our findings show that L2 production strongly engages domain-general mechanisms, but only affects language-sensitive portions of the left IFG. These results put constraints on the current model of bilingual language control by precisely disentangling the domain-general and language-specific contributions to the difficulty in speech production in L2.
Additional information
supplementary materials -
Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Tracking components of bilingual language control in speech production: An fMRI study using functional localizers. Neurobiology of Language, 5(2), 315-340. doi:10.1162/nol_a_00128.
Abstract
When bilingual speakers switch back to speaking in their native language (L1) after having used their second language (L2), they often experience difficulty in retrieving words in their L1. This phenomenon is referred to as the L2 after-effect. We used the L2 after-effect as a lens to explore the neural bases of bilingual language control mechanisms. Our goal was twofold: first, to explore whether bilingual language control draws on domain-general or language-specific mechanisms; second, to investigate the precise mechanism(s) that drive the L2 after-effect. We used a precision fMRI approach based on functional localizers to measure the extent to which the brain activity that reflects the L2 after-effect overlaps with the language network (Fedorenko et al., 2010) and the domain-general multiple demand network (Duncan, 2010), as well as three task-specific networks that tap into interference resolution, lexical retrieval, and articulation. Forty-two Polish–English bilinguals participated in the study. Our results show that the L2 after-effect reflects increased engagement of domain-general but not language-specific resources. Furthermore, contrary to previously proposed interpretations, we did not find evidence that the effect reflects increased difficulty related to lexical access, articulation, and the resolution of lexical interference. We propose that difficulty of speech production in the picture naming paradigm—manifested as the L2 after-effect—reflects interference at a nonlinguistic level of task schemas or a general increase of cognitive control engagement during speech production in L1 after L2.
Additional information
supplementary materials -
Wong, M. M. K., Sha, Z., Lütje, L., Kong, X., Van Heukelum, S., Van de Berg, W. D. J., Jonkman, L. E., Fisher, S. E., & Francks, C. (2024). The neocortical infrastructure for language involves region-specific patterns of laminar gene expression. Proceedings of the National Academy of Sciences of the United States of America, 121(34): e2401687121. doi:10.1073/pnas.2401687121.
Abstract
The language network of the human brain has core components in the inferior frontal cortex and superior/middle temporal cortex, with left-hemisphere dominance in most people. Functional specialization and interconnectivity of these neocortical regions are likely to be reflected in their molecular and cellular profiles. Excitatory connections between cortical regions arise and innervate according to layer-specific patterns. Here we generated a new gene expression dataset from human postmortem cortical tissue samples from core language network regions, using spatial transcriptomics to discriminate gene expression across cortical layers. Integration of these data with existing single-cell expression data identified 56 genes that showed differences in laminar expression profiles between frontal and temporal language cortex together with upregulation in layer II/III and/or layer V/VI excitatory neurons. Based on data from large-scale genome-wide screening in the population, DNA variants within these 56 genes showed set-level associations with inter-individual variation in structural connectivity between left-hemisphere frontal and temporal language cortex, and with predisposition to dyslexia. The axon guidance genes SLIT1 and SLIT2 were consistently implicated. These findings identify region-specific patterns of laminar gene expression as a feature of the brain’s language network. -
Yang, J. (2024). Rethinking tokenization: Crafting better tokenizers for large language models. International Journal of Chinese Linguistics, 11(1), 94-109. doi:10.1075/ijchl.00023.yan.
Abstract
Tokenization significantly influences the performance of language models (LMs). This paper traces the evolution of tokenizers from word-level to subword-level, analyzing how they balance tokens and types to enhance model adaptability while controlling complexity. Despite subword tokenizers like Byte Pair Encoding (BPE) overcoming many word-tokenizer limitations, they encounter difficulties in handling non-Latin languages and depend heavily on extensive training data and computational resources to grasp the nuances of multiword expressions (MWEs). This article argues that tokenizers, more than mere technical tools, should draw inspiration from cognitive science research on human language processing. The study then introduces the “Principle of Least Effort” from cognitive science, the idea that humans naturally seek to reduce cognitive effort, and discusses the benefits of this principle for tokenizer development. Based on this principle, the paper proposes the Less-is-Better (LiB) model as a new approach to LLM tokenizers. The LiB model can autonomously learn an integrated vocabulary consisting of subwords, words, and MWEs, which effectively reduces both the number of tokens and the number of types. Comparative evaluations show that the LiB tokenizer outperforms existing word and BPE tokenizers, presenting an innovative method for tokenizer development and hinting at the possibility that future cognitive science-based tokenizers will be more efficient. -
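For readers unfamiliar with the BPE baseline discussed in the abstract above: Byte Pair Encoding builds its subword vocabulary by repeatedly merging the most frequent adjacent symbol pair. A minimal sketch (the toy corpus and helper names are illustrative, not from the paper or from any particular BPE implementation):

```python
from collections import Counter

def most_frequent_pair(corpus):
    """Count adjacent symbol pairs across a {space-separated word: freq} corpus."""
    pairs = Counter()
    for word, freq in corpus.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(corpus, pair):
    """Merge every occurrence of `pair` into one symbol.

    Naive str.replace merging; assumes no ambiguous symbol overlaps,
    which holds for this toy corpus.
    """
    merged, joined = " ".join(pair), "".join(pair)
    return {word.replace(merged, joined): freq for word, freq in corpus.items()}

# Toy corpus: words as space-separated characters, with frequencies
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
for _ in range(3):
    corpus = merge_pair(corpus, most_frequent_pair(corpus))
# After three merges, the frequent suffix "est" has become a single symbol
```

The LiB model discussed above differs from this procedure in that it also learns whole words and MWEs as vocabulary units rather than only bottom-up character merges.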
Zettersten, M., Cox, C., Bergmann, C., Tsui, A. S. M., Soderstrom, M., Mayor, J., Lundwall, R. A., Lewis, M., Kosie, J. E., Kartushina, N., Fusaroli, R., Frank, M. C., Byers-Heinlein, K., Black, A. K., & Mathur, M. B. (2024). Evidence for infant-directed speech preference is consistent across large-scale, multi-site replication and meta-analysis. Open Mind, 8, 439-461. doi:10.1162/opmi_a_00134.
Abstract
There is substantial evidence that infants prefer infant-directed speech (IDS) to adult-directed speech (ADS). The strongest evidence for this claim has come from two large-scale investigations: i) a community-augmented meta-analysis of published behavioral studies and ii) a large-scale multi-lab replication study. In this paper, we aim to improve our understanding of the IDS preference and its boundary conditions by combining and comparing these two data sources across key population and design characteristics of the underlying studies. Our analyses reveal that both the meta-analysis and multi-lab replication show moderate effect sizes (d ≈ 0.35 for each estimate) and that both of these effects persist when relevant study-level moderators are added to the models (i.e., experimental methods, infant ages, and native languages). However, while the overall effect size estimates were similar, the two sources diverged in the effects of key moderators: both infant age and experimental method predicted IDS preference in the multi-lab replication study, but showed no effect in the meta-analysis. These results demonstrate that the IDS preference generalizes across a variety of experimental conditions and sampling characteristics, while simultaneously identifying key differences in the empirical picture offered by each source individually and pinpointing areas where substantial uncertainty remains about the influence of theoretically central moderators on IDS preference. Overall, our results show how meta-analyses and multi-lab replications can be used in tandem to understand the robustness and generalizability of developmental phenomena. -
He, J., & Zhang, Q. (2024). Direct retrieval of orthographic representations in Chinese handwritten production: Evidence from a dynamic causal modeling study. Journal of Cognitive Neuroscience, 36(9), 1937-1962. doi:10.1162/jocn_a_02176.
Abstract
The present study identified an optimal model representing the relationship between orthography and phonology in Chinese handwritten production using dynamic causal modeling, and further explored how this model was modulated by word frequency and syllable frequency. Each model contained five volumes of interest in the left hemisphere (angular gyrus [AG], inferior frontal gyrus [IFG], middle frontal gyrus [MFG], superior frontal gyrus [SFG], and supramarginal gyrus [SMG]), with the IFG as the driven input area. Results showed the superiority of a model in which both the MFG and the AG connected with the IFG, supporting the orthography autonomy hypothesis. Word frequency modulated the AG → SFG connection (information flow from the orthographic lexicon to the orthographic buffer), and syllable frequency affected the IFG → MFG connection (information transmission from the semantic system to the phonological lexicon). This study thus provides new insights into the connectivity architecture of neural substrates involved in writing. -
Zhao, J., Martin, A. E., & Coopmans, C. W. (2024). Structural and sequential regularities modulate phrase-rate neural tracking. Scientific Reports, 14: 16603. doi:10.1038/s41598-024-67153-z.
Abstract
Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech—so-called phrase-rate neural tracking. Current debate centers around whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of EEG responses. As expected, neural tracking at the phrase rate was stronger in grammatical sequences than in non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation in guiding rhythmic computation.
Additional information
full stimulus list, the raw EEG data, and the analysis scripts -
Zhou, H., Van der Ham, S., De Boer, B., Bogaerts, L., & Raviv, L. (2024). Modality and stimulus effects on distributional statistical learning: Sound vs. sight, time vs. space. Journal of Memory and Language, 138: 104531. doi:10.1016/j.jml.2024.104531.
Abstract
Statistical learning (SL) is postulated to play an important role in the process of language acquisition as well as in other cognitive functions. It was found to enable learning of various types of statistical patterns across different sensory modalities. However, few studies have distinguished distributional SL (DSL) from sequential and spatial SL, or examined DSL across modalities using comparable tasks. Considering the relevance of such findings to the nature of SL, the current study investigated the modality- and stimulus-specificity of DSL. Using a within-subject design we compared DSL performance in auditory and visual modalities. For each sensory modality, two stimulus types were used: linguistic versus non-linguistic auditory stimuli and temporal versus spatial visual stimuli. In each condition, participants were exposed to stimuli that varied in their length as they were drawn from two categories (short versus long). DSL was assessed using a categorization task and a production task. Results showed that learners’ performance was only correlated for tasks in the same sensory modality. Moreover, participants were better at categorizing the temporal signals in the auditory conditions than in the visual condition, where in turn an advantage of the spatial condition was observed. In the production task participants exaggerated signal length more for linguistic signals than non-linguistic signals. Together, these findings suggest that DSL is modality- and stimulus-sensitive.
Additional information
link to preprint -
Zioga, I., Zhou, Y. J., Weissbart, H., Martin, A. E., & Haegens, S. (2024). Alpha and beta oscillations differentially support word production in a rule-switching task. eNeuro, 11(4): ENEURO.0312-23.2024. doi:10.1523/ENEURO.0312-23.2024.
Abstract
Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word “tuna”, an exemplar from the same category—“seafood”—would be “shrimp”, and a feature would be “pink”). A cue indicated the task rule—exemplar or feature—either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working memory delay was lower for retro-cue compared with that for pre-cue in the left hemispheric language-related regions. Critically, alpha power negatively correlated with reaction times, suggestive of alpha facilitating task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in the right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the role of alpha and beta oscillations from perceptual to more complex linguistic processes and offers a novel task to investigate links between rule-switching, working memory, and word production. -
Abbondanza, F., Dale, P. S., Wang, C. A., Hayiou‐Thomas, M. E., Toseeb, U., Koomar, T. S., Wigg, K. G., Feng, Y., Price, K. M., Kerr, E. N., Guger, S. L., Lovett, M. W., Strug, L. J., Van Bergen, E., Dolan, C. V., Tomblin, J. B., Moll, K., Schulte‐Körne, G., Neuhoff, N., Warnke, A., Fisher, S. E., Barr, C. L., Michaelson, J. J., Boomsma, D. I., Snowling, M. J., Hulme, C., Whitehouse, A. J. O., Pennell, C. E., Newbury, D. F., Stein, J., Talcott, J. B., Bishop, D. V. M., & Paracchini, S. (2023). Language and reading impairments are associated with increased prevalence of non‐right‐handedness. Child Development, 94(4), 970-984. doi:10.1111/cdev.13914.
Abstract
Handedness has been studied for association with language-related disorders because of its link with language hemispheric dominance. No clear pattern has emerged, possibly because of small samples, publication bias, and heterogeneous criteria across studies. Non-right-handedness (NRH) frequency was assessed in N = 2503 cases with reading and/or language impairment and N = 4316 sex-matched controls identified from 10 distinct cohorts (age range 6–19 years old; European ethnicity) using a priori set criteria. A meta-analysis (N cases = 1994) showed an elevated NRH % in individuals with language/reading impairment compared with controls (OR = 1.21, CI = 1.06–1.39, p = .01). The association between reading/language impairments and NRH could result from shared pathways underlying brain lateralization, handedness, and cognitive functions.
Additional information
supplementary information -
Alhama, R. G., Rowland, C. F., & Kidd, E. (2023). How does linguistic context influence word learning? Journal of Child Language, 50(6), 1374-1393. doi:10.1017/S0305000923000302.
Abstract
While there are well-known demonstrations that children can use distributional information to acquire multiple components of language, the underpinnings of these achievements are unclear. In the current paper, we investigate the potential pre-requisites for a distributional learning model that can explain how children learn their first words. We review existing literature and then present the results of a series of computational simulations with Vector Space Models, a type of distributional semantic model used in Computational Linguistics, which we evaluate against vocabulary acquisition data from children. We focus on nouns and verbs, and we find that: (i) a model with flexibility to adjust for the frequency of events provides a better fit to the human data, (ii) the influence of context words is very local, especially for nouns, and (iii) words that share more contexts with other words are harder to learn. -
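Vector Space Models of the kind evaluated in the abstract above represent each word by its co-occurrence counts with nearby context words, so that words sharing contexts end up close together under cosine similarity. A minimal, hypothetical sketch (the toy sentences and function names are illustrative, not the authors' implementation):

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by counts of words within a +/-window context."""
    vectors = {}
    for tokens in sentences:
        for i, word in enumerate(tokens):
            context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vectors.setdefault(word, Counter()).update(context)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda x: sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v))

sents = [["the", "dog", "chased", "the", "cat"],
         ["the", "dog", "bit", "the", "cat"],
         ["the", "boy", "read", "the", "book"]]
vecs = cooccurrence_vectors(sents)
```

In this toy space, "chased" and "bit" come out more similar than "chased" and "read" because they share more local contexts, which echoes the review's observation that the influence of context words is very local.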
Anichini, M., de Reus, K., Hersh, T. A., Valente, D., Salazar-Casals, A., Berry, C., Keller, P. E., & Ravignani, A. (2023). Measuring rhythms of vocal interactions: A proof of principle in harbour seal pups. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210477. doi:10.1098/rstb.2021.0477.
Abstract
Rhythmic patterns in interactive contexts characterize human behaviours such as conversational turn-taking. These timed patterns are also present in other animals, and often described as rhythm. Understanding fine-grained temporal adjustments in interaction requires complementary quantitative methodologies. Here, we showcase how vocal interactive rhythmicity in a non-human animal can be quantified using a multi-method approach. We record vocal interactions in harbour seal pups (Phoca vitulina) under controlled conditions. We analyse these data by combining analytical approaches, namely categorical rhythm analysis, circular statistics and time series analyses. We test whether pups' vocal rhythmicity varies across behavioural contexts depending on the absence or presence of a calling partner. Four research questions illustrate which analytical approaches are complementary versus orthogonal. For our data, circular statistics and categorical rhythms suggest that a calling partner affects a pup's call timing. Granger causality suggests that pups predictively adjust their call timing when interacting with a real partner. Lastly, the ADaptation and Anticipation Model estimates statistical parameters for a potential mechanism of temporal adaptation and anticipation. Our complementary analytical approach constitutes a proof of concept; it shows the feasibility of applying typically unrelated techniques to seals to quantify vocal rhythmic interactivity across behavioural contexts.
Additional information
supplemental information -
Araujo, S., Narang, V., Misra, D., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., Hervais-Adelman, A., & Huettig, F. (2023). A literacy-related color-specific deficit in rapid automatized naming: Evidence from neurotypical completely illiterate and literate adults. Journal of Experimental Psychology: General, 152(8), 2403-2409. doi:10.1037/xge0001376.
Abstract
There is a robust positive relationship between reading skills and the time to name aloud an array of letters, digits, objects, or colors as quickly as possible. A convincing and complete explanation for the direction and locus of this association remains, however, elusive. In this study we investigated rapid automatized naming (RAN) of every-day objects and basic color patches in neurotypical illiterate and literate adults. Literacy acquisition and education enhanced RAN performance for both conceptual categories but this advantage was much larger for (abstract) colors than every-day objects. This result suggests that (i) literacy/education may be causal for serial rapid naming ability of non-alphanumeric items, (ii) differences in the lexical quality of conceptual representations can underlie the reading-related differential RAN performance.Additional information
supplementary text -
Assmann, M., Büring, D., Jordanoska, I., & Prüller, M. (2023). Towards a theory of morphosyntactic focus marking. Natural Language & Linguistic Theory. doi:10.1007/s11049-023-09567-4.
Abstract
Based on six detailed case studies of languages in which focus is marked morphosyntactically, we propose a novel formal theory of focus marking, which can capture these as well as the familiar English-type prosodic focus marking. Special attention is paid to patterns of focus syncretism, that is, cases in which different sizes and/or locations of focus are indistinguishably realized by the same form.
The key ingredients to our approach are that complex constituents (not just words) may be directly focally marked, and that the choice of focal marking is governed by blocking. -
Barak, L., Harmon, Z., Feldman, N. H., Edwards, J., & Shafto, P. (2023). When children's production deviates from observed input: Modeling the variable production of the English past tense. Cognitive Science, 47(8): e13328. doi:10.1111/cogs.13328.
Abstract
As children gradually master grammatical rules, they often go through a period of producing form-meaning associations that were not observed in the input. For example, 2- to 3-year-old English-learning children use the bare form of verbs in settings that require obligatory past tense meaning while already starting to produce the grammatical –ed inflection. While many studies have focused on overgeneralization errors, fewer studies have attempted to explain the root of this earlier stage of rule acquisition. In this work, we use computational modeling to replicate children's production behavior prior to the generalization of past tense production in English. We illustrate how seemingly erroneous productions emerge in a model, without being licensed in the grammar and despite the model aiming at conforming to grammatical forms. Our results show that bare form productions stem from a tension between two factors: (1) trying to produce a less frequent meaning (the past tense) and (2) being unable to restrict the production of frequent forms (the bare form) as learning progresses. Like children, our model goes through a stage of bare form production and then converges on adult-like production of the regular past tense, showing that these different stages can be accounted for through a single learning mechanism. -
Barendse, M. T., & Rosseel, Y. (2023). Multilevel SEM with random slopes in discrete data using the pairwise maximum likelihood. British Journal of Mathematical and Statistical Psychology, 76(2), 327-352. doi:10.1111/bmsp.12294.
Abstract
Pairwise maximum likelihood (PML) estimation is a promising method for multilevel models with discrete responses. Multilevel models take into account that units within a cluster tend to be more alike than units from different clusters. The pairwise likelihood is then obtained as the product of bivariate likelihoods for all within-cluster pairs of units and items. In this study, we investigate the PML estimation method with computationally intensive multilevel random intercept and random slope structural equation models (SEM) in discrete data. In pursuing this, we first reconsider the general ‘wide format’ (WF) approach for SEM models and then extend the WF approach with random slopes. In a small simulation study, we determine the accuracy and efficiency of the PML estimation method by varying the sample size (250, 500, 1000, 2000), response scales (two-point, four-point), and data-generating model (mediation model with three random slopes, factor model with one and two random slopes). Overall, results show that the PML estimation method is capable of estimating computationally intensive random intercept and random slope multilevel models in the SEM framework with discrete data and many (six or more) latent variables with satisfactory accuracy and efficiency. However, the condition with 250 clusters combined with a two-point response scale shows more bias.
Additional information
figures -
Barrios, A., & Garcia, R. (2023). Filipino children’s acquisition of nominal and verbal markers in L1 and L2 Tagalog. Languages, 8(3): 188. doi:10.3390/languages8030188.
Abstract
Western Austronesian languages, like Tagalog, have unique, complex voice systems that require the correct combinations of verbal and nominal markers, raising many questions about their learnability. In this article, we review the experimental and observational studies on both the L1 and L2 acquisition of Tagalog. The reviewed studies reveal error patterns that reflect the complex nature of the Tagalog voice system. The main goal of the article is to present a full picture of commission errors in young Filipino children’s expression of causation and agency in Tagalog by describing patterns of nominal marking and voice marking in L1 Tagalog and L2 Tagalog. It also aims to provide an overview of existing research, as well as characterize research on nominal and verbal acquisition, specifically in terms of research problems, data sources, and methodology. Additionally, we discuss the research gaps in at least fifty years’ worth of studies in the area from the 1960s to the present, as well as ideas for future research to advance the state of the art. -
Bastiaanse, R., & Ohlerth, A.-K. (2023). Presurgical language mapping: What are we testing? Journal of Personalized Medicine, 13: 376. doi:10.3390/jpm13030376.
Abstract
Gliomas are brain tumors infiltrating healthy cortical and subcortical areas that may host cognitive functions, such as language. If these areas are damaged during surgery, the patient might develop word retrieval or articulation problems. For this reason, many glioma patients are operated on awake, while their language functions are tested. For this practice, quite simple tests are used, for example, picture naming. This paper describes the process and timeline of picture naming (noun retrieval) and shows the timeline and localization of the distinguished stages. This is relevant information for presurgical language testing with navigated Transcranial Magnetic Stimulation (nTMS). This novel technique allows us to identify cortical areas involved in the language production process and, thus, guides the neurosurgeon in how to approach and remove the tumor. We argue that not only nouns, but also verbs should be tested, since sentences are built around verbs, and sentences are what we use in daily life. This approach’s relevance is illustrated by two case studies of glioma patients. -
Bauer, B. L. M. (2023). Multiplication, addition, and subtraction in numerals: Formal variation in Latin’s decads+ from an Indo-European perspective. Journal of Latin Linguistics, 22(1), 1-56. doi:10.1515/joll-2023-2001.
Abstract
While formal variation in Latin’s numerals is generally acknowledged, little is known about (relative) incidence, distribution, context, or linguistic productivity. Addressing this lacuna, this article examines “decads+” in Latin, which convey the numbers between the full decads: the teens (‘eleven’ through ‘nineteen’) as well as the numerals between the higher decads starting at ‘twenty-one’ through ‘ninety-nine’. Latin’s decads+ are compounds and prone to variation. The data, which are drawn from a variety of sources, reveal (a) substantial formal variation in Latin, both internally and typologically; (b) co-existence of several types of formation; (c) productivity of potential borrowings; (d) resilience of early formations; (e) patterns in structure and incidence that anticipate the Romance numerals; and (f) historical trends. From a typological and general linguistic perspective as well, Latin’s decads+ are most relevant because their formal variation involves sequence, connector, and arithmetical operations and because their historical depth shows a gradual shift away from widespread formal variation, eventually resulting in the relatively rigid system found in Romance. Moreover, the combined system attested in decads+ in Latin – based on a combination of inherited, innovative and borrowed patterns and reflecting different stages of development – presents a number of typological inconsistencies that require further assessment. -
Benetti, S., Ferrari, A., & Pavani, F. (2023). Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Frontiers in Human Neuroscience, 17: 1108354. doi:10.3389/fnhum.2023.1108354.
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective (“lateral processing pathway”). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model. -
Bergelson, E., Soderstrom, M., Schwarz, I.-C., Rowland, C. F., Ramírez-Esparza, N., Rague Hamrick, L., Marklund, E., Kalashnikova, M., Guez, A., Casillas, M., Benetti, L., Van Alphen, P. M., & Cristia, A. (2023). Everyday language input and production in 1,001 children from six continents. Proceedings of the National Academy of Sciences of the United States of America, 120(52): e2300671120. doi:10.1073/pnas.2300671120.
Abstract
Language is a universal human ability, acquired readily by young children, who otherwise struggle with many basics of survival. And yet, language ability is variable across individuals. Naturalistic and experimental observations suggest that children’s linguistic skills vary with factors like socioeconomic status and children’s gender. But which factors really influence children’s day-to-day language use? Here, we leverage speech technology in a big-data approach to report on a unique cross-cultural and diverse data set: >2,500 d-long, child-centered audio-recordings of 1,001 2- to 48-mo-olds from 12 countries spanning six continents across urban, farmer-forager, and subsistence-farming contexts. As expected, age and language-relevant clinical risks and diagnoses predicted how much speech (and speech-like vocalization) children produced. Critically, so too did adult talk in children’s environments: Children who heard more talk from adults produced more speech. In contrast to previous conclusions based on more limited sampling methods and a different set of language proxies, socioeconomic status (operationalized as maternal education) was not significantly associated with children’s productions over the first 4 y of life, and neither were gender or multilingualism. These findings from large-scale naturalistic data advance our understanding of which factors are robust predictors of variability in the speech behaviors of young learners in a wide range of everyday contexts. -
Bögels, S., & Levinson, S. C. (2023). Ultrasound measurements of interactive turn-taking in question-answer sequences: Articulatory preparation is delayed but not tied to the response. PLoS One, 18: e0276470. doi:10.1371/journal.pone.0276470.
Abstract
We know that speech planning in conversational turn-taking can happen in overlap with the previous turn and research suggests that it starts as early as possible, that is, as soon as the gist of the previous turn becomes clear. The present study aimed to investigate whether planning proceeds all the way up to the last stage of articulatory preparation (i.e., putting the articulators in place for the first phoneme of the response) and what the timing of this process is. Participants answered pre-recorded quiz questions (being under the illusion that they were asked live), while their tongue movements were measured using ultrasound. Planning could start early for some quiz questions (i.e., midway during the question), but late for others (i.e., only at the end of the question). The results showed no evidence for a difference between tongue movements in these two types of questions for at least two seconds after planning could start in early-planning questions, suggesting that speech planning in overlap with the current turn proceeds more slowly than in the clear. On the other hand, when time-locking to speech onset, tongue movements differed between the two conditions from up to two seconds before this point. This suggests that articulatory preparation can occur in advance and is not fully tied to the overt response itself. -
Wu, M., Bosker, H. R., & Riecke, L. (2023). Sentential contextual facilitation of auditory word processing builds up during sentence tracking. Journal of Cognitive Neuroscience, 35(8), 1262-1278. doi:10.1162/jocn_a_02007.
Abstract
While listening to meaningful speech, auditory input is processed more rapidly near the end (vs. beginning) of sentences. Although several studies have shown such word-to-word changes in auditory input processing, it is still unclear from which processing level these word-to-word dynamics originate. We investigated whether predictions derived from sentential context can result in auditory word-processing dynamics during sentence tracking. We presented healthy human participants with auditory stimuli consisting of word sequences, arranged into either predictable (coherent sentences) or less predictable (unstructured, random word sequences) 42-Hz amplitude-modulated speech, and a continuous 25-Hz amplitude-modulated distractor tone. We recorded RTs and frequency-tagged neuroelectric responses (auditory steady-state responses) to individual words at multiple temporal positions within the sentences, and quantified sentential context effects at each position while controlling for individual word characteristics (i.e., phonetics, frequency, and familiarity). We found that sentential context increasingly facilitates auditory word processing as evidenced by accelerated RTs and increased auditory steady-state responses to later-occurring words within sentences. These purely top–down contextually driven auditory word-processing dynamics occurred only when listeners focused their attention on the speech and did not transfer to the auditory processing of the concurrent distractor tone. These findings indicate that auditory word-processing dynamics during sentence tracking can originate from sentential predictions. The predictions depend on the listeners' attention to the speech, and affect only the processing of the parsed speech, not that of concurrently presented auditory streams. -
Bruggeman, L., & Cutler, A. (2023). Listening like a native: Unprofitable procedures need to be discarded. Bilingualism: Language and Cognition, 26(5), 1093-1102. doi:10.1017/S1366728923000305.
Abstract
Two languages, historically related, both have lexical stress, with word stress distinctions signalled in each by the same suprasegmental cues. In each language, words can overlap segmentally but differ in placement of primary versus secondary stress (OCtopus, ocTOber). However, secondary stress occurs more often in the words of one language, Dutch, than in the other, English, and largely because of this, Dutch listeners find it helpful to use suprasegmental stress cues when recognising spoken words. English listeners, in contrast, do not; indeed, Dutch listeners can outdo English listeners in correctly identifying the source words of English word fragments (oc-). Here we show that Dutch-native listeners who reside in an English-speaking environment and have become dominant in English, though still maintaining their use of these stress cues in their L1, ignore the same cues in their L2 English, performing as poorly in the fragment identification task as the L1 English listeners do. -
Bulut, T. (2023). Domain‐general and domain‐specific functional networks of Broca's area underlying language processing. Brain and Behavior, 13(7): e3046. doi:10.1002/brb3.3046.
Abstract
Introduction
Despite abundant research on the role of Broca's area in language processing, there is still no consensus on language specificity of this region and its connectivity network.
Methods
The present study employed the meta-analytic connectivity modeling procedure to identify and compare domain-specific (language-specific) and domain-general (shared between language and other domains) functional connectivity patterns of three subdivisions within the broadly defined Broca's area: pars opercularis (IFGop), pars triangularis (IFGtri), and pars orbitalis (IFGorb) of the left inferior frontal gyrus.
Results
The findings revealed a left-lateralized frontotemporal network for all regions of interest underlying domain-specific linguistic functions. The domain-general network, however, spanned frontoparietal regions that overlap with the multiple-demand network and subcortical regions spanning the thalamus and the basal ganglia.
Conclusions
The findings suggest that language specificity of Broca's area emerges within a left-lateralized frontotemporal network, and that domain-general resources are garnered from frontoparietal and subcortical networks when required by task demands. -
Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2023). Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cognitive Neuropsychology, 40(5-6), 298-317. doi:10.1080/02643294.2023.2283239.
Abstract
Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with successive onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analyses (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking. -
Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2023). Ten-month-old infants’ neural tracking of naturalistic speech is not facilitated by the speaker’s eye gaze. Developmental Cognitive Neuroscience, 64: 101297. doi:10.1016/j.dcn.2023.101297.
Abstract
Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker’s eye gaze on ten-month-old infants’ neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants’ speech-brain coherence at stress (1–1.75 Hz) and syllable (2.5–3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants’ brains tracked the speech rhythm both at the stress and syllable rates, and that infants’ neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by the speaker’s gaze. -
Chang, F., Tatsumi, T., Hiranuma, Y., & Bannard, C. (2023). Visual heuristics for verb production: Testing a deep‐learning model with experiments in Japanese. Cognitive Science, 47(8): e13324. doi:10.1111/cogs.13324.
Abstract
Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer-generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different verb forms. To explain these findings, a deep-learning model of verb production from visual input was created that could produce a human-like distribution of verb forms. It was able to use visual cues to select appropriate tense/aspect morphology. The model predicted that video duration would be related to verb complexity, and past tense production would increase when it received the endpoint as input. These predictions were confirmed in a third study with Japanese adults. This work suggests that verb production could be tightly linked to visual heuristics that support the understanding of events. -
Chen, A., Çetinçelik, M., Roncaglia-Denissen, M. P., & Sadakata, M. (2023). Native language, L2 experience, and pitch processing in music. Linguistic Approaches to Bilingualism, 13(2), 218-237. doi:10.1075/lab.20030.che.
Abstract
The current study investigated how the role of pitch in one’s native language and L2 experience influenced musical melodic processing by testing Turkish and Mandarin Chinese advanced and beginning learners of English as an L2. Pitch has a lower functional load and shows a simpler pattern in Turkish than in Chinese, as the former contrasts only between the presence and absence of pitch elevation, while the latter makes use of four different pitch contours lexically. Using the Musical Ear Test as the tool, we found that the Chinese listeners outperformed the Turkish listeners, and the advanced L2 learners outperformed the beginning learners. The Turkish listeners were further tested on their discrimination of bisyllabic Chinese lexical tones, and again an L2 advantage was observed. No significant difference was found for working memory between the beginning and advanced L2 learners. These results suggest that richness of the tonal inventory of the native language is essential for triggering a music processing advantage, and on top of the tone language advantage, the L2 experience yields a further enhancement. Yet, unlike the tone language advantage that seems to relate to pitch expertise, learning an L2 seems to improve sound discrimination in general, and such improvement manifests in non-native lexical tone discrimination. -
Clough, S., Morrow, E., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). Emotion recognition of faces and emoji in individuals with moderate-severe traumatic brain injury. Brain Injury, 37(7), 596-610. doi:10.1080/02699052.2023.2181401.
Abstract
Background. Facial emotion recognition deficits are common after moderate-severe traumatic brain injury (TBI) and linked to poor social outcomes. We examine whether emotion recognition deficits extend to facial expressions depicted by emoji.
Methods. Fifty-one individuals with moderate-severe TBI (25 female) and fifty-one neurotypical peers (26 female) viewed photos of human faces and emoji. Participants selected the best-fitting label from a set of basic emotions (anger, disgust, fear, sadness, neutral, surprise, happy) or social emotions (embarrassed, remorseful, anxious, neutral, flirting, confident, proud).
Results. We analyzed the likelihood of correctly labeling an emotion by group (neurotypical, TBI), stimulus condition (basic faces, basic emoji, social emoji), sex (female, male), and their interactions. Participants with TBI did not significantly differ from neurotypical peers in overall emotion labeling accuracy. Both groups had poorer labeling accuracy for emoji compared to faces. Participants with TBI (but not neurotypical peers) had poorer accuracy for labeling social emotions depicted by emoji compared to basic emotions depicted by emoji. There were no effects of participant sex.
Discussion. Because emotion representation is more ambiguous in emoji than human faces, studying emoji use and perception in TBI is an important consideration for understanding functional communication and social participation after brain injury. -
Clough, S., Padilla, V.-G., Brown-Schmidt, S., & Duff, M. C. (2023). Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia, 189: 108665. doi:10.1016/j.neuropsychologia.2023.108665.
Abstract
Purpose
Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying “He searched for a new recipe” while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and if information from gesture persists across delays.
Methods
60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20-min later, and one-week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., “He searched for a new recipe”), a Gesture Match (e.g., “He searched for a new recipe online”), or Other (e.g., “He looked for a new recipe”). We also examined whether participants produced representative gestures themselves when retelling these details.
Results
Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and produce representative gestures themselves one-week later compared to immediately after hearing the story.
Conclusion
We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population. -
Clough, S., Tanguay, A. F. N., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). How do individuals with and without traumatic brain injury interpret emoji? Similarities and differences in perceived valence, arousal, and emotion representation. Journal of Nonverbal Behavior, 47, 489-511. doi:10.1007/s10919-023-00433-w.
Abstract
Impaired facial affect recognition is common after traumatic brain injury (TBI) and linked to poor social outcomes. We explored whether perception of emotions depicted by emoji is also impaired after TBI. Fifty participants with TBI and 50 non-injured peers generated free-text labels to describe emotions depicted by emoji and rated their levels of valence and arousal on nine-point rating scales. We compared how the two groups’ valence and arousal ratings were clustered and examined agreement in the words participants used to describe emoji. Hierarchical clustering of affect ratings produced four emoji clusters in the non-injured group and three emoji clusters in the TBI group. Whereas the non-injured group had a strongly positive and a moderately positive cluster, the TBI group had a single positive valence cluster, undifferentiated by arousal. Despite differences in cluster numbers, hierarchical structures of the two groups’ emoji ratings were significantly correlated. Most emoji had high agreement in the words participants with and without TBI used to describe them. Participants with TBI perceived emoji similarly to non-injured peers, used similar words to describe emoji, and rated emoji similarly on the valence dimension. Individuals with TBI showed small differences in perceived arousal for a minority of emoji. Overall, results suggest that basic recognition processes do not explain challenges in computer-mediated communication reported by adults with TBI. Examining perception of emoji in context by people with TBI is an essential next step for advancing our understanding of functional communication in computer-mediated contexts after brain injury. -
Coopmans, C. W., Struiksma, M. E., Coopmans, P. H. A., & Chen, A. (2023). Processing of grammatical agreement in the face of variation in lexical stress: A mismatch negativity study. Language and Speech, 66(1), 202-213. doi:10.1177/00238309221098116.
Abstract
Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject–verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject–verb sequences grammatically correct or incorrect, and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and nonlinguistic condition, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulate early syntactic processing. -
Coopmans, C. W., Mai, A., Slaats, S., Weissbart, H., & Martin, A. E. (2023). What oscillations can do for syntax depends on your theory of structure building. Nature Reviews Neuroscience, 24, 723. doi:10.1038/s41583-023-00734-5.
-
Coopmans, C. W., Kaushik, K., & Martin, A. E. (2023). Hierarchical structure in language and action: A formal comparison. Psychological Review, 130(4), 935-952. doi:10.1037/rev0000429.
Abstract
Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasingly larger size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this article, we compare the conceptual and formal properties of hierarchy in language and action using set theory. We show that the strong compositionality of language requires a particular formalism, a magma, to describe the algebraic structure corresponding to the set of hierarchical structures underlying sentences. When this formalism is applied to actions, it appears to be both too strong and too weak. To overcome these limitations, which are related to the weak compositionality and sequential nature of action structures, we formalize the algebraic structure corresponding to the set of actions as a trace monoid. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences and discuss the implications for the way both systems could be represented in the brain. -
Corps, R. E., Liao, M., & Pickering, M. J. (2023). Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study. Bilingualism: Language and Cognition, 26(1), 231-243. doi:10.1017/S1366728922000499.
Abstract
Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.