Publications

  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (Eds.). (2023). Marking the truth: A cross-linguistic approach to verum [Special Issue]. Zeitschrift für Sprachwissenschaft, 42(3). Retrieved from https://www.degruyter.com/journal/key/zfsw/42/3/html.
  • Kałamała, P., Chuderski, A., Szewczyk, J., Senderecka, M., & Wodniecka, Z. (2023). Bilingualism caught in a net: A new approach to understanding the complexity of bilingual experience. Journal of Experimental Psychology: General, 152(1), 157-174. doi:10.1037/xge0001263.

    Abstract

    The growing importance of research on bilingualism in psychology and neuroscience motivates the need for a psychometric model that can be used to understand and quantify this phenomenon. This research is the first to meet this need. We reanalyzed two data sets (N = 171 and N = 112) from relatively young adult language-unbalanced bilinguals and asked whether bilingualism is best described by the factor structure or by the network structure. The factor and network models were established on one data set and then validated on the other data set in a fully confirmatory manner. The network model provided the best fit to the data. This implies that bilingualism should be conceptualized as an emergent phenomenon arising from direct and idiosyncratic dependencies among the history of language acquisition, diverse language skills, and language-use practices. These dependencies can be reduced to neither a single universal quotient nor to some more general factors. Additional in-depth network analyses showed that the subjective perception of proficiency along with language entropy and language mixing were the most central indices of bilingualism, thus indicating that these measures can be especially sensitive to variation in the overall bilingual experience. Overall, this work highlights the great potential of psychometric network modeling to gain a more accurate description and understanding of complex (psycho)linguistic and cognitive phenomena.
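
    Illustration (not part of the publication): the abstract identifies language entropy as one of the most central indices of bilingualism. Under its usual definition in this literature, language entropy is the Shannon entropy of a person's language-use proportions; a minimal sketch with invented proportions:

```python
# Minimal sketch of language entropy as commonly defined in bilingualism
# research: Shannon entropy (in bits) over the proportions of use of each
# language. The example proportions below are invented for illustration.
import math

def language_entropy(proportions):
    """Shannon entropy of language-use proportions that sum to 1."""
    return -sum(p * math.log2(p) for p in proportions if p > 0)

print(language_entropy([0.5, 0.5]))  # balanced use of two languages -> 1.0 bit
print(language_entropy([0.9, 0.1]))  # unbalanced use -> ~0.47 bits
```
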
  • Kanakanti, M., Singh, S., & Shrivastava, M. (2023). MultiFacet: A multi-tasking framework for speech-to-sign language generation. In E. André, M. Chetouani, D. Vaufreydaz, G. Lucas, T. Schultz, L.-P. Morency, & A. Vinciarelli (Eds.), ICMI '23 Companion: Companion Publication of the 25th International Conference on Multimodal Interaction (pp. 205-213). New York: ACM. doi:10.1145/3610661.3616550.

    Abstract

    Sign language is a rich form of communication, uniquely conveying meaning through a combination of gestures, facial expressions, and body movements. Existing research in sign language generation has predominantly focused on text-to-sign pose generation, while speech-to-sign pose generation remains relatively underexplored. Speech-to-sign language generation models can facilitate effective communication between the deaf and hearing communities. In this paper, we propose an architecture that utilises prosodic information from speech audio and semantic context from text to generate sign pose sequences. In our approach, we adopt a multi-tasking strategy that involves an additional task of predicting Facial Action Units (FAUs). FAUs capture the intricate facial muscle movements that play a crucial role in conveying specific facial expressions during sign language generation. We train our models on an existing Indian Sign Language dataset that contains sign language videos with audio and text translations. To evaluate our models, we report Dynamic Time Warping (DTW) and Probability of Correct Keypoints (PCK) scores. We find that combining prosody and text as input, along with incorporating facial action unit prediction as an additional task, outperforms previous models in both DTW and PCK scores. We also discuss the challenges and limitations of speech-to-sign pose generation models to encourage future research in this domain. We release our models, results and code to foster reproducibility and encourage future research.
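
    Illustration (not the authors' code): of the two evaluation metrics mentioned, DTW aligns predicted and reference pose sequences that may differ in timing, while PCK is the proportion of predicted keypoints falling within a distance threshold of the ground truth. A minimal PCK sketch, with a hypothetical 5-pixel threshold and toy keypoints:

```python
# Sketch of the Probability of Correct Keypoints (PCK) in its usual form:
# the fraction of predicted keypoints within a distance threshold of the
# ground-truth keypoints. Threshold and keypoints below are invented.
import numpy as np

def pck(pred, gold, threshold):
    """pred, gold: arrays of shape (num_keypoints, 2); threshold in pixels."""
    distances = np.linalg.norm(pred - gold, axis=1)
    return float(np.mean(distances <= threshold))

pred = np.array([[10.0, 12.0], [40.0, 41.0], [100.0, 98.0]])
gold = np.array([[11.0, 12.0], [44.0, 41.0], [100.0, 110.0]])
print(pck(pred, gold, threshold=5.0))  # 2 of 3 keypoints within 5 px -> ~0.67
```
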
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2023). Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults. Memory & Cognition, 51, 582-600. doi:10.3758/s13421-022-01281-7.

    Abstract

    Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. However, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on the theories concerning the correspondence between spatial language and cognition as related or rather independent systems.
  • Kaspi, A., Hildebrand, M. S., Jackson, V. E., Braden, R., Van Reyk, O., Howell, T., Debono, S., Lauretta, M., Morison, L., Coleman, M. J., Webster, R., Coman, D., Goel, H., Wallis, M., Dabscheck, G., Downie, L., Baker, E. K., Parry-Fielder, B., Ballard, K., Harrold, E., Ziegenfusz, S., Bennett, M. F., Robertson, E., Wang, L., Boys, A., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2023). Genetic aetiologies for childhood speech disorder: Novel pathways co-expressed during brain development. Molecular Psychiatry, 28, 1647-1663. doi:10.1038/s41380-022-01764-8.

    Abstract

    Childhood apraxia of speech (CAS), the prototypic severe childhood speech disorder, is characterized by motor programming and planning deficits. Genetic factors make substantive contributions to CAS aetiology, with a monogenic pathogenic variant identified in a third of cases, implicating around 20 single genes to date. Here we aimed to identify molecular causation in 70 unrelated probands ascertained with CAS. We performed trio genome sequencing. Our bioinformatic analysis examined single nucleotide, indel, copy number, structural and short tandem repeat variants. We prioritised appropriate variants arising de novo or inherited that were expected to be damaging based on in silico predictions. We identified high confidence variants in 18/70 (26%) probands, almost doubling the current number of candidate genes for CAS. Three of the 18 variants affected SETBP1, SETD1A and DDX3X, thus confirming their roles in CAS, while the remaining 15 occurred in genes not previously associated with this disorder. Fifteen variants arose de novo and three were inherited. We provide further novel insights into the biology of child speech disorder, highlighting the roles of chromatin organization and gene regulation in CAS, and confirm that genes involved in CAS are co-expressed during brain development. Our findings confirm a diagnostic yield comparable to, or even higher, than other neurodevelopmental disorders with substantial de novo variant burden. Data also support the increasingly recognised overlaps between genes conferring risk for a range of neurodevelopmental disorders. Understanding the aetiological basis of CAS is critical to end the diagnostic odyssey and ensure affected individuals are poised for precision medicine trials.
  • Kendrick, K. H., Holler, J., & Levinson, S. C. (2023). Turn-taking in human face-to-face interaction is multimodal: Gaze direction and manual gestures aid the coordination of turn transitions. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210473. doi:10.1098/rstb.2021.0473.

    Abstract

    Human communicative interaction is characterized by rapid and precise turn-taking. This is achieved by an intricate system that has been elucidated in the field of conversation analysis, based largely on the study of the auditory signal. This model suggests that transitions occur at points of possible completion identified in terms of linguistic units. Despite this, considerable evidence exists that visible bodily actions including gaze and gestures also play a role. To reconcile disparate models and observations in the literature, we combine qualitative and quantitative methods to analyse turn-taking in a corpus of multimodal interaction using eye-trackers and multiple cameras. We show that transitions seem to be inhibited when a speaker averts their gaze at a point of possible turn completion, or when a speaker produces gestures which are beginning or unfinished at such points. We further show that while the direction of a speaker's gaze does not affect the speed of transitions, the production of manual gestures does: turns with gestures have faster transitions. Our findings suggest that the coordination of transitions involves not only linguistic resources but also visual gestural ones and that the transition-relevance places in turns are multimodal in nature.

    Additional information

    supplemental material
  • Kholodova, A., Peter, M., Rowland, C. F., Jacob, G., & Allen, S. E. M. (2023). Abstract priming and the lexical boost effect across development in a structurally biased language. Languages, 8: 264. doi:10.3390/languages8040264.

    Abstract

    The present study investigates the developmental trajectory of abstract representations for syntactic structures in children. In a structural priming experiment on the dative alternation in German, we primed children from three different age groups (3–4 years, 5–6 years, 7–8 years) and adults with double object datives (Dora sent Boots the rabbit) or prepositional object datives (Dora sent the rabbit to Boots). Importantly, the prepositional object structure in German is dispreferred and only rarely encountered by young children. While immediate as well as cumulative structural priming effects occurred across all age groups, these effects were strongest in the 3- to 4-year-old group and gradually decreased with increasing age. These results suggest that representations in young children are less stable than in adults and, therefore, more susceptible to adaptation both immediately and across time, presumably due to stronger surprisal. Lexical boost effects, in contrast, were not present in 3- to 4-year-olds but gradually emerged with increasing age, possibly due to limited working-memory capacity in the younger child groups.
  • Kidd, E., Arciuli, J., Christiansen, M. H., & Smithson, M. (2023). The sources and consequences of individual differences in statistical learning for language development. Cognitive Development, 66: 101335. doi:10.1016/j.cogdev.2023.101335.

    Abstract

    Statistical learning (SL)—sensitivity to statistical regularities in the environment—has been postulated to support language development. While even young infants are capable of using distributional statistics to learn in linguistic and non-linguistic domains, efforts to measure SL at the level of the individual and link it to language proficiency in individual differences designs have been mixed, which has at least in part been attributed to problems with task reliability. In the current study we present the first prospective longitudinal study of the relationship between both non-linguistic SL (measured with visual stimuli) and linguistic SL (measured with auditory stimuli) and language in a group of English-speaking children. One-hundred and twenty-one (N = 121) children in their first two years of formal schooling (mean age = 6;1 years, range: 5;2–7;2) completed tests of visual SL (VSL) and auditory SL (ASL) and several control variables at time 1. Both forms of SL were then measured every 6 months for the next 18 months, and at the final testing session (time 4) their language proficiency was measured using a standardised test. The results showed that the reliability of the SL tasks increased across the course of the study. A series of path analyses showed that both VSL and ASL independently predicted individual differences in language proficiency at time 4. The evidence is consistent with the suggestion that, when measured reliably, an observable relationship between SL and language proficiency exists. Theoretical and methodological issues are discussed.

    Additional information

    data and code
  • Kornfeld, L., & Rossi, G. (2023). Enforcing rules during play: Knowledge, agency, and the design of instructions and reminders. Research on Language and Social Interaction, 56(1), 42-64. doi:10.1080/08351813.2023.2170637.

    Abstract

    Rules of behavior are fundamental to human sociality. Whether on the road, at the dinner table, or during a game, people monitor one another’s behavior for conformity to rules and may take action to rectify violations. In this study, we examine two ways in which rules are enforced during games: instructions and reminders. Building on prior research, we identify instructions as actions produced to rectify violations based on another’s lack of knowledge of the relevant rule; knowledge that the instruction is designed to impart. In contrast to this, the actions we refer to as reminders are designed to enforce rules presupposing the transgressor’s competence and treating the violation as the result of forgetfulness or oversight. We show that instructing and reminding actions differ in turn design, sequential development, the epistemic stances taken by transgressors and enforcers, and in how the action affects the progressivity of the interaction. Data are in German and Italian from the Parallel European Corpus of Informal Interaction (PECII).
  • Kösem, A., Dai, B., McQueen, J. M., & Hagoort, P. (2023). Neural envelope tracking of speech does not unequivocally reflect intelligibility. NeuroImage, 272: 120040. doi:10.1016/j.neuroimage.2023.120040.

    Abstract

    During listening, brain activity tracks the rhythmic structures of speech signals. Here, we directly dissociated the contribution of neural envelope tracking in the processing of speech acoustic cues from that related to linguistic processing. We examined the neural changes associated with the comprehension of Noise-Vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a 3-phase training paradigm: (1) pre-training, where NV stimuli were barely comprehended, (2) training, with exposure to the original clear version of the speech stimulus, and (3) post-training, where the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech), but were only trained to understand 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of the speech envelope in both theta and delta ranges, in both auditory regions-of-interest and whole-brain sensor-space analyses. This suggests that acoustics greatly influence the neural tracking response to the speech envelope, and that caution needs to be taken when choosing the control signals for speech-brain tracking analyses, considering that a slight change in acoustic parameters can have strong effects on the neural tracking response.
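
    Illustration (a generic sketch, not the authors' MEG pipeline): neural envelope tracking is commonly quantified by extracting the amplitude envelope of the speech signal, band-pass filtering both the envelope and the brain signal in the band of interest (e.g., delta, 1-4 Hz), and relating the two, for instance by correlation. The sampling rate and the random stand-in signals below are assumptions for demonstration only:

```python
# Generic illustration of envelope tracking (not the paper's pipeline):
# extract the speech amplitude envelope via the Hilbert transform, band-pass
# both envelope and brain signal in the delta band, then correlate them.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 200                              # assumed sampling rate after downsampling
t = np.arange(0, 10, 1 / fs)
speech = np.random.randn(len(t))      # stand-in for a speech waveform
meg = np.random.randn(len(t))         # stand-in for one MEG channel

envelope = np.abs(hilbert(speech))    # amplitude envelope
env_delta = bandpass(envelope, 1, 4, fs)   # delta band (1-4 Hz)
meg_delta = bandpass(meg, 1, 4, fs)

tracking = np.corrcoef(env_delta, meg_delta)[0, 1]
print(f"delta-band envelope tracking (Pearson r): {tracking:.3f}")
```
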
  • Lai, J., Chan, A., & Kidd, E. (2023). Relative clause comprehension in Cantonese-speaking children with and without developmental language disorder. PLoS One, 18: e0288021. doi:10.1371/journal.pone.0288021.

    Abstract

    Developmental Language Disorder (DLD), present in 2 out of every 30 children, affects primarily oral language abilities and development in the absence of associated biomedical conditions. We report the first experimental study that examines relative clause (RC) comprehension accuracy and processing (via looking preference) in Cantonese-speaking children with and without DLD, testing the predictions from competing domain-specific versus domain-general theoretical accounts. We compared children with DLD (N = 22) with age-matched typically-developing (TD) children (AM-TD, N = 23) aged 6;6–9;7 and language-matched (and younger) TD children (YTD, N = 21) aged 4;7–7;6, using a referent selection task. Within-subject factors were RC type (subject-RCs (SRCs) versus object-RCs (ORCs)) and relativizer (classifier (CL) versus relative marker ge3 RCs). Accuracy measures and looking preference to the target were analyzed using generalized linear mixed effects models. Results indicated that Cantonese children with DLD scored significantly lower than their AM-TD peers in accuracy and processed RCs significantly more slowly than AM-TDs, but did not differ from the YTDs on either measure. Overall, while the results revealed evidence of an SRC advantage in the accuracy data, there was no indication of additional difficulty associated with ORCs in the eye-tracking data. All children showed a processing advantage for the frequent CL relativizer over the less frequent ge3 relativizer. These findings pose challenges to domain-specific representational deficit accounts of DLD, which primarily explain the disorder as a syntactic deficit, and are better explained by domain-general accounts that explain acquisition and processing as emergent properties of multiple converging linguistic and non-linguistic processes.

    Additional information

    S1 appendix
  • Laparle, S. (2023). Moving past the lexical affiliate with a frame-based analysis of gesture meaning. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527218.

    Abstract

    Interpreting the meaning of co-speech gesture often involves identifying a gesture’s ‘lexical affiliate’, the word or phrase to which it most closely relates (Schegloff 1984). Though there is work within gesture studies that resists this simplex mapping of meaning from speech to gesture (e.g. de Ruiter 2000; Kendon 2014; Parrill 2008), including an evolving body of literature on recurrent gesture and gesture families (e.g. Fricke et al. 2014; Müller 2017), it is still the lexical affiliate model that is most apparent in formal linguistic models of multimodal meaning (e.g. Alahverdzhieva et al. 2017; Lascarides and Stone 2009; Pustejovsky and Krishnaswamy 2021; Schlenker 2020). In this work, I argue that the lexical affiliate should be carefully reconsidered in the further development of such models.

    In place of the lexical affiliate, I suggest a further shift toward a frame-based, action schematic approach to gestural meaning in line with that proposed in, for example, Parrill and Sweetser (2004) and Müller (2017). To demonstrate the utility of this approach I present three types of compositional gesture sequences which I call spatial contrast, spatial embedding, and cooperative abstract deixis. All three rely on gestural context, rather than gesture-speech alignment, to convey interactive (i.e. pragmatic) meaning. The centrality of gestural context to gesture meaning in these examples demonstrates the necessity of developing a model of gestural meaning independent of its integration with speech.
  • Lee, C., Jessop, A., Bidgood, A., Peter, M. S., Pine, J. M., Rowland, C. F., & Durrant, S. (2023). How executive functioning, sentence processing, and vocabulary are related at 3 years of age. Journal of Experimental Child Psychology, 233: 105693. doi:10.1016/j.jecp.2023.105693.

    Abstract

    There is a wealth of evidence demonstrating that executive function (EF) abilities are positively associated with language development during the preschool years, such that children with good executive functions also have larger vocabularies. However, why this is the case remains to be discovered. In this study, we focused on the hypothesis that sentence processing abilities mediate the association between EF skills and receptive vocabulary knowledge, in that the speed of language acquisition is at least partially dependent on a child’s processing ability, which is itself dependent on executive control. We tested this hypothesis in longitudinal data from a cohort of 3- and 4-year-old children at three age points (37, 43, and 49 months). We found evidence, consistent with previous research, for a significant association between three EF skills (cognitive flexibility, working memory [as measured by the Backward Digit Span], and inhibition) and receptive vocabulary knowledge across this age range. However, only one of the tested sentence processing abilities (the ability to maintain multiple possible referents in mind) significantly mediated this relationship and only for one of the tested EFs (inhibition). The results suggest that children who are better able to inhibit incorrect responses are also better able to maintain multiple possible referents in mind while a sentence unfolds, a sophisticated sentence processing ability that may facilitate vocabulary learning from complex input.

    Additional information

    table S1; code and data
  • Lehecka, T. (2023). Normative ratings for 111 Swedish nouns and corresponding picture stimuli. Nordic Journal of Linguistics, 46(1), 20-45. doi:10.1017/S0332586521000123.

    Abstract

    Normative ratings are a means to control for the effects of confounding variables in psycholinguistic experiments. This paper introduces a new dataset of normative ratings for Swedish encompassing 111 concrete nouns and the corresponding picture stimuli in the MultiPic database (Duñabeitia et al. 2017). The norms for name agreement, category typicality, age of acquisition and subjective frequency were collected using online surveys among native speakers of the Finland-Swedish variety of Swedish. The paper discusses the inter-correlations between these variables and compares them against available ratings for other languages. In doing so, the paper argues that ratings for age of acquisition and subjective frequency collected for other languages may be applied to psycholinguistic studies on Finland-Swedish, at least with respect to concrete and highly imageable nouns. In contrast, norms for name agreement should be collected from speakers of the same language variety as represented by the subjects in the actual experiments.
  • Lei, A., Willems, R. M., & Eekhof, L. S. (2023). Emotions, fast and slow: Processing of emotion words is affected by individual differences in need for affect and narrative absorption. Cognition and Emotion, 37(5), 997-1005. doi:10.1080/02699931.2023.2216445.

    Abstract

    Emotional words have consistently been shown to be processed differently than neutral words. However, few studies have examined individual variability in emotion word processing with longer, ecologically valid stimuli (beyond isolated words, sentences, or paragraphs). In the current study, we re-analysed eye-tracking data collected during story reading to reveal how individual differences in need for affect and narrative absorption impact the speed of emotion word reading. Word emotionality was indexed by affective-aesthetic potentials (AAP) calculated by a sentiment analysis tool. We found that individuals with higher levels of need for affect and narrative absorption read positive words more slowly. On the other hand, these individual differences did not influence the reading time of more negative words, suggesting that high need for affect and narrative absorption are characterised by a positivity bias only. In general, unlike most previous studies using more isolated emotion word stimuli, we observed a quadratic (U-shaped) effect of word emotionality on reading speed, such that both positive and negative words were processed more slowly than neutral words. Taken together, this study emphasises the importance of taking into account individual differences and task context when studying emotion word processing.
  • Lemaitre, H., Le Guen, Y., Tilot, A. K., Stein, J. L., Philippe, C., Mangin, J.-F., Fisher, S. E., & Frouin, V. (2023). Genetic variations within human gained enhancer elements affect human brain sulcal morphology. NeuroImage, 265: 119773. doi:10.1016/j.neuroimage.2022.119773.

    Abstract

    The expansion of the cerebral cortex is one of the most distinctive changes in the evolution of the human brain. Cortical expansion and related increases in cortical folding may have contributed to emergence of our capacities for high-order cognitive abilities. Molecular analysis of humans, archaic hominins, and non-human primates has allowed identification of chromosomal regions showing evolutionary changes at different points of our phylogenetic history. In this study, we assessed the contributions of genomic annotations spanning 30 million years to human sulcal morphology measured via MRI in more than 18,000 participants from the UK Biobank. We found that variation within brain-expressed human gained enhancers, regulatory genetic elements that emerged since our last common ancestor with Old World monkeys, explained more trait heritability than expected for the left and right calloso-marginal posterior fissures and the right central sulcus. Intriguingly, these are sulci that have been previously linked to the evolution of locomotion in primates and later on bipedalism in our hominin ancestors.

    Additional information

    tables
  • Levinson, S. C. (2023). On cognitive artifacts. In R. Feldhay (Ed.), The evolution of knowledge: A scientific meeting in honor of Jürgen Renn (pp. 59-78). Berlin: Max Planck Institute for the History of Science.

    Abstract

    Wearing the hat of a cognitive anthropologist rather than an historian, I will try to amplify the ideas of Renn’s cited above. I argue that a particular subclass of material objects, namely “cognitive artifacts,” involves a close coupling of mind and artifact that acts like a brain prosthesis. Simple cognitive artifacts are external objects that act as aids to internal computation, and not all cultures have extended inventories of these. Cognitive artifacts in this sense (e.g., calculating or measuring devices) have clearly played a central role in the history of science. But the notion can be widened to take in less material externalizations of cognition, like writing and language itself. A critical question here is how and why this close coupling of internal computation and external device actually works, a rather neglected question to which I’ll suggest some answers.

    Additional information

    link to book
  • Levinson, S. C. (2023). Gesture, spatial cognition and the evolution of language. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210481. doi:10.1098/rstb.2021.0481.

    Abstract

    Human communication displays a striking contrast between the diversity of languages and the universality of the principles underlying their use in conversation. Despite the importance of this interactional base, it is not obvious that it heavily imprints the structure of languages. However, a deep-time perspective suggests that early hominin communication was gestural, in line with all the other Hominidae. This gestural phase of early language development seems to have left its traces in the way in which spatial concepts, implemented in the hippocampus, provide organizing principles at the heart of grammar.
  • Levshina, N. (2023). Communicative efficiency: Language structure and use. Cambridge: Cambridge University Press.

    Abstract

    All living beings try to save effort, and humans are no exception. This groundbreaking book shows how we save time and energy during communication by unconsciously making efficient choices in grammar, lexicon and phonology. It presents a new theory of 'communicative efficiency', the idea that language is designed to be as efficient as possible, as a system of communication. The new framework accounts for the diverse manifestations of communicative efficiency across a typologically broad range of languages, using various corpus-based and statistical approaches to explain speakers' bias towards efficiency. The author's unique interdisciplinary expertise allows her to provide rich evidence from a broad range of language sciences. She integrates diverse insights from over a hundred years of research into this comprehensible new theory, which she presents step-by-step in clear and accessible language. It is essential reading for language scientists, cognitive scientists and anyone interested in language use and communication.
  • Levshina, N., Namboodiripad, S., Allassonnière-Tang, M., Kramer, M., Talamo, L., Verkerk, A., Wilmoth, S., Garrido Rodriguez, G., Gupton, T. M., Kidd, E., Liu, Z., Naccarato, C., Nordlinger, R., Panova, A., & Stoynova, N. (2023). Why we need a gradient approach to word order. Linguistics, 61(4), 825-883. doi:10.1515/ling-2021-0098.

    Abstract

    This article argues for a gradient approach to word order, which treats word order preferences, both within and across languages, as a continuous variable. Word order variability should be regarded as a basic assumption, rather than as something exceptional. Although this approach follows naturally from the emergentist usage-based view of language, we argue that it can be beneficial for all frameworks and linguistic domains, including language acquisition, processing, typology, language contact, language evolution and change, and formal approaches. Gradient approaches have been very fruitful in some domains, such as language processing, but their potential is not fully realized yet. This may be due to practical reasons. We discuss the most pressing methodological challenges in corpus-based and experimental research of word order and propose some practical solutions.
  • Levshina, N. (2023). Testing communicative and learning biases in a causal model of language evolution: A study of cues to Subject and Object. In M. Degano, T. Roberts, G. Sbardolini, & M. Schouwstra (Eds.), The Proceedings of the 23rd Amsterdam Colloquium (pp. 383-387). Amsterdam: University of Amsterdam.
  • Levshina, N. (2023). Word classes in corpus linguistics. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 833-850). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198852889.013.34.

    Abstract

    Word classes play a central role in corpus linguistics under the name of parts of speech (POS). Many popular corpora are provided with POS tags. This chapter gives examples of popular tagsets and discusses the methods of automatic tagging. It also considers bottom-up approaches to POS induction, which are particularly important for the ‘poverty of stimulus’ debate in language acquisition research. The choice of optimal POS tagging involves many difficult decisions, which are related to the level of granularity, redundancy at different levels of corpus annotation, cross-linguistic applicability, language-specific descriptive adequacy, and dealing with fuzzy boundaries between POS. The chapter also discusses the problem of flexible word classes and demonstrates how corpus data with POS tags and syntactic dependencies can be used to quantify the level of flexibility in a language.
  • Lewis, A. G., Schoffelen, J.-M., Bastiaansen, M., & Schriefers, H. (2023). Is beta in agreement with the relatives? Using relative clause sentences to investigate MEG beta power dynamics during sentence comprehension. Psychophysiology, 60(10): e14332. doi:10.1111/psyp.14332.

    Abstract

    There remains some debate about whether beta power effects observed during sentence comprehension reflect ongoing syntactic unification operations (beta-syntax hypothesis), or instead reflect maintenance or updating of the sentence-level representation (beta-maintenance hypothesis). In this study, we used magnetoencephalography to investigate beta power neural dynamics while participants read relative clause sentences that were initially ambiguous between a subject- or an object-relative reading. An additional condition included a grammatical violation at the disambiguation point in the relative clause sentences. The beta-maintenance hypothesis predicts a decrease in beta power at the disambiguation point for unexpected (and less preferred) object-relative clause sentences and grammatical violations, as both signal a need to update the sentence-level representation. While the beta-syntax hypothesis also predicts a beta power decrease for grammatical violations due to a disruption of syntactic unification operations, it instead predicts an increase in beta power for the object-relative clause condition because syntactic unification at the point of disambiguation becomes more demanding. We observed decreased beta power for both the agreement violation and object-relative clause conditions in typical left hemisphere language regions, which provides compelling support for the beta-maintenance hypothesis. Mid-frontal theta power effects were also present for grammatical violations and object-relative clause sentences, suggesting that violations and unexpected sentence interpretations are registered as conflicts by the brain's domain-general error detection system.

    Additional information

    data
  • Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators. In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.

    Abstract

    Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI's ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as 'open source', many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.
  • Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). The timing bottleneck: Why timing and overlap are mission-critical for conversational user interfaces, speech recognition and dialogue systems. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDial 2023). doi:10.18653/v1/2023.sigdial-1.45.

    Abstract

    Speech recognition systems are a key intermediary in voice-driven human-computer interaction. Although speech recognition works well for pristine monologic audio, real-life use cases in open-ended interactive settings still present many challenges. We argue that timing is mission-critical for dialogue systems, and evaluate 5 major commercial ASR systems for their conversational and multilingual support. We find that word error rates for natural conversational data in 6 languages remain abysmal, and that overlap remains a key challenge (study 1). This impacts especially the recognition of conversational words (study 2), and in turn has dire consequences for downstream intent recognition (study 3). Our findings help to evaluate the current state of conversational ASR, contribute towards multidimensional error analysis and evaluation, and identify phenomena that need most attention on the way to build robust interactive speech technologies.
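
    Illustration (not the authors' evaluation code): word error rate, the metric reported in study 1, is the word-level edit distance between the recognised and reference transcripts divided by the reference length. A minimal sketch with an invented conversational example:

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over words
# (substitutions + deletions + insertions) divided by reference length.
# Illustrative only; not the evaluation code used in the paper.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("yeah no I saw it", "yeah no saw it"))  # 1 deletion / 5 words = 0.2
```
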
  • Lingwood, J., Lampropoulou, S., De Bezena, C., Billington, J., & Rowland, C. F. (2023). Children’s engagement and caregivers’ use of language-boosting strategies during shared book reading: A mixed methods approach. Journal of Child Language, 50(6), 1436-1458. doi:10.1017/S0305000922000290.

    Abstract

    For shared book reading to be effective for language development, the adult and child need to be highly engaged. The current paper adopted a mixed-methods approach to investigate caregiver’s language-boosting behaviours and children’s engagement during shared book reading. The results revealed there were more instances of joint attention and caregiver’s use of prompts during moments of higher engagement. However, instances of most language-boosting behaviours were similar across episodes of higher and lower engagement. Qualitative analysis assessing the link between children’s engagement and caregiver’s use of speech acts revealed that speech acts do seem to contribute to high engagement, in combination with other aspects of the interaction.
  • Lumaca, M., Bonetti, L., Brattico, E., Baggio, G., Ravignani, A., & Vuust, P. (2023). High-fidelity transmission of auditory symbolic material is associated with reduced right–left neuroanatomical asymmetry between primary auditory regions. Cerebral Cortex, 33(11), 6902-6919. doi:10.1093/cercor/bhad009.

    Abstract

    The intergenerational stability of auditory symbolic systems, such as music, is thought to rely on brain processes that allow the faithful transmission of complex sounds. Little is known about the functional and structural aspects of the human brain which support this ability, with a few studies pointing to the bilateral organization of auditory networks as a putative neural substrate. Here, we further tested this hypothesis by examining the role of left–right neuroanatomical asymmetries between auditory cortices. We collected neuroanatomical images from a large sample of participants (nonmusicians) and analyzed them with Freesurfer’s surface-based morphometry method. Weeks after scanning, the same individuals participated in a laboratory experiment that simulated music transmission: the signaling games. We found that high accuracy in the intergenerational transmission of an artificial tone system was associated with reduced rightward asymmetry of cortical thickness in Heschl’s sulcus. Our study suggests that the high-fidelity copying of melodic material may rely on the extent to which computational neuronal resources are distributed across hemispheres. Our data further support the role of interhemispheric brain organization in the cultural transmission and evolution of auditory symbolic systems.
  • Mak, M., Faber, M., & Willems, R. M. (2023). Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study. Cortex, 162, 115-135. doi:10.1016/j.cortex.2023.01.014.

    Abstract

    Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

    Additional information

    figures; localizer tasks; appendix C1
  • Mamus, E., Speed, L. J., Rissman, L., Majid, A., & Özyürek, A. (2023). Lack of visual experience affects multimodal language production: Evidence from congenitally blind and sighted people. Cognitive Science, 47(1): e13228. doi:10.1111/cogs.13228.

    Abstract

    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in the multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2023). The effect of input sensory modality on the multimodal encoding of motion events. Language, Cognition and Neuroscience, 38(5), 711-723. doi:10.1080/23273798.2022.2141282.

    Abstract

    Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and its subsequent multimodal language production—an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting path and manner of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more path descriptions and fewer manner descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.

    Additional information

    Supplemental material
  • Manhardt, F., Brouwer, S., Van Wijk, E., & Özyürek, A. (2023). Word order preference in sign influences speech in hearing bimodal bilinguals but not vice versa: Evidence from behavior and eye-gaze. Bilingualism: Language and Cognition, 26(1), 48-61. doi:10.1017/S1366728922000311.

    Abstract

    We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and their consequences on visual attention during message preparation using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals expressed more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence of word order preference and visual attention in hearing bimodal bilinguals thus appears to be one-directional, modulated by modality-driven differences.
  • Maskalenka, K., Alagöz, G., Krueger, F., Wright, J., Rostovskaya, M., Nakhuda, A., Bendall, A., Krueger, C., Walker, S., Scally, A., & Rugg-Gunn, P. J. (2023). NANOGP1, a tandem duplicate of NANOG, exhibits partial functional conservation in human naïve pluripotent stem cells. Development, 150(2): dev201155. doi:10.1242/dev.201155.

    Abstract

    Gene duplication events can drive evolution by providing genetic material for new gene functions, and they create opportunities for diverse developmental strategies to emerge between species. To study the contribution of duplicated genes to human early development, we examined the evolution and function of NANOGP1, a tandem duplicate of the transcription factor NANOG. We found that NANOGP1 and NANOG have overlapping but distinct expression profiles, with high NANOGP1 expression restricted to early epiblast cells and naïve-state pluripotent stem cells. Sequence analysis and epitope-tagging revealed that NANOGP1 is protein coding with an intact homeobox domain. The duplication that created NANOGP1 occurred earlier in primate evolution than previously thought and has been retained only in great apes, whereas Old World monkeys have disabled the gene in different ways, including homeodomain point mutations. NANOGP1 is a strong inducer of naïve pluripotency; however, unlike NANOG, it is not required to maintain the undifferentiated status of human naïve pluripotent cells. By retaining expression, sequence and partial functional conservation with its ancestral copy, NANOGP1 exemplifies how gene duplication and subfunctionalisation can contribute to transcription factor activity in human pluripotency and development.
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection including setup preparation, experiment design, and piloting. We then describe the data collection process in detail which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • McConnell, K. (2023). Individual differences in holistic and compositional language processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability for inhibiting distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part and those that preferred the local level in the shifting task showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics whereas others more readily retrieve the two words together as a single chunked unit.
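
    Illustration (not the study's own scripts): backward transition probability is standardly defined as the frequency of the bigram divided by the frequency of its second word, i.e. how predictable the modifier is given the noun. A minimal sketch over an invented toy corpus:

```python
# Sketch of backward transition probability (BTP) under its usual definition:
# BTP(w1, w2) = count(w1 w2) / count(w2), i.e. how predictable the modifier is
# given the noun. The toy counts below are invented for illustration only.
from collections import Counter

corpus = ("absolute silence fell . the silence was absolute . "
          "complete silence followed .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def btp(w1, w2):
    return bigrams[(w1, w2)] / unigrams[w2]

print(btp("absolute", "silence"))  # 1/3: 'silence' occurs 3x, once after 'absolute'
```
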
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, yawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white christmash. Cognitive Science, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

    Additional information

    supplementary materials
  • Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.

    Abstract

    Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.
  • Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.

    Abstract

    While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition.
  • Mishra, C., Offrede, T., Fuchs, S., Mooshammer, C., & Skantze, G. (2023). Does a robot’s gaze aversion affect human gaze aversion? Frontiers in Robotics and AI, 10: 1127626. doi:10.3389/frobt.2023.1127626.

    Abstract

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
  • Mishra, C., Verdonschot, R. G., Hagoort, P., & Skantze, G. (2023). Real-time emotion generation in human-robot dialogue using large language models. Frontiers in Robotics and AI, 10: 1271610. doi:10.3389/frobt.2023.1271610.

    Abstract

    Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and the participants were able to perceive the robot’s emotions. It was found that the robot expressing congruent model-driven facial emotion expressions was perceived to be significantly more human-like and emotionally appropriate, and to elicit a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
  • Monaghan, P., Donnelly, S., Alcock, K., Bidgood, A., Cain, K., Durrant, S., Frost, R. L. A., Jago, L. S., Peter, M. S., Pine, J. M., Turnbull, H., & Rowland, C. F. (2023). Learning to generalise but not segment an artificial language at 17 months predicts children’s language skills 3 years later. Cognitive Psychology, 147: 101607. doi:10.1016/j.cogpsych.2023.101607.

    Abstract

    We investigated whether learning an artificial language at 17 months was predictive of children’s natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning to segment and to generalise the structure of the language. At 54 months, children were then tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that learning the artificial language generalisation at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas artificial language segmentation at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children’s early language development.

    Additional information

    supplementary data
  • Morison, L., Meffert, E., Stampfer, M., Steiner-Wilke, I., Vollmer, B., Schulze, K., Briggs, T., Braden, R., Vogel, A. P., Thompson-Lake, D., Patel, C., Blair, E., Goel, H., Turner, S., Moog, U., Riess, A., Liegeois, F., Koolen, D. A., Amor, D. J., Kleefstra, T., Fisher, S. E., Zweier, C., & Morgan, A. T. (2023). In-depth characterisation of a cohort of individuals with missense and loss-of-function variants disrupting FOXP2. Journal of Medical Genetics, 60(6), 597-607. doi:10.1136/jmg-2022-108734.

    Abstract

    Background
    Heterozygous disruptions of FOXP2 were the first identified molecular cause for severe speech disorder: childhood apraxia of speech (CAS), yet few cases have been reported, limiting knowledge of the condition.

    Methods
    Here we phenotyped 29 individuals from 18 families with pathogenic FOXP2-only variants (13 loss-of-function, 5 missense variants; 14 males; aged 2 years to 62 years). Health and development (cognitive, motor, social domains) were examined, including speech and language outcomes with the first cross-linguistic analysis of English and German.

    Results
    Speech disorders were prevalent (24/26, 92%) and CAS was most common (23/26, 89%), with similar speech presentations across English and German. Speech was still impaired in adulthood and some speech sounds (e.g. ‘th’, ‘r’, ‘ch’, ‘j’) were never acquired. Language impairments (22/26, 85%) ranged from mild to severe. Comorbidities included feeding difficulties in infancy (10/27, 37%), fine (14/27, 52%) and gross (14/27, 52%) motor impairment, anxiety (6/28, 21%), depression (7/28, 25%), and sleep disturbance (11/15, 44%). Physical features were common (23/28, 82%) but with no consistent pattern. Cognition ranged from average to mildly impaired, and was incongruent with language ability; for example, seven participants with severe language disorder had average non-verbal cognition.

    Conclusions
    Although we identify increased prevalence of conditions like anxiety, depression and sleep disturbance, we confirm that the consequences of FOXP2 dysfunction remain relatively specific to speech disorder, as compared to other recently identified monogenic conditions associated with CAS. Thus, our findings reinforce that FOXP2 provides a valuable entry point for examining the neurobiological bases of speech disorder.
  • Muhinyi, A., & Rowland, C. F. (2023). Contributions of abstract extratextual talk and interactive style to preschoolers’ vocabulary development. Journal of Child Language, 50(1), 198-213. doi:10.1017/S0305000921000696.

    Abstract

    Caregiver abstract talk during shared reading predicts preschool-age children’s vocabulary development. However, previous research has focused on level of abstraction with less consideration of the style of extratextual talk. Here, we investigated the relation between these two dimensions of extratextual talk, and their contributions to variance in children’s vocabulary skills. Caregiver level of abstraction was associated with an interactive reading style. Controlling for socioeconomic status and child age, high interactivity predicted children’s concurrent vocabulary skills whereas abstraction did not. Controlling for earlier vocabulary skills, neither dimension of the extratextual talk predicted later vocabulary. Theoretical and practical relevance are discussed.
  • Nabrotzky, J., Ambrazaitis, G., Zellers, M., & House, D. (2023). Temporal alignment of manual gestures’ phase transitions with lexical and post-lexical accentual F0 peaks in spontaneous Swedish interaction. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527194.

    Abstract

    Many studies investigating the temporal alignment of co-speech gestures to acoustic units in the speech signal find a close coupling of the gestural landmarks and pitch accents or the stressed syllable of pitch-accented words. In English, a pitch accent is anchored in the lexically stressed syllable. Hence, it is unclear whether it is the lexical phonological dimension of stress, or the phrase-level prominence that determines the details of speech-gesture synchronization. This paper explores the relation between gestural phase transitions and accentual F0 peaks in Stockholm Swedish, which exhibits a lexical pitch accent distinction. When produced with phrase-level prominence, there are three different configurations of lexicality of F0 peaks and the status of the syllable it is aligned with. Through analyzing the alignment of the different F0 peaks with gestural onsets in spontaneous dyadic conversations, we aim to contribute to our understanding of the role of lexical prosodic phonology in the co-production of speech and gesture. The results, though limited by a small dataset, still suggest differences between the three types of peaks concerning which types of gesture phase onsets they tend to align with, and how well these landmarks align with each other, although these differences did not reach significance.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Specific facial signals associate with categories of social actions conveyed through questions. PLoS One, 18(7): e0288104. doi:10.1371/journal.pone.0288104.

    Abstract

    The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker’s intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request “What time is it?”, an invitation “Will you come to my party?” or a criticism “Are you crazy?”). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions were analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.

    Additional information

    supporting information
  • Nota, N., Trujillo, J. P., Jacobs, V., & Holler, J. (2023). Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Scientific Reports, 13: 21295. doi:10.1038/s41598-023-48586-4.

    Abstract

    In conversation, recognizing social actions (similar to ‘speech acts’) early is important to quickly understand the speaker’s intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Conversational eyebrow frowns facilitate question identification: An online study using virtual avatars. Cognitive Science, 47(12): e13392. doi:10.1111/cogs.13392.

    Abstract

    Conversation is a time-pressured environment. Recognizing a social action (the “speech act,” such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers’ intentions.

    Additional information

    link to preprint
  • Nota, N. (2023). Talking faces: The contribution of conversational facial signals to language use and processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Nozais, V., Forkel, S. J., Petit, L., Talozzi, L., Corbetta, M., Thiebaut de Schotten, M., & Joliot, M. (2023). Atlasing white matter and grey matter joint contributions to resting-state networks in the human brain. Communications Biology, 6: 726. doi:10.1038/s42003-023-05107-3.

    Abstract

    Over the past two decades, the study of resting-state functional magnetic resonance imaging has revealed that functional connectivity within and between networks is linked to cognitive states and pathologies. However, the white matter connections supporting this connectivity remain only partially described. We developed a method to jointly map the white and grey matter contributing to each resting-state network (RSN). Using the Human Connectome Project, we generated an atlas of 30 RSNs. The method also highlighted the overlap between networks, which revealed that most of the brain’s white matter (89%) is shared between multiple RSNs, with 16% shared by at least 7 RSNs. These overlaps, especially the existence of regions shared by numerous networks, suggest that white matter lesions in these areas might strongly impact the communication within networks. We provide an atlas and an open-source software to explore the joint contribution of white and grey matter to RSNs and facilitate the study of the impact of white matter damage to these networks. In a first application of the software with clinical data, we were able to link stroke patients and impacted RSNs, showing that their symptoms aligned well with the estimated functions of the networks.
  • Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.

    Abstract

    Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects.

  • Offrede, T., Mishra, C., Skantze, G., Fuchs, S., & Mooshammer, C. (2023). Do Humans Converge Phonetically When Talking to a Robot? In R. Skarnitzl, & J. Volin (Eds.), Proceedings of the 20th International Congress of Phonetic Sciences (pp. 3507-3511). Prague: GUARANT International.

    Abstract

    Phonetic convergence—i.e., adapting one’s speech towards that of an interlocutor—has been shown to occur in human-human conversations as well as human-machine interactions. Here, we investigate the hypothesis that human-to-robot convergence is influenced by the human’s perception of the robot and by the conversation’s topic. We conducted a within-subjects experiment in which 33 participants interacted with two robots differing in their eye gaze behavior—one looked constantly at the participant; the other produced gaze aversions, similarly to a human’s behavior. Additionally, the robot asked questions with increasing intimacy levels. We observed that the speakers tended to converge on F0 to the robots. However, this convergence to the robots was not modulated by how the speakers perceived them or by the topic’s intimacy. Interestingly, speakers produced lower F0 means when talking about more intimate topics. We discuss these findings in terms of current theories of conversational convergence.
  • Oliveira‑Stahl, G., Farboud, S., Sterling, M. L., Heckman, J. J., Van Raalte, B., Lenferink, D., Van der Stam, A., Smeets, C. J. L. M., Fisher, S. E., & Englitz, B. (2023). High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Scientific Reports, 13: 5219. doi:10.1038/s41598-023-31554-3.

    Abstract

    Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2–3-fold improvement in accuracy (13.1–14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.

    Additional information

    supplementary movies and figures
  • Özer, D., Karadöller, D. Z., Özyürek, A., & Göksun, T. (2023). Gestures cued by demonstratives in speech guide listeners' visual attention during spatial language comprehension. Journal of Experimental Psychology: General, 152(9), 2623-2635. doi:10.1037/xge0001402.

    Abstract

    Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying “The candle is here” and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners’ comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (“here”) compared to when they express redundant information to speech (e.g., “right”) and (b) gazing at gestures related to listeners’ information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners’ comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners’ visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
  • Parlatini, V., Itahashi, T., Lee, Y., Liu, S., Nguyen, T. T., Aoki, Y. Y., Forkel, S. J., Catani, M., Rubia, K., Zhou, J. H., Murphy, D. G., & Cortese, S. (2023). White matter alterations in Attention-Deficit/Hyperactivity Disorder (ADHD): a systematic review of 129 diffusion imaging studies with meta-analysis. Molecular Psychiatry, 28, 4098-4123. doi:10.1038/s41380-023-02173-1.

    Abstract

    Aberrant anatomical brain connections in attention-deficit/hyperactivity disorder (ADHD) are reported inconsistently across diffusion weighted imaging (DWI) studies. Based on a pre-registered protocol (Prospero: CRD42021259192), we searched PubMed, Ovid, and Web of Knowledge until 26/03/2022 to conduct a systematic review of DWI studies. We performed a quality assessment based on imaging acquisition, preprocessing, and analysis. Using signed differential mapping, we meta-analyzed a subset of the retrieved studies amenable to quantitative evidence synthesis, i.e., tract-based spatial statistics (TBSS) studies, in individuals of any age and, separately, in children, adults, and high-quality datasets. Finally, we conducted meta-regressions to test the effect of age, sex, and medication-naïvety. We included 129 studies (6739 ADHD participants and 6476 controls), of which 25 TBSS studies provided peak coordinates for case-control differences in fractional anisotropy (FA) (32 datasets) and 18 in mean diffusivity (MD) (23 datasets). The systematic review highlighted white matter alterations (especially reduced FA) in projection, commissural and association pathways of individuals with ADHD, which were associated with symptom severity and cognitive deficits. The meta-analysis showed a consistent reduced FA in the splenium and body of the corpus callosum, extending to the cingulum. Lower FA was related to older age, and case-control differences did not survive in the pediatric meta-analysis. About 68% of studies were of low quality, mainly due to acquisitions with non-isotropic voxels or lack of motion correction; and the sensitivity analysis in high-quality datasets yielded no significant results. Findings suggest prominent alterations in posterior interhemispheric connections subserving cognitive and motor functions affected in ADHD, although these might be influenced by non-optimal acquisition parameters/preprocessing. Absence of findings in children may be related to the late development of callosal fibers, which may enhance case-control differences in adulthood. Clinicodemographic and methodological differences were major barriers to consistency and comparability among studies, and should be addressed in future investigations.
  • Passmore, S., Barth, W., Greenhill, S. J., Quinn, K., Sheard, C., Argyriou, P., Birchall, J., Bowern, C., Calladine, J., Deb, A., Diederen, A., Metsäranta, N. P., Araujo, L. H., Schembri, R., Hickey-Hall, J., Honkola, T., Mitchell, A., Poole, L., Rácz, P. M., Roberts, S. G., Ross, R. M., Thomas-Colquhoun, E., Evans, N., & Jordan, F. M. (2023). Kinbank: A global database of kinship terminology. PLOS ONE, 18: e0283218. doi:10.1371/journal.pone.0283218.

    Abstract

    For a single species, human kinship organization is both remarkably diverse and strikingly organized. Kinship terminology is the structured vocabulary used to classify, refer to, and address relatives and family. Diversity in kinship terminology has been analyzed by anthropologists for over 150 years, although recurrent patterning across cultures remains incompletely explained. Despite the wealth of kinship data in the anthropological record, comparative studies of kinship terminology are hindered by data accessibility. Here we present Kinbank, a new database of 210,903 kinterms from a global sample of 1,229 spoken languages. Using open-access and transparent data provenance, Kinbank offers an extensible resource for kinship terminology, enabling researchers to explore the rich diversity of human family organization and to test longstanding hypotheses about the origins and drivers of recurrent patterns. We illustrate our contribution with two examples. We demonstrate strong gender bias in the phonological structure of parent terms across 1,022 languages, and we show that there is no evidence for a coevolutionary relationship between cross-cousin marriage and bifurcate-merging terminology in Bantu languages. Analysing kinship data is notoriously challenging; Kinbank aims to eliminate data accessibility issues from that challenge and provide a platform to build an interdisciplinary understanding of kinship.

    Additional information

    Supporting Information
  • Paulat, N. S., Storer, J. M., Moreno-Santillán, D. D., Osmanski, A. B., Sullivan, K. A. M., Grimshaw, J. R., Korstian, J., Halsey, M., Garcia, C. J., Crookshanks, C., Roberts, J., Smit, A. F. A., Hubley, R., Rosen, J., Teeling, E. C., Vernes, S. C., Myers, E., Pippel, M., Brown, T., Hiller, M., Zoonomia Consortium, Rojas, D., Dávalos, L. M., Lindblad-Toh, K., Karlsson, E. K., & Ray, D. A. (2023). Chiropterans are a hotspot for horizontal transfer of DNA transposons in mammalia. Molecular Biology and Evolution, 40(5): msad092. doi:10.1093/molbev/msad092.

    Abstract

    Horizontal transfer of transposable elements (TEs) is an important mechanism contributing to genetic diversity and innovation. Bats (order Chiroptera) have repeatedly been shown to experience horizontal transfer of TEs at what appears to be a high rate compared with other mammals. We investigated the occurrence of horizontally transferred (HT) DNA transposons involving bats. We found over 200 putative HT elements within bats; 16 transposons were shared across distantly related mammalian clades, and 2 other elements were shared with a fish and two lizard species. Our results indicate that bats are a hotspot for horizontal transfer of DNA transposons. These events broadly coincide with the diversification of several bat clades, supporting the hypothesis that DNA transposon invasions have contributed to genetic diversification of bats.

    Additional information

    supplemental methods supplemental tables
  • Pender, R., Fearon, P., St Pourcain, B., Heron, J., & Mandy, W. (2023). Developmental trajectories of autistic social traits in the general population. Psychological Medicine, 53(3), 814-822. doi:10.1017/S0033291721002166.

    Abstract

    Background

    Autistic people show diverse trajectories of autistic traits over time, a phenomenon labelled ‘chronogeneity’. For example, some show a decrease in symptoms, whilst others experience an intensification of difficulties. Autism spectrum disorder (ASD) is a dimensional condition, representing one end of a trait continuum that extends throughout the population. To date, no studies have investigated chronogeneity across the full range of autistic traits. We investigated the nature and clinical significance of autism trait chronogeneity in a large, general population sample.
    Methods

    Autistic social/communication traits (ASTs) were measured in the Avon Longitudinal Study of Parents and Children using the Social and Communication Disorders Checklist (SCDC) at ages 7, 10, 13 and 16 (N = 9744). We used Growth Mixture Modelling (GMM) to identify groups defined by their AST trajectories. Measures of ASD diagnosis, sex, IQ and mental health (internalising and externalising) were used to investigate external validity of the derived trajectory groups.
    Results

    The selected GMM model identified four AST trajectory groups: (i) Persistent High (2.3% of sample), (ii) Persistent Low (83.5%), (iii) Increasing (7.3%) and (iv) Decreasing (6.9%) trajectories. The Increasing group, in which females were a slight majority (53.2%), showed dramatic increases in SCDC scores during adolescence, accompanied by escalating internalising and externalising difficulties. Two-thirds (63.6%) of the Decreasing group were male.
    Conclusions

    Clinicians should note that for some young people autism-trait-like social difficulties first emerge during adolescence accompanied by problems with mood, anxiety, conduct and attention. A converse, majority-male group shows decreasing social difficulties during adolescence.
  • Pereira Soares, S. M., Chaouch-Orozco, A., & González Alonso, J. (2023). Innovations and challenges in acquisition and processing methodologies for L3/Ln. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 661-682). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.026.

    Abstract

    The advent of psycholinguistic and neurolinguistic methodologies has provided new insights into theories of language acquisition. Sequential multilingualism is no exception, and some of the most recent work on the subject has incorporated a particular focus on language processing. This chapter surveys some of the work on the processing of lexical and morphosyntactic aspects of third or further languages, with different offline and online methodologies. We also discuss how, while increasingly sophisticated techniques and experimental designs have improved our understanding of third language acquisition and processing, simpler but clever designs can answer pressing questions in our theoretical debate. We provide examples of both sophistication and clever simplicity in experimental design, and argue that the field would benefit from incorporating a combination of both concepts into future work.
  • Piai, V., & Eikelboom, D. (2023). Brain areas critical for picture naming: A systematic review and meta-analysis of lesion-symptom mapping studies. Neurobiology of Language, 4(2), 280-296. doi:10.1162/nol_a_00097.

    Abstract

    Lesion-symptom mapping (LSM) studies have revealed brain areas critical for naming, typically finding significant associations between damage to left temporal, inferior parietal, and inferior frontal regions and impoverished naming performance. However, specific subregions found in the available literature vary. Hence, the aim of this study was to perform a systematic review and meta-analysis of published lesion-based findings, obtained from studies with unique cohorts investigating brain areas critical for accuracy in naming in stroke patients at least 1 month post-onset. An anatomic likelihood estimation (ALE) meta-analysis of these LSM studies was performed. Ten papers entered the ALE meta-analysis, with similar lesion coverage over left temporal and left inferior frontal areas. This small number is a major limitation of the present study. Clusters were found in left anterior temporal lobe, posterior temporal lobe extending into inferior parietal areas, in line with the arcuate fasciculus, and in pre- and postcentral gyri and middle frontal gyrus. No clusters were found in left inferior frontal gyrus. These results were further substantiated by examining five naming studies that investigated performance beyond global accuracy, corroborating the ALE meta-analysis results. The present review and meta-analysis highlight the involvement of left temporal and inferior parietal cortices in naming, and of mid to posterior portions of the temporal lobe in particular in conceptual-lexical retrieval for speaking.

    Additional information

    data
  • Quaresima, A., Fitz, H., Duarte, R., Van den Broek, D., Hagoort, P., & Petersson, K. M. (2023). The Tripod neuron: A minimal structural reduction of the dendritic tree. The Journal of Physiology, 601(15), 3007-3437. doi:10.1113/JP283399.

    Abstract

    Neuron models with explicit dendritic dynamics have shed light on mechanisms for coincidence detection, pathway selection and temporal filtering. However, it is still unclear which morphological and physiological features are required to capture these phenomena. In this work, we introduce the Tripod neuron model and propose a minimal structural reduction of the dendritic tree that is able to reproduce these computations. The Tripod is a three-compartment model consisting of two segregated passive dendrites and a somatic compartment modelled as an adaptive, exponential integrate-and-fire neuron. It incorporates dendritic geometry, membrane physiology and receptor dynamics as measured in human pyramidal cells. We characterize the response of the Tripod to glutamatergic and GABAergic inputs and identify parameters that support supra-linear integration, coincidence-detection and pathway-specific gating through shunting inhibition. Following NMDA spikes, the Tripod neuron generates plateau potentials whose duration depends on the dendritic length and the strength of synaptic input. When fitted with distal compartments, the Tripod encodes previous activity into a dendritic depolarized state. This dendritic memory allows the neuron to perform temporal binding, and we show that it solves transition and sequence detection tasks on which a single-compartment model fails. Thus, the Tripod can account for dendritic computations previously explained only with more detailed neuron models or neural networks. Due to its simplicity, the Tripod neuron can be used efficiently in simulations of larger cortical circuits.
  • Raghavan, R., Raviv, L., & Peeters, D. (2023). What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition, 240: 105581. doi:10.1016/j.cognition.2023.105581.

    Abstract

    Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
  • Raimondi, T., Di Panfilo, G., Pasquali, M., Zarantonello, M., Favaro, L., Savini, T., Gamba, M., & Ravignani, A. (2023). Isochrony and rhythmic interaction in ape duetting. Proceedings of the Royal Society B: Biological Sciences, 290: 20222244. doi:10.1098/rspb.2022.2244.

    Abstract

    How did rhythm originate in humans, and other species? One cross-cultural universal, frequently found in human music, is isochrony: when note onsets repeat regularly like the ticking of a clock. Another universal consists in synchrony (e.g. when individuals coordinate their notes so that they are sung at the same time). An approach to biomusicology focuses on similarities and differences across species, trying to build phylogenies of musical traits. Here we test for the presence of, and a link between, isochrony and synchrony in a non-human animal. We focus on the songs of one of the few singing primates, the lar gibbon (Hylobates lar), extracting temporal features from their solo songs and duets. We show that another ape exhibits one rhythmic feature at the core of human musicality: isochrony. We show that an enhanced call rate overall boosts isochrony, suggesting that respiratory physiological constraints play a role in determining the song's rhythmic structure. However, call rate alone cannot explain the flexible isochrony we witness. Isochrony is plastic and modulated depending on the context of emission: gibbons are more isochronous when duetting than singing solo. We present evidence for rhythmic interaction: we find statistical causality between one individual's note onsets and the co-singer's onsets, and a higher than chance degree of synchrony in the duets. Finally, we find a sex-specific trade-off between individual isochrony and synchrony. Gibbon's plasticity for isochrony and rhythmic overlap may suggest a potential shared selective pressure for interactive vocal displays in singing primates. This pressure may have convergently shaped human and gibbon musicality while acting on a common neural primate substrate. Beyond humans, singing primates are promising models to understand how music and, specifically, a sense of rhythm originated in the primate phylogeny.
  • Rasenberg, M. (2023). Mutual understanding from a multimodal and interactional perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Rasenberg, M., Amha, A., Coler, M., van Koppen, M., van Miltenburg, E., de Rijk, L., Stommel, W., & Dingemanse, M. (2023). Reimagining language: Towards a better understanding of language by including our interactions with non-humans. Linguistics in the Netherlands, 40, 309-317. doi:10.1075/avt.00095.ras.

    Abstract

    What is language and who or what can be said to have it? In this essay we consider this question in the context of interactions with non-humans, specifically: animals and computers. While perhaps an odd pairing at first glance, here we argue that these domains can offer contrasting perspectives through which we can explore and reimagine language. The interactions between humans and animals, as well as between humans and computers, reveal both the essence and the boundaries of language: from examining the role of sequence and contingency in human-animal interaction, to unravelling the challenges of natural interactions with “smart” speakers and language models. By bringing together disparate fields around foundational questions, we push the boundaries of linguistic inquiry and uncover new insights into what language is and how it functions in diverse non-human-exclusive contexts.
  • Ravignani, A., & Herbst, C. T. (2023). Voices in the ocean: Toothed whales evolved a third way of making sounds similar to that of land mammals and birds. Science, 379(6635), 881-882. doi:10.1126/science.adg5256.
  • Raviv, L., & Kirby, S. (2023). Self domestication and the cultural evolution of language. In J. J. Tehrani, J. Kendal, & R. Kendal (Eds.), The Oxford Handbook of Cultural Evolution. Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198869252.013.60.

    Abstract

    The structural design features of human language emerge in the process of cultural evolution, shaping languages over the course of communication, learning, and transmission. What role does this leave biological evolution? This chapter highlights the biological bases and preconditions that underlie the particular type of prosocial behaviours and cognitive inference abilities that are required for languages to emerge via cultural evolution to begin with.
  • Raviv, L., Jacobson, S. L., Plotnik, J. M., Bowman, J., Lynch, V., & Benítez-Burraco, A. (2023). Elephants as an animal model for self-domestication. Proceedings of the National Academy of Sciences of the United States of America, 120(15): e2208607120. doi:10.1073/pnas.2208607120.

    Abstract

    Humans are unique in their sophisticated culture and societal structures, their complex languages, and their extensive tool use. According to the human self-domestication hypothesis, this unique set of traits may be the result of an evolutionary process of self-induced domestication, in which humans evolved to be less aggressive and more cooperative. However, the only other species that has been argued to be self-domesticated besides humans so far is bonobos, resulting in a narrow scope for investigating this theory limited to the primate order. Here, we propose an animal model for studying self-domestication: the elephant. First, we support our hypothesis with an extensive cross-species comparison, which suggests that elephants indeed exhibit many of the features associated with self-domestication (e.g., reduced aggression, increased prosociality, extended juvenile period, increased playfulness, socially regulated cortisol levels, and complex vocal behavior). Next, we present genetic evidence to reinforce our proposal, showing that genes positively selected in elephants are enriched in pathways associated with domestication traits and include several candidate genes previously associated with domestication. We also discuss several explanations for what may have triggered a self-domestication process in the elephant lineage. Our findings support the idea that elephants, like humans and bonobos, may be self-domesticated. Since the most recent common ancestor of humans and elephants is likely the most recent common ancestor of all placental mammals, our findings have important implications for convergent evolution beyond the primate taxa, and constitute an important advance toward understanding how and why self-domestication shaped humans’ unique cultural niche.

    Additional information

    supporting information
  • Roe, J. M., Vidal-Piñeiro, D., Amlien, I. K., Pan, M., Sneve, M. H., Thiebaut de Schotten, M., Friedrich, P., Sha, Z., Francks, C., Eilertsen, E. M., Wang, Y., Walhovd, K. B., Fjell, A. M., & Westerhausen, R. (2023). Tracing the development and lifespan change of population-level structural asymmetry in the cerebral cortex. eLife, 12: e84685. doi:10.7554/eLife.84685.

    Abstract

    Cortical asymmetry is a ubiquitous feature of brain organization that is altered in neurodevelopmental disorders and aging. Achieving consensus on cortical asymmetries in humans is necessary to uncover the genetic-developmental mechanisms that shape them and factors moderating cortical lateralization. Here, we delineate population-level asymmetry in cortical thickness and surface area vertex-wise in 7 datasets and chart asymmetry trajectories across life (4-89 years; observations = 3937; 70% longitudinal). We reveal asymmetry interrelationships, heritability, and test associations in UK Biobank (N=∼37,500). Cortical asymmetry was robust across datasets. Whereas areal asymmetry is predominantly stable across life, thickness asymmetry grows in development and declines in aging. Areal asymmetry correlates in specific regions, whereas thickness asymmetry is globally interrelated across cortex and suggests high directional variability in global thickness lateralization. Areal asymmetry is moderately heritable (max h2SNP ∼19%), and phenotypic correlations are reflected by high genetic correlations, whereas heritability of thickness asymmetry is low. Finally, we detected an asymmetry association with cognition and confirm recently-reported handedness links. Results suggest areal asymmetry is developmentally stable and arises in early life, whereas developmental changes in thickness asymmetry may lead to directional variability of global thickness lateralization. Our results bear enough reproducibility to serve as a standard for future brain asymmetry studies.

    Additional information

    link to preprint supplementary files
  • Rossi, E., Pereira Soares, S. M., Prystauka, Y., Nakamura, M., & Rothman, J. (2023). Riding the (brain) waves! Using neural oscillations to inform bilingualism research. Bilingualism: Language and Cognition, 26(1), 202-215. doi:10.1017/S1366728922000451.

    Abstract

    The study of the brain’s oscillatory activity has been a standard technique to gain insights into human neurocognition for a relatively long time. However, as a complementary analysis to ERPs, only very recently has it been utilized to study bilingualism and its neural underpinnings. Here, we provide a theoretical and methodological starter for scientists in the (psycho)linguistics and neurocognition of bilingualism field(s) to understand the bases and applications of this analytical tool. Towards this goal, we provide a description of the characteristics of the human neural (and its oscillatory) signal, followed by an in-depth description of various types of EEG oscillatory analyses, supplemented by figures and relevant examples. We then utilize the scant, yet emergent, literature on neural oscillations and bilingualism to highlight the potential of analyzing neural oscillations to advance the (psycho)linguistic and neurocognitive understanding of bilingualism.
  • Rossi, G., Dingemanse, M., Floyd, S., Baranova, J., Blythe, J., Kendrick, K. H., Zinken, J., & Enfield, N. J. (2023). Shared cross-cultural principles underlie human prosocial behavior at the smallest scale. Scientific Reports, 13: 6057. doi:10.1038/s41598-023-30580-5.

    Abstract

    Prosociality and cooperation are key to what makes us human. But different cultural norms can shape our evolved capacities for interaction, leading to differences in social relations. How people share resources has been found to vary across cultures, particularly when stakes are high and when interactions are anonymous. Here we examine prosocial behavior among familiars (both kin and non-kin) in eight cultures on five continents, using video recordings of spontaneous requests for immediate, low-cost assistance (e.g., to pass a utensil). We find that, at the smallest scale of human interaction, prosocial behavior follows cross-culturally shared principles: requests for assistance are very frequent and mostly successful; and when people decline to give help, they normally give a reason. Although there are differences in the rates at which such requests are ignored, or require verbal acceptance, cultural variation is limited, pointing to a common foundation for everyday cooperation around the world.
  • Rutz, C., Bronstein, M., Raskin, A., Vernes, S. C., Zacarian, K., & Blasi, D. E. (2023). Using machine learning to decode animal communication. Science, 381(6654), 152-155. doi:10.1126/science.adg7314.

    Abstract

    The past few years have seen a surge of interest in using machine learning (ML) methods for studying the behavior of nonhuman animals (hereafter “animals”) (1). A topic that has attracted particular attention is the decoding of animal communication systems using deep learning and other approaches (2). Now is the time to tackle challenges concerning data availability, model validation, and research ethics, and to embrace opportunities for building collaborations across disciplines and initiatives.
  • Ryskin, R., & Nieuwland, M. S. (2023). Prediction during language comprehension: What is next? Trends in Cognitive Sciences, 27(11), 1032-1052. doi:10.1016/j.tics.2023.08.003.

    Abstract

    Prediction is often regarded as an integral aspect of incremental language comprehension, but little is known about the cognitive architectures and mechanisms that support it. We review studies showing that listeners and readers use all manner of contextual information to generate multifaceted predictions about upcoming input. The nature of these predictions may vary between individuals owing to differences in language experience, among other factors. We then turn to unresolved questions which may guide the search for the underlying mechanisms. (i) Is prediction essential to language processing or an optional strategy? (ii) Are predictions generated from within the language system or by domain-general processes? (iii) What is the relationship between prediction and memory? (iv) Does prediction in comprehension require simulation via the production system? We discuss promising directions for making progress in answering these questions and for developing a mechanistic understanding of prediction in language.
  • Sajovic, J., Meglič, A., Corradi, Z., Khan, M., Maver, A., Vidmar, M. J., Hawlina, M., Cremers, F. P. M., & Fakin, A. (2023). ABCA4 variant c.5714+5G>A in trans with null alleles results in primary RPE damage. Investigative Ophthalmology & Visual Science, 64(12): 33. doi:10.1167/iovs.64.12.33.

    Abstract

    Purpose: To determine the disease pathogenesis associated with the frequent ABCA4 variant c.5714+5G>A (p.[=,Glu1863Leufs*33]).

    Methods: Patient-derived photoreceptor precursor cells were generated to analyze the effect of c.5714+5G>A on splicing and perform a quantitative analysis of c.5714+5G>A products. Patients with c.5714+5G>A in trans with a null allele (i.e., c.5714+5G>A patients; n = 7) were compared with patients with two null alleles (i.e., double null patients; n = 11), with special attention to the degree of RPE atrophy (area of definitely decreased autofluorescence) and the degree of photoreceptor impairment (outer nuclear layer thickness and pattern electroretinography amplitude).

    Results: RT-PCR of mRNA from patient-derived photoreceptor precursor cells showed exon 40 and exon 39/40 deletion products, as well as the normal transcript. Quantification of products showed 52.4% normal and 47.6% mutant ABCA4 mRNA. Clinically, c.5714+5G>A patients displayed significantly better structural and functional preservation of photoreceptors (thicker outer nuclear layer, presence of tubulations, higher pattern electroretinography amplitude) than double null patients with similar degrees of RPE loss, whereas double null patients exhibited signs of extensive photoreceptor damage even in the areas with preserved RPE.

    Conclusions: The prototypical STGD1 sequence of events of primary RPE and secondary photoreceptor damage is congruous with c.5714+5G>A, but not the double null genotype, which implies different and genotype-dependent disease mechanisms. We hypothesize that the relative photoreceptor sparing in c.5714+5G>A patients results from the remaining function of the ABCA4 transporter originating from the normally spliced product, possibly by decreasing the direct bisretinoid toxicity on photoreceptor membranes.
  • Sander, J., Lieberman, A., & Rowland, C. F. (2023). Exploring joint attention in American Sign Language: The influence of sign familiarity. In M. Goldwater, F. K. Anggoro, B. K. Hayes, & D. C. Ong (Eds.), Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023) (pp. 632-638).

    Abstract

    Children’s ability to share attention with another social partner (i.e., joint attention) has been found to support language development. Despite the large amount of research examining the effects of joint attention on language in hearing populations, little is known about how deaf children learning sign languages achieve joint attention with their caregivers during natural social interaction and how caregivers provide and scaffold learning opportunities for their children. The present study investigates the properties and timing of joint attention surrounding familiar and novel naming events and their relationship to children’s vocabulary. Naturalistic play sessions of caretaker-child dyads using American Sign Language were analyzed with regard to naming events involving either familiar or novel object labels and the surrounding joint attention events. We observed that most naming events took place in the context of a successful joint attention event and that sign familiarity was related to the timing of naming events within the joint attention events. Our results suggest that caregivers are highly sensitive to their child’s visual attention in interactions and modulate joint attention differently in the context of naming events of familiar vs. novel object labels.
  • Scheibel, M., & Indefrey, P. (2023). Top-down enhanced object recognition in blocking and priming paradigms. Journal of Experimental Psychology: Human Perception and Performance, 49(3), 327-354. doi:10.1037/xhp0001094.

    Abstract

    Previous studies have demonstrated that context manipulations by semantic blocking and category priming can, under particular design conditions, give rise to semantic facilitation effects. The interpretation of semantic facilitation effects is controversial in the word production literature; perceptual accounts propose that contextually facilitated object recognition may underlie facilitation effects. The present study tested this notion. We investigated the difficulty of object recognition in a semantic blocking and a category priming task. We presented all pictures in gradually de-blurring image sequences and measured the de-blurring level that first allowed for correct object naming as an indicator of the perceptual demands of object recognition. Based on object recognition models assuming a temporal progression from coarse- to fine-grained visual processing, we reasoned that the lower the required level of detail, the more efficient the recognition processes. The results demonstrate that categorically related contexts reduce the level of visual detail required for object naming compared to unrelated contexts, with this effect being most pronounced for shape-distinctive objects and in contexts providing explicit category cues. We propose a top-down explanation based on target predictability of the observed effects. Implications of the recognition effects based on target predictability for the interpretation of context effects observed in latencies are discussed.

    Additional information

    Stimuli, Ratings, Analysis codes
  • Scherz, M. D., Schmidt, R., Brown, J. L., Glos, J., Lattenkamp, E. Z., Rakotomalala, Z., Rakotoarison, A., Rakotonindrina, R. T., Randriamalala, O., Raselimanana, A. P., Rasolonjatovo, S. M., Ratsoavina, F. M., Razafindraibe, J. H., Glaw, F., & Vences, M. (2023). Repeated divergence of amphibians and reptiles across an elevational gradient in northern Madagascar. Ecology and Evolution, 13(3): e9914. doi:10.1002/ece3.9914.

    Abstract

    How environmental factors shape patterns of biotic diversity in tropical ecosystems is an active field of research, but studies examining the possibility of ecological speciation in terrestrial tropical ecosystems are scarce. We use the isolated rainforest herpetofauna on the Montagne d'Ambre (Amber Mountain) massif in northern Madagascar as a model to explore elevational divergence at the level of populations and communities. Based on intensive sampling and DNA barcoding of amphibians and reptiles along a transect ranging from ca. 470–1470 m above sea level (a.s.l.), we identified a main peak in species richness at an elevation of ca. 1000 m a.s.l. with 41 species. The proportion of local endemics was highest (about 1/3) at elevations >1100 m a.s.l. Two species of chameleons (Brookesia tuberculata, Calumma linotum) and two species of frogs (Mantidactylus bellyi, M. ambony) studied in depth using newly developed microsatellite markers showed genetic divergence up the slope of the mountain, some quite strong, others very weak, but in each case with genetic breaks between 1100 and 1270 m a.s.l. Genetic clusters were found in transect sections significantly differing in bioclimate and herpetological community composition. A decrease in body size was detected in several species with increasing elevation. The studied rainforest amphibians and reptiles show concordant population genetic differentiation across elevation along with morphological and niche differentiation. Whether this parapatric or microallopatric differentiation will suffice for the completion of speciation is, however, unclear, and available phylogeographic evidence rather suggests that a complex interplay between ecological and allopatric divergence processes is involved in generating the extraordinary species diversity of Madagascar's biota. Our study reveals concordant patterns of diversification among main elevational bands, but suggests that these adaptational processes are only part of the complex of processes leading to species formation, among which geographical isolation is probably also important.

    Additional information

    supplementary materials
  • Schijven, D., Postema, M., Fukunaga, M., Matsumoto, J., Miura, K., De Zwarte, S. M., Van Haren, N. E. M., Cahn, W., Hulshoff Pol, H. E., Kahn, R. S., Ayesa-Arriola, R., Ortiz-García de la Foz, V., Tordesillas-Gutierrez, D., Vázquez-Bourgon, J., Crespo-Facorro, B., Alnæs, D., Dahl, A., Westlye, L. T., Agartz, I., Andreassen, O. A., Jönsson, E. G., Kochunov, P., Bruggemann, J. M., Catts, S. V., Michie, P. T., Mowry, B. J., Quidé, Y., Rasser, P. E., Schall, U., Scott, R. J., Carr, V. J., Green, M. J., Henskens, F. A., Loughland, C. M., Pantelis, C., Weickert, C. S., Weickert, T. W., De Haan, L., Brosch, K., Pfarr, J.-K., Ringwald, K. G., Stein, F., Jansen, A., Kircher, T. T., Nenadić, I., Krämer, B., Gruber, O., Satterthwaite, T. D., Bustillo, J., Mathalon, D. H., Preda, A., Calhoun, V. D., Ford, J. M., Potkin, S. G., Chen, J., Tan, Y., Wang, Z., Xiang, H., Fan, F., Bernardoni, F., Ehrlich, S., Fuentes-Claramonte, P., Garcia-Leon, M. A., Guerrero-Pedraza, A., Salvador, R., Sarró, S., Pomarol-Clotet, E., Ciullo, V., Piras, F., Vecchio, D., Banaj, N., Spalletta, G., Michielse, S., Van Amelsvoort, T., Dickie, E. W., Voineskos, A. N., Sim, K., Ciufolini, S., Dazzan, P., Murray, R. M., Kim, W.-S., Chung, Y.-C., Andreou, C., Schmidt, A., Borgwardt, S., McIntosh, A. M., Whalley, H. C., Lawrie, S. M., Du Plessis, S., Luckhoff, H. K., Scheffler, F., Emsley, R., Grotegerd, D., Lencer, R., Dannlowski, U., Edmond, J. T., Rootes-Murdy, K., Stephen, J. M., Mayer, A. R., Antonucci, L. A., Fazio, L., Pergola, G., Bertolino, A., Díaz-Caneja, C. M., Janssen, J., Lois, N. G., Arango, C., Tomyshev, A. S., Lebedeva, I., Cervenka, S., Sellgren, C. M., Georgiadis, F., Kirschner, M., Kaiser, S., Hajek, T., Skoch, A., Spaniel, F., Kim, M., Kwak, Y. B., Oh, S., Kwon, J. S., James, A., Bakker, G., Knöchel, C., Stäblein, M., Oertel, V., Uhlmann, A., Howells, F. M., Stein, D. J., Temmingh, H. S., Diaz-Zuluaga, A. M., Pineda-Zapata, J. A., López-Jaramillo, C., Homan, S., Ji, E., Surbeck, W., Homan, P., Fisher, S. E., Franke, B., Glahn, D. C., Gur, R. C., Hashimoto, R., Jahanshad, N., Luders, E., Medland, S. E., Thompson, P. M., Turner, J. A., Van Erp, T. G., & Francks, C. (2023). Large-scale analysis of structural brain asymmetries in schizophrenia via the ENIGMA consortium. Proceedings of the National Academy of Sciences of the United States of America, 120(14): e2213880120. doi:10.1073/pnas.2213880120.

    Abstract

    Left–right asymmetry is an important organizing feature of the healthy brain that may be altered in schizophrenia, but most studies have used relatively small samples and heterogeneous approaches, resulting in equivocal findings. We carried out the largest case–control study of structural brain asymmetries in schizophrenia, with MRI data from 5,080 affected individuals and 6,015 controls across 46 datasets, using a single image analysis protocol. Asymmetry indexes were calculated for global and regional cortical thickness, surface area, and subcortical volume measures. Differences of asymmetry were calculated between affected individuals and controls per dataset, and effect sizes were meta-analyzed across datasets. Small average case–control differences were observed for thickness asymmetries of the rostral anterior cingulate and the middle temporal gyrus, both driven by thinner left-hemispheric cortices in schizophrenia. Analyses of these asymmetries with respect to the use of antipsychotic medication and other clinical variables did not show any significant associations. Assessment of age- and sex-specific effects revealed a stronger average leftward asymmetry of pallidum volume between older cases and controls. Case–control differences in a multivariate context were assessed in a subset of the data (N = 2,029), which revealed that 7% of the variance across all structural asymmetries was explained by case–control status. Subtle case–control differences of brain macrostructural asymmetry may reflect differences at the molecular, cytoarchitectonic, or circuit levels that have functional relevance for the disorder. Reduced left middle temporal cortical thickness is consistent with altered left-hemisphere language network organization in schizophrenia.

    Additional information

    Supporting Information link to preprint
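
    The abstract above refers to asymmetry indexes computed from paired left- and right-hemisphere measures. The snippet below shows a commonly used definition of such an index; whether this exact formula was used in the study is not stated in the abstract, so it should be read as an illustrative assumption.

    # Illustrative asymmetry index; a commonly used definition, not necessarily
    # the exact formula used in the ENIGMA analysis described above.
    def asymmetry_index(left: float, right: float) -> float:
        """(L - R) / (L + R): positive = leftward, negative = rightward asymmetry."""
        return (left - right) / (left + right)

    # e.g. left/right middle temporal gyrus thickness in mm (hypothetical values):
    # asymmetry_index(2.95, 3.02)  # -> about -0.012 (slight rightward asymmetry)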
  • Seijdel, N., Marshall, T. R., & Drijvers, L. (2023). Rapid invisible frequency tagging (RIFT): A promising technique to study neural and cognitive processing using naturalistic paradigms. Cerebral Cortex, 33(5), 1626-1629. doi:10.1093/cercor/bhac160.

    Abstract

    Frequency tagging has been successfully used to investigate selective stimulus processing in electroencephalography (EEG) or magnetoencephalography (MEG) studies. Recently, new projectors have been developed that allow for frequency tagging at higher frequencies (>60 Hz). This technique, rapid invisible frequency tagging (RIFT), provides two crucial advantages over low-frequency tagging: (i) it leaves low-frequency oscillations unperturbed, and thus open for investigation, and (ii) it can render the tagging invisible, resulting in more naturalistic paradigms and a lack of participant awareness. The development of this technique has far-reaching implications as oscillations involved in cognitive processes can be investigated, and potentially manipulated, in a more naturalistic manner.
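
    To make the tagging idea concrete, the sketch below generates a high-frequency sinusoidal luminance modulation of the kind used for rapid invisible frequency tagging. The 68 Hz tagging frequency and 1440 Hz display rate are illustrative assumptions, not parameters prescribed by the article.

    # Minimal sketch of a rapid frequency tagging signal: a stimulus patch whose
    # luminance is modulated sinusoidally above the flicker-fusion threshold.
    import numpy as np

    def tagging_luminance(duration_s=2.0, tag_hz=68.0, display_hz=1440.0,
                          mean=0.5, depth=0.5):
        """Per-frame luminance values (0-1) for a patch tagged at tag_hz."""
        t = np.arange(0.0, duration_s, 1.0 / display_hz)
        return mean + depth * 0.5 * np.sin(2.0 * np.pi * tag_hz * t)

    frames = tagging_luminance()
    # Because tag_hz lies above ~60 Hz, the flicker itself is not consciously
    # visible, but steady-state responses at the tagging frequency can still be
    # recovered from MEG/EEG recordings.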
  • Sekine, K., & Kajikawa, T. (2023). Does the spatial distribution of a speaker's gaze and gesture impact on a listener's comprehension of discourse? In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527208.

    Abstract

    This study investigated the impact of a speaker's gaze direction on a listener's comprehension of discourse. Previous research suggests that hand gestures play a role in referent allocation, enabling listeners to better understand the discourse. The current study aims to determine whether the speaker's gaze direction has a similar effect on reference resolution as co-speech gestures. Thirty native Japanese speakers participated in the study and were assigned to one of three conditions: congruent, incongruent, or speech-only. Participants watched 36 videos of an actor narrating a story consisting of three sentences with two protagonists. The speaker consistently used hand gestures to allocate one protagonist to the lower right and the other to the lower left space, while directing her gaze to the space of the target person (congruent), the space of the other person (incongruent), or no particular space (speech-only). Participants were required to verbally answer a question about the target protagonist involved in an accidental event as quickly as possible. Results indicate that participants in the congruent condition exhibited faster reaction times than those in the incongruent condition, although the difference was not significant. These findings suggest that the speaker's gaze direction is not enough to facilitate a listener's comprehension of discourse.
  • Senft, G. (2023). The system of classifiers in Kilivila - The role of these formatives and their functions. In M. Allassonnière-Tang, & M. Kilarski (Eds.), Nominal Classification in Asia and Oceania. Functional and diachronic perspectives (pp. 10-29). Amsterdam: John Benjamins. doi:10.1075/cilt.362.02sen.

    Abstract

    This paper presents the complex system of classifiers in Kilivila, the language of the Trobriand Islanders of Papua New Guinea. After a brief introduction to the language and its speakers, the classifier system is briefly described with respect to the role of these formatives for the word formation of Kilivila numerals, adjectives, demonstratives and one form of an interrogative pronoun/adverb. Then the functions the classifier system fulfils with respect to concord, temporary classification, the unitizing of nominal expressions, nominalization, indication of plural, anaphoric reference as well as text and discourse coherence are discussed and illustrated. The paper ends with some language specific and cross-linguistic questions for further research.
  • Seuren, P. A. M. (2023). A refutation of positivism in philosophy of mind: Thinking, reality, and language. London: Routledge.

    Abstract

    This book argues that positivism, though now the dominant paradigm for both the natural and the human sciences, is intrinsically unfit for the latter. In particular, it is unfit for linguistics and cognitive science, where it is ultimately self-destructive, since it fails to account for causality, while the mind, the primary object of research of the human sciences, cannot be understood unless considered to be an autonomous causal force. 

    Author Pieter Albertus Maria Seuren, who died shortly after this manuscript was finished and after a remarkable career, reviews the history of this issue since the seventeenth century. He focuses on Descartes, Leibniz, British Empiricism and Kant, arguing that neither cognition nor language can be adequately accounted for unless the mind is given its full due. This implies that a distinction must be made—following Alexius Meinong, but against Russell and Quine—between actual and virtual reality. The latter is a product of the causally active mind and a necessary ingredient for the setting up of mental models, without which neither cognition nor language can function. Mental models are coherent sets of propositions, and can be wholly or partially true or false. Positivism rules out mental models, blocking any serious semantics and thereby reducing both language and cognition to caricatures of themselves. Seuren presents a causal theory of meaning, linking up language with cognition and solving the old question of what meaning actually amounts to.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2023). Syllable rate drives rate normalization, but is not the only factor. In R. Skarnitzl, & J. Volín (Eds.), Proceedings of the 20th International Congress of the Phonetic Sciences (ICPhS 2023) (pp. 56-60). Prague: Guarant International.

    Abstract

    Speech is perceived relative to the speech rate in the context. It is unclear, however, what information listeners use to compute speech rate. The present study examines whether listeners use the number of syllables per unit time (i.e., syllable rate) as a measure of speech rate, as indexed by subsequent vowel perception. We ran two rate-normalization experiments in which participants heard duration-matched word lists that contained either monosyllabic vs. bisyllabic words (Experiment 1), or monosyllabic vs. trisyllabic pseudowords (Experiment 2). The participants’ task was to categorize an /ɑ-aː/ continuum that followed the word lists. The monosyllabic condition was perceived as slower (i.e., fewer /aː/ responses) than the bisyllabic and trisyllabic conditions. However, no difference was observed between bisyllabic and trisyllabic contexts. Therefore, while syllable rate is used in perceiving speech rate, other factors, such as fast speech processes, mean F0, and intensity, must also influence rate normalization.
  • Severijnen, G. G. A., Di Dona, G., Bosker, H. R., & McQueen, J. M. (2023). Tracking talker-specific cues to lexical stress: Evidence from perceptual learning. Journal of Experimental Psychology: Human Perception and Performance, 49(4), 549-565. doi:10.1037/xhp0001105.

    Abstract

    When recognizing spoken words, listeners are confronted by variability in the speech signal caused by talker differences. Previous research has focused on segmental talker variability; less is known about how suprasegmental variability is handled. Here we investigated the use of perceptual learning to deal with between-talker differences in lexical stress. Two groups of participants heard Dutch minimal stress pairs (e.g., VOORnaam vs. voorNAAM, “first name” vs. “respectable”) spoken by two male talkers. Group 1 heard Talker 1 use only F0 to signal stress (intensity and duration values were ambiguous), while Talker 2 used only intensity (F0 and duration were ambiguous). Group 2 heard the reverse talker-cue mappings. After training, participants were tested on words from both talkers containing conflicting stress cues (“mixed items”; e.g., one spoken by Talker 1 with F0 signaling initial stress and intensity signaling final stress). We found that listeners used previously learned information about which talker used which cue to interpret the mixed items. For example, the mixed item described above tended to be interpreted as having initial stress by Group 1 but as having final stress by Group 2. This demonstrates that listeners learn how individual talkers signal stress and use that knowledge in spoken-word recognition.
  • Seyfried, F., & Udden, J. (2023). Phonotactics and syntax: Investigating functional specialisation during structured sequence processing. Language, Cognition and Neuroscience, 38(3), 346-358. doi:10.1080/23273798.2022.2116462.

    Abstract

    Frontal lobe organisation displays a functional gradient, with overarching processing goals located in parts anterior to more subordinate goals, processed more posteriorly. Functional specialisation for syntax and phonology within language relevant areas has been supported by meta-analyses and reviews, but never directly tested experimentally. We tested for organised functional specialisation by manipulating syntactic case and phonotactics, creating violations at the end of otherwise matched and predictable sentences. Both violations led to increased activation in expected language regions. We observe the clearest signs of a functional gradient for language processing in the medial frontal cortex, where syntactic violations activated a more anterior portion compared to the phonotactic violations. A large overlap of syntactic and phonotactic processing in the left inferior frontal gyrus (LIFG) supports the view that general structured sequence processes are located in this area. These findings are relevant for understanding how sentence processing is implemented in hierarchically organised processing steps in the frontal lobe.

    Additional information

    supplementary methods
  • Sha, Z., Schijven, D., Fisher, S. E., & Francks, C. (2023). Genetic architecture of the white matter connectome of the human brain. Science Advances, 9(7): eadd2870. doi:10.1126/sciadv.add2870.

    Abstract

    White matter tracts form the structural basis of large-scale brain networks. We applied brain-wide tractography to diffusion images from 30,810 adults (U.K. Biobank) and found significant heritability for 90 node-level and 851 edge-level network connectivity measures. Multivariate genome-wide association analyses identified 325 genetic loci, of which 80% had not been previously associated with brain metrics. Enrichment analyses implicated neurodevelopmental processes including neurogenesis, neural differentiation, neural migration, neural projection guidance, and axon development, as well as prenatal brain expression especially in stem cells, astrocytes, microglia, and neurons. The multivariate association profiles implicated 31 loci in connectivity between core regions of the left-hemisphere language network. Polygenic scores for psychiatric, neurological, and behavioral traits also showed significant multivariate associations with structural connectivity, each implicating distinct sets of brain regions with trait-relevant functional profiles. This large-scale mapping study revealed common genetic contributions to variation in the structural connectome of the human brain.
  • Siahaan, P., & Wijaya Rajeg, G. P. (2023). Multimodal language use in Indonesian: Recurrent gestures associated with negation. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527196.

    Abstract

    This paper presents research findings on manual gestures associated with negation in Indonesian, utilizing data sourced from talk shows available on YouTube. The study reveals that Indonesian speakers employ six recurrent negation gestures, which have been observed in various languages worldwide. This suggests that gestures exhibiting a stable form-meaning relationship and recurring frequently in relation to negation are prevalent around the globe, although their distribution may differ across cultures and languages. Furthermore, the paper demonstrates that negation gestures are not strictly tied to verbal negation. Overall, the aim of this paper is to contribute to a deeper understanding of the conventional usage and cross-linguistic distribution of recurrent gestures.
  • Silva, S., Inácio, F., Rocha e Sousa, D., Gaspar, N., Folia, V., & Petersson, K. M. (2023). Formal language hierarchy reflects different levels of cognitive complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(4), 642-660. doi:10.1037/xlm0001182.

    Abstract

    Formal language hierarchy describes levels of increasing syntactic complexity (adjacent dependencies, nonadjacent nested, nonadjacent crossed), whose translation into a hierarchy of cognitive complexity remains under debate. The cognitive foundations of formal language hierarchy have been contradicted by two types of evidence: First, adjacent dependencies are not easier to learn compared to nonadjacent; second, crossed nonadjacent dependencies may be easier than nested. However, studies providing these findings may have involved confounds: Repetition monitoring strategies may have accounted for participants’ high performance in nonadjacent dependencies, and linguistic experience may have accounted for the advantage of crossed dependencies. We conducted two artificial grammar learning experiments where we addressed these confounds by manipulating reliance on repetition monitoring and by testing participants inexperienced with crossed dependencies. Results showed relevant differences in learning adjacent versus nonadjacent dependencies and advantages of nested over crossed, suggesting that formal language hierarchy may indeed translate into a hierarchy of cognitive complexity.
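
    To make the three dependency types in the formal language hierarchy concrete, the sketch below generates strings with adjacent, nested, and crossed dependencies over paired categories. The symbols are arbitrary placeholders, not the stimuli used in the experiments above.

    # Illustrative generator for the three dependency types discussed above.
    # Each 'a' element must be paired with its matching 'b'; only the order in
    # which the dependencies are closed differs between structures.
    def adjacent(pairs):
        # a1 b1 a2 b2 ... : each dependency is closed immediately
        return [tok for a, b in pairs for tok in (a, b)]

    def nested(pairs):
        # a1 a2 ... an bn ... b2 b1 : dependencies close in reverse order
        return [a for a, _ in pairs] + [b for _, b in reversed(pairs)]

    def crossed(pairs):
        # a1 a2 ... an b1 b2 ... bn : dependencies close in the same order
        return [a for a, _ in pairs] + [b for _, b in pairs]

    pairs = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]
    print(adjacent(pairs))  # ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']
    print(nested(pairs))    # ['a1', 'a2', 'a3', 'b3', 'b2', 'b1']
    print(crossed(pairs))   # ['a1', 'a2', 'a3', 'b1', 'b2', 'b3']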
  • Skirgård, H., Haynie, H. J., Blasi, D. E., Hammarström, H., Collins, J., Latarche, J. J., Lesage, J., Weber, T., Witzlack-Makarevich, A., Passmore, S., Chira, A., Maurits, L., Dinnage, R., Dunn, M., Reesink, G., Singer, R., Bowern, C., Epps, P. L., Hill, J., Vesakoski, O. and 85 moreSkirgård, H., Haynie, H. J., Blasi, D. E., Hammarström, H., Collins, J., Latarche, J. J., Lesage, J., Weber, T., Witzlack-Makarevich, A., Passmore, S., Chira, A., Maurits, L., Dinnage, R., Dunn, M., Reesink, G., Singer, R., Bowern, C., Epps, P. L., Hill, J., Vesakoski, O., Robbeets, M., Abbas, N. K., Auer, D., Bakker, N. A., Barbos, G., Borges, R. D., Danielsen, S., Dorenbusch, L., Dorn, E., Elliott, J., Falcone, G., Fischer, J., Ghanggo Ate, Y., Gibson, H., Göbel, H.-P., Goodall, J. A., Gruner, V., Harvey, A., Hayes, R., Heer, L., Herrera Miranda, R. E., Hübler, N., Huntington-Rainey, B. H., Ivani, J. K., Johns, M., Just, E., Kashima, E., Kipf, C., Klingenberg, J. V., König, N., Koti, A., Kowalik, R. G. A., Krasnoukhova, O., Lindvall, N. L. M., Lorenzen, M., Lutzenberger, H., Martins, T. R., Mata German, C., Van der Meer, S., Montoya Samamé, J., Müller, M., Muradoglu, S., Neely, K., Nickel, J., Norvik, M., Oluoch, C. A., Peacock, J., Pearey, I. O., Peck, N., Petit, S., Pieper, S., Poblete, M., Prestipino, D., Raabe, L., Raja, A., Reimringer, J., Rey, S. C., Rizaew, J., Ruppert, E., Salmon, K. K., Sammet, J., Schembri, R., Schlabbach, L., Schmidt, F. W., Skilton, A., Smith, W. D., De Sousa, H., Sverredal, K., Valle, D., Vera, J., Voß, J., Witte, T., Wu, H., Yam, S., Ye, J., Yong, M., Yuditha, T., Zariquiey, R., Forkel, R., Evans, N., Levinson, S. C., Haspelmath, M., Greenhill, S. J., Atkinson, Q., & Gray, R. D. (2023). Grambank reveals the importance of genealogical constraints on linguistic diversity and highlights the impact of language loss. Science Advances, 9(16): eadg6175. doi:10.1126/sciadv.adg6175.

    Abstract

    While global patterns of human genetic diversity are increasingly well characterized, the diversity of human languages remains less systematically described. Here, we outline the Grambank database. With over 400,000 data points and 2400 languages, Grambank is the largest comparative grammatical database available. The comprehensiveness of Grambank allows us to quantify the relative effects of genealogical inheritance and geographic proximity on the structural diversity of the world’s languages, evaluate constraints on linguistic diversity, and identify the world’s most unusual languages. An analysis of the consequences of language loss reveals that the reduction in diversity will be strikingly uneven across the major linguistic regions of the world. Without sustained efforts to document and revitalize endangered languages, our linguistic window into human history, cognition, and culture will be seriously fragmented.
  • Slaats, S., Weissbart, H., Schoffelen, J.-M., Meyer, A. S., & Martin, A. E. (2023). Delta-band neural responses to individual words are modulated by sentence processing. The Journal of Neuroscience, 43(26), 4867-4883. doi:10.1523/JNEUROSCI.0964-22.2023.

    Abstract

    To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step towards understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency) from responses to sensory and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition around 100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study speak to how the neural representation of words is affected by structural context, and as such provide insight into how the brain instantiates compositionality in language.
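
    As a sketch of the temporal response function approach mentioned above: a TRF can be estimated by time-lagged, ridge-regularized regression of the neural signal on a stimulus feature (for example, word onsets weighted by word frequency). The code below is a generic illustration with made-up data and dimensions, not the authors' analysis pipeline.

    # Generic temporal response function (TRF) estimation via ridge regression.
    import numpy as np

    def estimate_trf(stimulus, response, fs, tmin=-0.1, tmax=0.6, alpha=1.0):
        """Return lag times (s) and TRF weights for one feature and one channel."""
        lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
        # Design matrix: one column per time-lagged copy of the stimulus feature
        # (edge effects are ignored in this sketch).
        X = np.column_stack([np.roll(stimulus, lag) for lag in lags])
        XtX = X.T @ X + alpha * np.eye(len(lags))   # ridge-regularized covariance
        weights = np.linalg.solve(XtX, X.T @ response)
        return lags / fs, weights

    # Example with random data (120 Hz sampling rate, 60 s of signal):
    fs = 120
    rng = np.random.default_rng(0)
    stim = rng.random(fs * 60)
    resp = rng.standard_normal(fs * 60)
    times, trf = estimate_trf(stim, resp, fs)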
  • Slim, M. S., & Hartsuiker, R. J. (2023). Moving visual world experiments online? A web-based replication of Dijkgraaf, Hartsuiker, and Duyck (2017) using PCIbex and WebGazer.js. Behavior Research Methods, 55, 3786-3804. doi:10.3758/s13428-022-01989-z.

    Abstract

    The visual world paradigm is one of the most influential paradigms to study real-time language processing. The present study tested whether visual world studies can be moved online, using PCIbex software (Zehr & Schwarz, 2018) and the WebGazer.js algorithm (Papoutsaki et al., 2016) to collect eye-movement data. Experiment 1 was a fixation task in which the participants looked at a fixation cross in multiple positions on the computer screen. Experiment 2 was a web-based replication of a visual world experiment by Dijkgraaf et al. (2017). Firstly, both experiments revealed that the spatial accuracy of the data allowed us to distinguish looks across the four quadrants of the computer screen. This suggests that the spatial resolution of WebGazer.js is fine-grained enough for most visual world experiments (which typically involve a two-by-two quadrant-based set-up of the visual display). Secondly, both experiments revealed a delay of roughly 300 ms in the time course of the eye movements, possibly caused by the internal processing speed of the browser or WebGazer.js. This delay can be problematic in studying questions that require a fine-grained temporal resolution and requires further investigation.
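
    The quadrant-based analysis described above can be illustrated with a few lines of code: each gaze sample is assigned to one of the four quadrants of the display, and the reported delay of roughly 300 ms can in principle be compensated by shifting sample timestamps. This is a generic illustration under those assumptions, not the scripts used in the study.

    # Assign web-based gaze samples (x, y in pixels, origin at top-left) to the
    # four quadrants of the display, and optionally correct for an assumed
    # constant tracking delay.
    def quadrant(x, y, screen_w, screen_h):
        horizontal = "left" if x < screen_w / 2 else "right"
        vertical = "top" if y < screen_h / 2 else "bottom"
        return f"{vertical}-{horizontal}"

    def correct_delay(samples, delay_ms=300):
        """Shift sample timestamps back by an assumed constant tracking delay."""
        return [(t - delay_ms, x, y) for (t, x, y) in samples]

    # samples: list of (timestamp_ms, x, y) tuples, e.g. exported from a
    # browser-based eye-tracking session (hypothetical values below).
    samples = [(850, 312, 210), (1200, 1410, 795)]
    for t, x, y in correct_delay(samples):
        print(t, quadrant(x, y, screen_w=1920, screen_h=1080))
    # -> 550 top-left
    # -> 900 bottom-right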
  • Slim, M. S., Lauwers, P., & Hartsuiker, R. J. (2023). How abstract are logical representations? The role of verb semantics in representing quantifier scope. Glossa Psycholinguistics, 2(1): 9. doi:10.5070/G6011175.

    Abstract

    Language comprehension involves the derivation of the meaning of sentences by combining the meanings of their parts. In some cases, this can lead to ambiguity. A sentence like Every hiker climbed a hill allows two logical representations: One that specifies that every hiker climbed a different hill and one that specifies that every hiker climbed the same hill. The interpretations of such sentences can be primed: Exposure to a particular reading increases the likelihood that the same reading will be assigned to a subsequent similar sentence. Feiman and Snedeker (2016) observed that such priming is not modulated by overlap of the verb between prime and target. This indicates that mental logical representations specify the compositional structure of the sentence meaning without conceptual meaning content. We conducted a close replication of Feiman and Snedeker’s experiment in Dutch and found no verb-independent priming. Moreover, a comparison with a previous, within-verb priming experiment showed an interaction, suggesting stronger verb-specific than abstract priming. A power analysis revealed that both Feiman and Snedeker’s experiment and our Experiment 1 were underpowered. Therefore, we replicated our Experiment 1, using the sample size guidelines provided by our power analysis. This experiment again showed that priming was stronger if a prime-target pair contained the same verb. Together, our experiments show that logical representation priming is enhanced if the prime and target sentence contain the same verb. This suggests that logical representations specify compositional structure and meaning features in an integrated manner.
  • Snijders Blok, L., Verseput, J., Rots, D., Venselaar, H., Innes, A. M., Stumpel, C., Õunap, K., Reinson, K., Seaby, E. G., McKee, S., Burton, B., Kim, K., Van Hagen, J. M., Waisfisz, Q., Joset, P., Steindl, K., Rauch, A., Li, D., Zackai, E. H., Sheppard, S. E., Keena, B., Hakonarson, H., Roos, A., Kohlschmidt, N., Cereda, A., Iascone, M., Rebessi, E., Kernohan, K. D., Campeau, P. M., Millan, F., Taylor, J. A., Lochmüller, H., Higgs, M. R., Goula, A., Bernhard, B., Velasco, D. J., Schmanski, A. A., Stark, Z., Gallacher, L., Pais, L., Marcogliese, P. C., Yamamoto, S., Raun, N., Jakub, T. E., Kramer, J. M., Den Hoed, J., Fisher, S. E., Brunner, H. G., & Kleefstra, T. (2023). A clustering of heterozygous missense variants in the crucial chromatin modifier WDR5 defines a new neurodevelopmental disorder. Human Genetics and Genomics Advances, 4(1): 100157. doi:10.1016/j.xhgg.2022.100157.

    Abstract

    WDR5 is a broadly studied, highly conserved key protein involved in a wide array of biological functions. Among these functions, WDR5 is a part of several protein complexes that affect gene regulation via post-translational modification of histones. We collected data from 11 unrelated individuals with six different rare de novo germline missense variants in WDR5; one identical variant was found in five individuals, and another variant in two individuals. All individuals had neurodevelopmental disorders including speech/language delays (N=11), intellectual disability (N=9), epilepsy (N=7) and autism spectrum disorder (N=4). Additional phenotypic features included abnormal growth parameters (N=7), heart anomalies (N=2) and hearing loss (N=2). Three-dimensional protein structures indicate that all the residues affected by these variants are located at the surface of one side of the WDR5 protein. It is predicted that five out of the six amino acid substitutions disrupt interactions of WDR5 with RbBP5 and/or KMT2A/C, as part of the COMPASS (complex proteins associated with Set1) family complexes. Our experimental approaches in Drosophila melanogaster and human cell lines show normal protein expression, localization and protein-protein interactions for all tested variants. These results, together with the clustering of variants in a specific region of WDR5 and the absence of truncating variants so far, suggest that dominant-negative or gain-of-function mechanisms might be at play. All in all, we define a neurodevelopmental disorder associated with missense variants in WDR5 and a broad range of features. This finding highlights the important role of genes encoding COMPASS family proteins in neurodevelopmental disorders.
  • Söderström, P., & Cutler, A. (2023). Early neuro-electric indication of lexical match in English spoken-word recognition. PLOS ONE, 18(5): e0285286. doi:10.1371/journal.pone.0285286.

    Abstract

    We investigated early electrophysiological responses to spoken English words embedded in neutral sentence frames, using a lexical decision paradigm. As words unfold in time, similar-sounding lexical items compete for recognition within 200 milliseconds after word onset. A small number of studies have previously investigated event-related potentials in this time window in English and French, with results differing in direction of effects as well as component scalp distribution. Investigations of spoken-word recognition in Swedish have reported an early left-frontally distributed event-related potential that increases in amplitude as a function of the probability of a successful lexical match as the word unfolds. Results from the present study indicate that the same process may occur in English: we propose that increased certainty of a ‘word’ response in a lexical decision task is reflected in the amplitude of an early left-anterior brain potential beginning around 150 milliseconds after word onset. This in turn is proposed to be connected to the probabilistically driven activation of possible upcoming word forms.

    Additional information

    The datasets are available here
  • Soheili-Nezhad, S., Sprooten, E., Tendolkar, I., & Medici, M. (2023). Exploring the genetic link between thyroid dysfunction and common psychiatric disorders: A specific hormonal or a general autoimmune comorbidity. Thyroid, 33(2), 159-168. doi:10.1089/thy.2022.0304.

    Abstract

    Background: The hypothalamus-pituitary-thyroid axis coordinates brain development and postdevelopmental function. Thyroid hormone (TH) variations, even within the normal range, have been associated with the risk of developing common psychiatric disorders, although the underlying mechanisms remain poorly understood.

    Methods: To get new insight into the potentially shared mechanisms underlying thyroid dysfunction and psychiatric disorders, we performed a comprehensive analysis of multiple phenotypic and genotypic databases. We investigated the relationship of thyroid disorders with depression, bipolar disorder (BIP), and anxiety disorders (ANXs) in 497,726 subjects from U.K. Biobank. We subsequently investigated genetic correlations between thyroid disorders, thyrotropin (TSH), and free thyroxine (fT4) levels, with the genome-wide factors that predispose to psychiatric disorders. Finally, the observed global genetic correlations were furthermore pinpointed to specific local genomic regions.

    Results: Hypothyroidism was positively associated with an increased risk of major depressive disorder (MDD; OR = 1.31, p = 5.29 × 10⁻⁸⁹), BIP (OR = 1.55, p = 0.0038), and ANX (OR = 1.16, p = 6.22 × 10⁻⁸). Hyperthyroidism was associated with MDD (OR = 1.11, p = 0.0034) and ANX (OR = 1.34, p = 5.99 × 10⁻⁶). Genetically, strong coheritability was observed between thyroid disease and both major depressive (rg = 0.17, p = 2.7 × 10⁻⁴) and ANXs (rg = 0.17, p = 6.7 × 10⁻⁶). This genetic correlation was particularly strong at the major histocompatibility complex locus on chromosome 6 (p < 10⁻⁵), but further analysis showed that other parts of the genome also contributed to this global effect. Importantly, neither TSH nor fT4 levels were genetically correlated with mood disorders.

    Conclusions: Our findings highlight an underlying association between autoimmune hypothyroidism and mood disorders, which is not mediated through THs and in which autoimmunity plays a prominent role. While these findings could shed new light on the potential ineffectiveness of treating (minor) variations in thyroid function in psychiatric disorders, further research is needed to identify the exact underlying molecular mechanisms.

    Additional information

    supplementary table S1
  • Sollis, E., Den Hoed, J., Quevedo, M., Estruch, S. B., Vino, A., Dekkers, D. H. W., Demmers, J. A. A., Poot, R., Derizioti, P., & Fisher, S. E. (2023). Characterization of the TBR1 interactome: Variants associated with neurodevelopmental disorders disrupt novel protein interactions. Human Molecular Genetics, 32(9): ddac311, pp. 1497-1510. doi:10.1093/hmg/ddac311.

    Abstract

    TBR1 is a neuron-specific transcription factor involved in brain development and implicated in a neurodevelopmental disorder (NDD) combining features of autism spectrum disorder (ASD), intellectual disability (ID) and speech delay. TBR1 has been previously shown to interact with a small number of transcription factors and co-factors also involved in NDDs (including CASK, FOXP1/2/4 and BCL11A), suggesting that the wider TBR1 interactome may have a significant bearing on normal and abnormal brain development. Here we have identified approximately 250 putative TBR1-interaction partners by affinity purification coupled to mass spectrometry. As well as known TBR1-interactors such as CASK, the identified partners include transcription factors and chromatin modifiers, along with ASD- and ID-related proteins. Five interaction candidates were independently validated using bioluminescence resonance energy transfer assays. We went on to test the interaction of these candidates with TBR1 protein variants implicated in cases of NDD. The assays uncovered disturbed interactions for NDD-associated variants and identified two distinct protein-binding domains of TBR1 that have essential roles in protein–protein interaction.
  • Stärk, K., Kidd, E., & Frost, R. L. A. (2023). Close encounters of the word kind: Attested distributional information boosts statistical learning. Language Learning, 73(2), 341-373. doi:10.1111/lang.12523.

    Abstract

    Statistical learning, the ability to extract regularities from input (e.g., in language), is likely supported by learners’ prior expectations about how component units co-occur. In this study, we investigated how adults’ prior experience with sublexical regularities in their native language influences performance on an empirical language learning task. Forty German-speaking adults completed a speech repetition task in which they repeated eight-syllable sequences from two experimental languages: one containing disyllabic words comprised of frequently occurring German syllable transitions (naturalistic words) and the other containing words made from unattested syllable transitions (non-naturalistic words). The participants demonstrated learning from both naturalistic and non-naturalistic stimuli. However, learning was superior for the naturalistic sequences, indicating that the participants had used their existing distributional knowledge of German to extract the naturalistic words faster and more accurately than the non-naturalistic words. This finding supports theories of statistical learning as a form of chunking, whereby frequently co-occurring units become entrenched in long-term memory.

    Additional information

    accessible summary appendix S1
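
    The notion of frequently occurring syllable transitions in the abstract above can be illustrated by counting syllable bigrams in a syllabified word list. The toy data below are invented for illustration and are unrelated to the German corpus counts used in the study.

    # Count syllable-to-syllable transitions in a syllabified word list.
    from collections import Counter

    syllabified_words = [
        ["ba", "lu"], ["ba", "lu"], ["ki", "ro"], ["ba", "ki"], ["lu", "ro"],
    ]

    transitions = Counter(
        (s1, s2)
        for word in syllabified_words
        for s1, s2 in zip(word, word[1:])
    )

    print(transitions.most_common(2))
    # [(('ba', 'lu'), 2), (('ki', 'ro'), 1)] -> ('ba', 'lu') is the most frequent
    # (most "naturalistic") transition in this toy corpus.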
  • Stern, G. (2023). On embodied use of recognitional demonstratives. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527204.

    Abstract

    This study focuses on embodied uses of recognitional demonstratives. While multimodal conversation analytic studies have shown how gesture and speech interact in the elaboration of exophoric references, little attention has been given to the multimodal configuration of other types of referential actions. Based on a video-recorded corpus of professional meetings held in French, this qualitative study shows that a subtype of deictic references, namely recognitional references, is frequently associated with iconic gestures, thus challenging the traditional distinction between exophoric and endophoric uses of deixis.
  • Stivers, T., Rossi, G., & Chalfoun, A. (2023). Ambiguities in action ascription. Social Forces, 101(3), 1552-1579. doi:10.1093/sf/soac021.

    Abstract

    In everyday interactions with one another, speakers not only say things but also do things like offer, complain, reject, and compliment. Through observation, it is possible to see that much of the time people unproblematically understand what others are doing. Research on conversation has further documented how speakers’ word choice, prosody, grammar, and gesture all help others to recognize what actions they are performing. In this study, we rely on spontaneous naturally occurring conversational data where people have trouble making their actions understood to examine what leads to ambiguous actions, bringing together prior research and identifying recurrent types of ambiguity that hinge on different dimensions of social action. We then discuss the range of costs and benefits for social actors when actions are clear versus ambiguous. Finally, we offer a conceptual model of how, at a microlevel, action ascription is done. Actions in interaction are building blocks for social relations; at each turn, an action can strengthen or strain the bond between two individuals. Thus, a unified theory of action ascription at a microlevel is an essential component for broader theories of social action and of how social actions produce, maintain, and revise the social world.
  • Tamaoka, K., Sakai, H., Miyaoka, Y., Ono, H., Fukuda, M., Wu, Y., & Verdonschot, R. G. (2023). Sentential inference bridging between lexical/grammatical knowledge and text comprehension among native Chinese speakers learning Japanese. PLoS One, 18(4): e0284331. doi:10.1371/journal.pone.0284331.

    Abstract

    The current study explored the role of sentential inference in connecting lexical/grammatical knowledge and overall text comprehension in foreign language learning. Using structural equation modeling (SEM), causal relationships were examined between four latent variables: lexical knowledge, grammatical knowledge, sentential inference, and text comprehension. The study analyzed 281 Chinese university students learning Japanese as a second language and compared two causal models: (1) the partially-mediated model, which suggests that lexical knowledge, grammatical knowledge, and sentential inference concurrently influence text comprehension, and (2) the wholly-mediated model, which posits that both lexical and grammatical knowledge impact sentential inference, which then further affects text comprehension. The SEM comparison analysis supported the wholly-mediated model, showing sequential causal relationships from lexical knowledge to sentential inference and then to text comprehension, without significant contribution from grammatical knowledge. The results indicate that sentential inference serves as a crucial bridge between lexical knowledge and text comprehension.
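
    The comparison of the two causal models described above can be sketched with lavaan-style model syntax, as used for example by the semopy package in Python. The variable names are placeholders for the latent constructs, measurement (indicator) definitions are omitted, and the snippet is a schematic illustration rather than the authors' analysis.

    # Schematic comparison of the two structural equation models described above,
    # written in lavaan-style syntax (here via the semopy package). Variable
    # names are placeholders; indicator definitions are omitted for brevity.
    import semopy

    wholly_mediated = """
        inference ~ lexical + grammatical
        comprehension ~ inference
    """

    partially_mediated = """
        inference ~ lexical + grammatical
        comprehension ~ inference + lexical + grammatical
    """

    # df would be a pandas DataFrame with one column per variable (assumed here).
    # for desc in (wholly_mediated, partially_mediated):
    #     model = semopy.Model(desc)
    #     model.fit(df)
    #     print(semopy.calc_stats(model))  # compare fit indices (CFI, RMSEA, ...)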
  • Tamaoka, K., Zhang, J., Koizumi, M., & Verdonschot, R. G. (2023). Phonological encoding in Tongan: An experimental investigation. Quarterly Journal of Experimental Psychology, 76(10), 2197-2430. doi:10.1177/17470218221138770.

    Abstract

    This study is the first to report chronometric evidence on Tongan language production. It has been speculated that the mora plays an important role during Tongan phonological encoding. A mora follows the (C)V form, so /a/ and /ka/ (but not /k/) denote a mora in Tongan. Using a picture-word naming paradigm, Tongan native speakers named pictures containing superimposed non-word distractors. This task has been used before in Japanese, Korean, and Vietnamese to investigate the initially selected unit during phonological encoding (IPU). Compared to control distractors, both onset and mora overlapping distractors resulted in faster naming latencies. Several alternative explanations for the pattern of results - proficiency in English, knowledge of Latin script, and downstream effects - are discussed. However, we conclude that Tongan phonological encoding likely natively uses the phoneme, and not the mora, as the IPU.

    Additional information

    supplemental material
