Publications

Displaying 101 - 200 of 505
  • Duengen, D., Polotzek, M., O'Sullivan, E., & Ravignani, A. (2024). Anecdotal observations of socially learned vocalizations in harbor seals. Animal Behavior and Cognition, 11, 393-403. doi:10.26451/abc.11.03.04.2024.

    Abstract

    Harbor seals (Phoca vitulina) are more solitary than many other pinnipeds. Yet, they are capable of vocal learning, a form of social learning. Most extant literature examines social animals when investigating social learning, despite sociality not being a prerequisite. Here, we report two formerly silent harbor seals who initiated vocalizations, after having repeatedly observed a conspecific receiving food rewards for vocalizing. Our observations suggest both social and vocal learning in a group of captive harbor seals, a species that lives semi-solitarily in the wild. We propose that, in this case, social learning acted as a shortcut to acquiring food rewards compared to the comparatively costly asocial learning.
  • Duffield, N., & Matsuo, A. (2003). Factoring out the parallelism effect in ellipsis: An interactional approach? In J. Chilar, A. Franklin, D. Keizer, & I. Kimbara (Eds.), Proceedings of the 39th Annual Meeting of the Chicago Linguistic Society (CLS) (pp. 591-603). Chicago: Chicago Linguistic Society.

    Abstract

    Traditionally, there have been three standard assumptions made about the Parallelism Effect on VP-ellipsis, namely that the effect is categorical, that it applies asymmetrically and that it is uniquely due to syntactic factors. Based on the results of a series of experiments involving online and offline tasks, it will be argued that the Parallelism Effect is instead noncategorical and interactional. The factors investigated include construction type, conceptual and morpho-syntactic recoverability, finiteness and anaphor type (to test VP-anaphora). The results show that parallelism is gradient rather than categorical, affects both VP-ellipsis and anaphora, and is influenced by both structural and non-structural factors.
  • Düngen, D., Jadoul, Y., & Ravignani, A. (2024). Vocal usage learning and vocal comprehension learning in harbor seals. BMC Neuroscience, 25: 48. doi:10.1186/s12868-024-00899-4.

    Abstract

    Background

    Which mammals show vocal learning abilities, e.g., can learn new sounds, or learn to use sounds in new contexts? Vocal usage and comprehension learning are submodules of vocal learning. Specifically, vocal usage learning is the ability to learn to use a vocalization in a new context; vocal comprehension learning is the ability to comprehend a vocalization in a new context. Among mammals, harbor seals (Phoca vitulina) are good candidates to investigate vocal learning. Here, we test whether harbor seals are capable of vocal usage and comprehension learning.

    Results

    We trained two harbor seals to (i) switch contexts from a visual to an auditory cue. In particular, the seals first produced two vocalization types in response to two hand signs; they then transitioned to producing these two vocalization types upon the presentation of two distinct sets of playbacks of their own vocalizations. We then (ii) exposed the seals to a combination of trained and novel vocalization stimuli. In a final experiment, (iii) we broadcast only novel vocalizations of the two vocalization types to test whether seals could generalize from the trained set of stimuli to only novel items of a given vocal category. Both seals learned all tasks and took ≤ 16 sessions to succeed across all experiments. In particular, the seals showed contextual learning through switching the context from former visual to novel auditory cues, vocal matching and generalization. Finally, by responding to the played-back vocalizations with distinct vocalizations, the animals showed vocal comprehension learning.

    Conclusions

    It has been suggested that harbor seals are vocal learners; however, to date, these observations had not been confirmed in controlled experiments. Here, through three experiments, we could show that harbor seals are capable of both vocal usage and comprehension learning.
  • Dunn, M. (2003). Pioneers of Island Melanesia project. Oceania Newsletter, 30/31, 1-3.
  • Dunn, M., Levinson, S. C., Lindström, E., Reesink, G., & Terrill, A. (2003). Island Melanesia elicitation materials. Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.885547.

    Abstract

    The Island Melanesia project was initiated to collect data on the little-known Papuan languages of Island Melanesia, and to explore the origins of and relationships between these languages. The project materials from the 2003 field season focus on language related to cultural domains (e.g., material culture) and on targeted grammatical description. Five tasks are included: Proto-Oceanic lexicon, Grammatical questionnaire and lexicon, Kinship questionnaire, Domains of likely pre-Austronesian terminology, and Botanical collection questionnaire.
  • Eekhof, L. S. (2024). Reading the mind: The relationship between social cognition and narrative processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Eekhof, L. S., & Mar, R. A. (2024). Does reading about fictional minds make us more curious about real ones? Language and Cognition, 16(1), 176-196. doi:10.1017/langcog.2023.30.

    Abstract

    Although there is a large body of research assessing whether exposure to narratives boosts social cognition immediately afterward, not much research has investigated the underlying mechanism of this putative effect. This experiment investigates the possibility that reading a narrative increases social curiosity directly afterward, which might explain the short-term boosts in social cognition reported by some others. We developed a novel measure of state social curiosity and collected data from participants (N = 222) who were randomly assigned to read an excerpt of narrative fiction or expository nonfiction. Contrary to our expectations, we found that those who read a narrative exhibited less social curiosity afterward than those who read an expository text. This result was not moderated by trait social curiosity. An exploratory analysis uncovered that the degree to which texts present readers with social targets predicted less social curiosity. Our experiment demonstrates that reading narratives, or possibly texts with social content in general, may engage and fatigue social-cognitive abilities, causing a temporary decrease in social curiosity. Such texts might also temporarily satisfy the need for social connection, temporarily reducing social curiosity. Both accounts are in line with theories describing how narratives result in better social cognition over the long term.
  • Eising, E., Vino, A., Mabie, H. L., Campbell, T. F., Shriberg, L. D., & Fisher, S. E. (2024). Genome sequencing of idiopathic speech delay. Human Mutation, 2024: 9692863. doi:10.1155/2024/9692863.

    Abstract

    Genetic investigations of people with speech and language disorders can provide windows into key aspects of human biology. Most genomic research into impaired speech development has so far focused on childhood apraxia of speech (CAS), a rare neurodevelopmental disorder characterized by difficulties with coordinating rapid fine motor sequences that underlie proficient speech. In 2001, pathogenic variants of FOXP2 provided the first molecular genetic accounts of CAS aetiology. Since then, disruptions in several other genes have been implicated in CAS, with a substantial proportion of cases being explained by high-penetrance variants. However, the genetic architecture underlying other speech-related disorders remains less well understood. Thus, in the present study, we used systematic DNA sequencing methods to investigate idiopathic speech delay, as characterized by delayed speech development in the absence of a motor speech diagnosis (such as CAS), a language/reading disorder, or intellectual disability. We performed genome sequencing in a cohort of 23 children with a rigorous diagnosis of idiopathic speech delay. For roughly half of the sample (ten probands), sufficient DNA was also available for genome sequencing in both parents, allowing discovery of de novo variants. In the thirteen singleton probands, we focused on identifying loss-of-function and likely damaging missense variants in genes intolerant to such mutations. We found that one speech delay proband carried a pathogenic frameshift deletion in SETD1A, a gene previously implicated in a broader variable monogenic syndrome characterized by global developmental problems including delayed speech and/or language development, mild intellectual disability, facial dysmorphisms, and behavioural and psychiatric symptoms. Of note, pathogenic SETD1A variants have been independently reported in children with CAS in two separate studies. 
In other probands in our speech delay cohort, likely pathogenic missense variants were identified affecting highly conserved amino acids in key functional domains of SPTBN1 and ARF3. Overall, this study expands the phenotype spectrum associated with pathogenic SETD1A variants, to also include idiopathic speech delay without CAS or intellectual disability, and suggests additional novel potential candidate genes that may harbour high-penetrance variants that can disrupt speech development.

    Additional information

    supplemental table
  • Ekerdt, C., Menks, W. M., Fernández, G., McQueen, J. M., Takashima, A., & Janzen, G. (2024). White matter connectivity linked to novel word learning in children. Brain Structure & Function, 229, 2461-2477. doi:10.1007/s00429-024-02857-6.

    Abstract

    Children and adults are excellent word learners. Increasing evidence suggests that the neural mechanisms that allow us to learn words change with age. In a recent fMRI study from our group, several brain regions exhibited age-related differences when accessing newly learned words in a second language (L2; Takashima et al. Dev Cogn Neurosci 37, 2019). Namely, while the Teen group (aged 14–16 years) activated more left frontal and parietal regions, the Young group (aged 8–10 years) activated right frontal and parietal regions. In the current study we analyzed the structural connectivity data from the aforementioned study, examining the white matter connectivity of the regions that showed age-related functional activation differences. Age group differences in streamline density as well as correlations with L2 word learning success and their interaction were examined. The Teen group showed stronger connectivity than the Young group in the right arcuate fasciculus (AF). Furthermore, white matter connectivity and memory for L2 words across the two age groups correlated in the left AF and the right anterior thalamic radiation (ATR) such that higher connectivity in the left AF and lower connectivity in the right ATR was related to better memory for L2 words. Additionally, connectivity in the area of the right AF that exhibited age-related differences predicted word learning success. The finding that across the two age groups, stronger connectivity is related to better memory for words lends further support to the hypothesis that the prolonged maturation of the prefrontal cortex, here in the form of structural connectivity, plays an important role in the development of memory.

    Additional information

    supplementary information
  • Enfield, N. J. (2003). Producing and editing diagrams using co-speech gesture: Spatializing non-spatial relations in explanations of kinship in Laos. Journal of Linguistic Anthropology, 13(1), 7-50. doi:10.1525/jlin.2003.13.1.7.

    Abstract

    This article presents a description of two sequences of talk by urban speakers of Lao (a southwestern Tai language spoken in Laos) in which co-speech gesture plays a central role in explanations of kinship relations and terminology. The speakers spontaneously use hand gestures and gaze to spatially diagram relationships that have no inherent spatial structure. The descriptive sections of the article are prefaced by a discussion of the semiotic complexity of illustrative gestures and gesture diagrams. Gestured signals feature iconic, indexical, and symbolic components, usually in combination, as well as using motion and three-dimensional space to convey meaning. Such diagrams show temporal persistence and structural integrity despite having been projected in midair by evanescent signals (i.e., hand movements and directed gaze). Speakers sometimes need or want to revise these spatial representations without destroying their structural integrity. The need to "edit" gesture diagrams involves such techniques as hold-and-drag, hold-and-work-with-free-hand, reassignment-of-old-chunk-to-new-chunk, and move-body-into-new-space.
  • Enfield, N. J. (2003). The definition of WHAT-d'you-call-it: Semantics and pragmatics of 'recognitional deixis'. Journal of Pragmatics, 35(1), 101-117. doi:10.1016/S0378-2166(02)00066-8.

    Abstract

    Words such as what-d'you-call-it raise issues at the heart of the semantics/pragmatics interface. Expressions of this kind are conventionalised and have meanings which, while very general, are explicitly oriented to the interactional nature of the speech context, drawing attention to a speaker's assumption that the listener can figure out what the speaker is referring to. The details of such meanings can account for functional contrast among similar expressions, in a single language as well as cross-linguistically. The English expressions what-d'you-call-it and you-know-what are compared, along with a comparable Lao expression meaning, roughly, ‘that thing’. Proposed definitions of the meanings of these expressions account for their different patterns of use. These definitions include reference to the speech act participants, a point which supports the view that what-d'you-call-it words can be considered deictic. Issues arising from the descriptive section of this paper include the question of how such terms are derived, as well as their degree of conventionality.
  • Enfield, N. J. (2003). “Fish traps” task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (p. 31). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877616.

    Abstract

    This task is designed to elicit virtual 3D ‘models’ created in gesture space using iconic and other representational gestures. This task has been piloted with Lao speakers, where two speakers were asked to explain the meaning of terms referring to different kinds of fish trap mechanisms. The task elicited complex performances involving a range of iconic gestures, and with especially interesting use of (a) the ‘model/diagram’ in gesture space as a virtual object, (b) the non-dominant hand as a prosodic/semiotic anchor, (c) a range of different techniques (indexical and iconic) for evoking meaning with the hand, and (d) the use of nearby objects and parts of the body as semiotic ‘props’.
  • Enfield, N. J. (2003). Demonstratives in space and interaction: Data from Lao speakers and implications for semantic analysis. Language, 79(1), 82-117.

    Abstract

    The semantics of simple (i.e. two-term) systems of demonstratives have in general hitherto been treated as inherently spatial and as marking a symmetrical opposition of distance (‘proximal’ versus ‘distal’), assuming the speaker as a point of origin. More complex systems are known to add further distinctions, such as visibility or elevation, but are assumed to build on basic distinctions of distance. Despite their inherently context-dependent nature, little previous work has based the analysis of demonstratives on evidence of their use in real interactional situations. In this article, video recordings of spontaneous interaction among speakers of Lao (Southwestern Tai, Laos) are examined in an analysis of the two Lao demonstrative determiners nii4 and nan4. A hypothesis of minimal encoded semantics is tested against rich contextual information, and the hypothesis is shown to be consistent with the data. Encoded conventional meanings must be kept distinct from contingent contextual information and context-dependent pragmatic implicatures. Based on examples of the two Lao demonstrative determiners in exophoric uses, the following claims are made. The term nii4 is a semantically general demonstrative, lacking specification of ANY spatial property (such as location or distance). The term nan4 specifies that the referent is ‘not here’ (encoding ‘location’ but NOT ‘distance’). Anchoring the semantic specification in a deictic primitive ‘here’ allows a strictly discrete intensional distinction to be mapped onto an extensional range of endless elasticity. A common ‘proximal’ spatial interpretation for the semantically more general term nii4 arises from the paradigmatic opposition of the two demonstrative determiners. This kind of analysis suggests a reappraisal of our general understanding of the semantics of demonstrative systems universally. To investigate the question in sufficient detail, however, rich contextual data (preferably collected on video) is necessary.
  • Enfield, N. J. (2003). Linguistic epidemiology: Semantics and grammar of language contact in mainland Southeast Asia. London: Routledge Curzon.
  • Enfield, N. J. (Ed.). (2003). Field research manual 2003, part I: Multimodal interaction, space, event representation. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Enfield, N. J., De Ruiter, J. P., Levinson, S. C., & Stivers, T. (2003). Multimodal interaction in your field site: A preliminary investigation. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 10-16). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877638.

    Abstract

    Research on video- and audio-recordings of spontaneous naturally-occurring conversation in English has shown that conversation is a rule-guided, practice-oriented domain that can be investigated for its underlying mechanics or structure. Systematic study could yield something like a grammar for conversation. The goal of this task is to acquire a corpus of video-data, for investigating the underlying structure(s) of interaction cross-linguistically and cross-culturally.
  • Enfield, N. J., & Levinson, S. C. (2003). Interview on kinship. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 64-65). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877629.

    Abstract

    We want to know how people think about their field of kin, on the supposition that it is quasi-spatial. To get some insights here, we need to video a discussion about kinship reckoning, the kinship system, marriage rules and so on, with a view to looking at both the linguistic expressions involved, and the gestures people use to indicate kinship groups and relations. Unlike the task in the 2001 manual, this task is a direct interview method.
  • Enfield, N. J. (2003). Introduction. In N. J. Enfield, Linguistic epidemiology: Semantics and grammar of language contact in mainland Southeast Asia (pp. 2-44). London: Routledge Curzon.
  • Enfield, N. J., & De Ruiter, J. P. (2003). The diff-task: A symmetrical dyadic multimodal interaction task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 17-21). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877635.

    Abstract

    This task is a complement to the questionnaire ‘Multimodal interaction in your field site: a preliminary investigation’. The objective of the task is to obtain high quality video data on structured and symmetrical dyadic multimodal interaction. The features of interaction we are interested in include turn organization in speech and nonverbal behavior, eye-gaze behavior, use of composite signals (i.e. communicative units of speech-combined-with-gesture), and linguistic and other resources for ‘navigating’ interaction (e.g. words like okay, now, well, and um).

    Additional information

    2003_1_The_diff_task_stimuli.zip
  • Enfield, N. J. (2003). Preface and priorities. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (p. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Engelen, M. M., Franken, M.-C.-J.-P., Stipdonk, L. W., Horton, S. E., Jackson, V. E., Reilly, S., Morgan, A. T., Fisher, S. E., Van Dulmen, S., & Eising, E. (2024). The association between stuttering burden and psychosocial aspects of life in adults. Journal of Speech, Language, and Hearing Research, 67(5), 1385-1399. doi:10.1044/2024_JSLHR-23-00562.

    Abstract

    Purpose:
    Stuttering is a speech condition that can have a major impact on a person's quality of life. This descriptive study aimed to identify subgroups of people who stutter (PWS) based on stuttering burden and to investigate differences between these subgroups on psychosocial aspects of life.

    Method:
    The study included 618 adult participants who stutter. They completed a detailed survey examining stuttering symptomatology, impact of stuttering on anxiety, education and employment, experience of stuttering, and levels of depression, anxiety, and stress. A two-step cluster analytic procedure was performed to identify subgroups of PWS, based on self-report of stuttering frequency, severity, affect, and anxiety, four measures that together inform about stuttering burden.

    Results:
    We identified a high- (n = 230) and a low-burden subgroup (n = 372). The high-burden subgroup reported a significantly higher impact of stuttering on education and employment, and higher levels of general depression, anxiety, stress, and overall impact of stuttering. These participants also reported having tried a greater number of different stuttering therapies than those with lower burden.

    Conclusions:
    Our results emphasize the need to be attentive to the diverse experiences and needs of PWS, rather than treating them as a homogeneous group. Our findings also stress the importance of personalized therapeutic strategies for individuals with stuttering, considering all aspects that could influence their stuttering burden. People with high-burden stuttering might, for example, have a higher need for psychological therapy to reduce stuttering-related anxiety. People with less emotional reactions but severe speech distortions may also have a moderate to high burden, but they may have a higher need for speech techniques to communicate with more ease. Future research should give more insights into the therapeutic needs of people highly burdened by their stuttering.
  • Erkelens, M. (2003). The semantic organization of "cut" and "break" in Dutch: A developmental study. Master Thesis, Free University Amsterdam, Amsterdam.
  • Ernestus, M., & Baayen, R. H. (2003). Predicting the unpredictable: The phonological interpretation of neutralized segments in Dutch. Language, 79(1), 5-38.

    Abstract

    Among the most fascinating data for phonology are those showing how speakers incorporate new words and foreign words into their language system, since these data provide cues to the actual principles underlying language. In this article, we address how speakers deal with neutralized obstruents in new words. We formulate four hypotheses and test them on the basis of Dutch word-final obstruents, which are neutral for [voice]. Our experiments show that speakers predict the characteristics of neutralized segments on the basis of phonologically similar morphemes stored in the mental lexicon. This effect of the similar morphemes can be modeled in several ways. We compare five models, among them STOCHASTIC OPTIMALITY THEORY and ANALOGICAL MODELING OF LANGUAGE; all perform approximately equally well, but they differ in their complexity, with analogical modeling of language providing the most economical explanation.
  • Ernestus, M. (2003). The role of phonology and phonetics in Dutch voice assimilation. In J. v. d. Weijer, V. J. v. Heuven, & H. v. d. Hulst (Eds.), The phonological spectrum Volume 1: Segmental structure (pp. 119-144). Amsterdam: John Benjamins.
  • Evans, M. J., Clough, S., Duff, M. C., & Brown‐Schmidt, S. (2024). Temporal organization of narrative recall is present but attenuated in adults with hippocampal amnesia. Hippocampus, 34(8), 438-451. doi:10.1002/hipo.23620.

    Abstract

    Studies of the impact of brain injury on memory processes often focus on the quantity and episodic richness of those recollections. Here, we argue that the organization of one's recollections offers critical insights into the impact of brain injury on functional memory. It is well-established in studies of word list memory that free recall of unrelated words exhibits a clear temporal organization. This temporal contiguity effect refers to the fact that the order in which word lists are recalled reflects the original presentation order. Little is known, however, about the organization of recall for semantically rich materials, nor how recall organization is impacted by hippocampal damage and memory impairment. The present research is the first study, to our knowledge, of temporal organization in semantically rich narratives in three groups: (1) Adults with bilateral hippocampal damage and severe declarative memory impairment, (2) adults with bilateral ventromedial prefrontal cortex (vmPFC) damage and no memory impairment, and (3) demographically matched non-brain-injured comparison participants. We find that although the narrative recall of adults with bilateral hippocampal damage reflected the temporal order in which those narratives were experienced above chance levels, their temporal contiguity effect was significantly attenuated relative to comparison groups. In contrast, individuals with vmPFC damage did not differ from non-brain-injured comparison participants in temporal contiguity. This pattern of group differences yields insights into the cognitive and neural systems that support the use of temporal organization in recall. These data provide evidence that the retrieval of temporal context in narrative recall is hippocampal-dependent, whereas damage to the vmPFC does not impair the temporal organization of narrative recall. 
This evidence of limited but demonstrable organization of memory in participants with hippocampal damage and amnesia speaks to the power of narrative structures in supporting meaningfully organized recall despite memory impairment.

    Additional information

    supporting information
  • Ge, R., Yu, Y., Qi, Y. X., Fan, Y.-n., Chen, S., Gao, C., Haas, S. S., New, F., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Buckner, R., Caseras, X., Crivello, F., Crone, E. A., Erk, S., Fisher, S. E., Franke, B., Glahn, D. C., Dannlowski, U., Grotegerd, D., Gruber, O., Hulshoff Pol, H. E., Schumann, G., Tamnes, C. K., Walter, H., Wierenga, L. M., Jahanshad, N., Thompson, P. M., Frangou, S., & ENIGMA Lifespan Working Group (2024). Normative modelling of brain morphometry across the lifespan with CentileBrain: Algorithm benchmarking and model optimisation. The Lancet Digital Health, 6(3), e211-e221. doi:10.1016/S2589-7500(23)00250-9.

    Abstract

    The value of normative models in research and clinical practice relies on their robustness and a systematic comparison of different modelling algorithms and parameters; however, this has not been done to date. We aimed to identify the optimal approach for normative modelling of brain morphometric data through systematic empirical benchmarking, by quantifying the accuracy of different algorithms and identifying parameters that optimised model performance. We developed this framework with regional morphometric data from 37 407 healthy individuals (53% female and 47% male; aged 3–90 years) from 87 datasets from Europe, Australia, the USA, South Africa, and east Asia following a comparative evaluation of eight algorithms and multiple covariate combinations pertaining to image acquisition and quality, parcellation software versions, global neuroimaging measures, and longitudinal stability. The multivariate fractional polynomial regression (MFPR) emerged as the preferred algorithm, optimised with non-linear polynomials for age and linear effects of global measures as covariates. The MFPR models showed excellent accuracy across the lifespan and within distinct age-bins and longitudinal stability over a 2-year period. The performance of all MFPR models plateaued at sample sizes exceeding 3000 study participants. This model can inform about the biological and behavioural implications of deviations from typical age-related neuroanatomical changes and support future study designs. The model and scripts described here are freely available through CentileBrain.
  • Feller, J. J., Duff, M. C., Clough, S., Jacobson, G. P., Roberts, R. A., & Romero, D. J. (2024). Evidence of peripheral vestibular impairment among adults with chronic moderate–severe traumatic brain injury. American Journal of Audiology, 33, 1118-1134. doi:10.1044/2024_AJA-24-00058.

    Abstract

    Purpose:
    Traumatic brain injury (TBI) is a leading cause of death and disability among adults in the United States. There is evidence to suggest the peripheral vestibular system is vulnerable to damage in individuals with TBI. However, there are limited prospective studies that describe the type and frequency of vestibular impairment in individuals with chronic moderate–severe TBI (> 6 months postinjury).

    Method:
    Cervical and ocular vestibular evoked myogenic potentials (VEMPs) and video head impulse test (vHIT) were used to assess the function of otolith organ and horizontal semicircular canal (hSCC) pathways in adults with chronic moderate–severe TBI and in noninjured comparison (NC) participants. Self-report questionnaires were administered to participants with TBI to determine prevalence of vestibular symptoms and quality of life associated with those symptoms.

    Results:
    Chronic moderate–severe TBI was associated with a greater degree of impairment in otolith organ, rather than hSCC, pathways. About 63% of participants with TBI had abnormal VEMP responses, compared to only ~10% with abnormal vHIT responses. The NC group had significantly less abnormal VEMP responses (~7%), while none of the NC participants had abnormal vHIT responses. As many as 80% of participants with TBI reported vestibular symptoms, and up to 36% reported that these symptoms negatively affected their quality of life.

    Conclusions:
    Adults with TBI reported vestibular symptoms and decreased quality of life related to those symptoms and had objective evidence of peripheral vestibular impairment. Vestibular testing for adults with chronic TBI who report persistent dizziness and imbalance may serve as a guide for treatment and rehabilitation in these individuals.
  • Felser, C., Roberts, L., Marinis, T., & Gross, R. (2003). The processing of ambiguous sentences by first and second language learners of English. Applied Psycholinguistics, 24(3), 453-489.

    Abstract

    This study investigates the way adult second language (L2) learners of English resolve relative clause attachment ambiguities in sentences such as The dean liked the secretary of the professor who was reading a letter. Two groups of advanced L2 learners of English with Greek or German as their first language participated in a set of off-line and on-line tasks. The results indicate that the L2 learners do not process ambiguous sentences of this type in the same way as adult native speakers of English do. Although the learners’ disambiguation preferences were influenced by lexical–semantic properties of the preposition linking the two potential antecedent noun phrases (of vs. with), there was no evidence that they applied any phrase structure–based ambiguity resolution strategies of the kind that have been claimed to influence sentence processing in monolingual adults. The L2 learners’ performance also differs markedly from the results obtained from 6- to 7-year-old monolingual English children in a parallel auditory study, in that the children’s attachment preferences were not affected by the type of preposition at all. We argue that children, monolingual adults, and adult L2 learners differ in the extent to which they are guided by phrase structure and lexical–semantic information during sentence processing.
  • Fisher, S. E., Lai, C. S., & Monaco, A. P. (2003). Deciphering the genetic basis of speech and language disorders. Annual Review of Neuroscience, 26, 57-80. doi:10.1146/annurev.neuro.26.041002.131144.

    Abstract

    A significant number of individuals have unexplained difficulties with acquiring normal speech and language, despite adequate intelligence and environmental stimulation. Although developmental disorders of speech and language are heritable, the genetic basis is likely to involve several, possibly many, different risk factors. Investigations of a unique three-generation family showing monogenic inheritance of speech and language deficits led to the isolation of the first such gene on chromosome 7, which encodes a transcription factor known as FOXP2. Disruption of this gene causes a rare severe speech and language disorder but does not appear to be involved in more common forms of language impairment. Recent genome-wide scans have identified at least four chromosomal regions that may harbor genes influencing the latter, on chromosomes 2, 13, 16, and 19. The molecular genetic approach has potential for dissecting neurological pathways underlying speech and language disorders, but such investigations are only just beginning.
  • Fisher, S. E. (2003). The genetic basis of a severe speech and language disorder. In J. Mallet, & Y. Christen (Eds.), Neurosciences at the postgenomic era (pp. 125-134). Heidelberg: Springer.
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2024). Neurobiological causal models of language processing. Neurobiology of Language, 5(1), 225-247. doi:10.1162/nol_a_00133.

    Abstract

    The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the “machine language” of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
  • Forkel, S. J., & Hagoort, P. (2024). Redefining language networks: Connectivity beyond localised regions. Brain Structure & Function, 229, 2073-2078. doi:10.1007/s00429-024-02859-4.
  • Frances, C. (2024). Good enough processing: What have we learned in the 20 years since Ferreira et al. (2002)? Frontiers in Psychology, 15: 1323700. doi:10.3389/fpsyg.2024.1323700.

    Abstract

    Traditionally, language processing has been thought of in terms of complete processing of the input. In contrast to this, Ferreira and colleagues put forth the idea of good enough processing. The proposal was that during everyday processing, ambiguities remain unresolved, we rely on heuristics instead of full analyses, and we carry out deep processing only if we need to for the task at hand. This idea has gathered substantial traction since its conception. In the current work, I review the papers that have tested the three key claims of good enough processing: ambiguities remain unresolved and underspecified, we use heuristics to parse sentences, and deep processing is only carried out if required by the task. I find mixed evidence for these claims and conclude with an appeal to further refinement of the claims and predictions of the theory.
  • He, J., Frances, C., Creemers, A., & Brehm, L. (2024). Effects of irrelevant unintelligible and intelligible background speech on spoken language production. Quarterly Journal of Experimental Psychology, 77(8), 1745-1769. doi:10.1177/17470218231219971.

    Abstract

    Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top–down attentional engagement.
  • Francks, C., DeLisi, L. E., Fisher, S. E., Laval, S. H., Rue, J. E., Stein, J. F., & Monaco, A. P. (2003). Confirmatory evidence for linkage of relative hand skill to 2p12-q11 [Letter to the editor]. American Journal of Human Genetics, 72(2), 499-502. doi:10.1086/367548.
  • Francks, C., Fisher, S. E., Marlow, A. J., MacPhie, I. L., Taylor, K. E., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2003). Familial and genetic effects on motor coordination, laterality, and reading-related cognition. American Journal of Psychiatry, 160(11), 1970-1977. doi:10.1176/appi.ajp.160.11.1970.

    Abstract

    OBJECTIVE: Recent research has provided evidence for a genetically mediated association between language or reading-related cognitive deficits and impaired motor coordination. Other studies have identified relationships between lateralization of hand skill and cognitive abilities. With a large sample, the authors aimed to investigate genetic relationships between measures of reading-related cognition, hand motor skill, and hand skill lateralization.

    METHOD: The authors applied univariate and bivariate correlation and familiality analyses to a range of measures. They also performed genomewide linkage analysis of hand motor skill in a subgroup of 195 sibling pairs.

    RESULTS: Hand motor skill was significantly familial (maximum heritability=41%), as were reading-related measures. Hand motor skill was weakly but significantly correlated with reading-related measures, such as nonword reading and irregular word reading. However, these correlations were not significantly familial in nature, and the authors did not observe linkage of hand motor skill to any chromosomal regions implicated in susceptibility to dyslexia. Lateralization of hand skill was not correlated with reading or cognitive ability.

    CONCLUSIONS: The authors confirmed a relationship between lower motor ability and poor reading performance. However, the genetic effects on motor skill and reading ability appeared to be largely or wholly distinct, suggesting that the correlation between these traits may have arisen from environmental influences. Finally, the authors found no evidence that reading disability and/or low general cognitive ability were associated with ambidexterity.
  • Francks, C., DeLisi, L. E., Shaw, S. H., Fisher, S. E., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2003). Parent-of-origin effects on handedness and schizophrenia susceptibility on chromosome 2p12-q11. Human Molecular Genetics, 12(24), 3225-3230. doi:10.1093/hmg/ddg362.

    Abstract

    Schizophrenia and non-right-handedness are moderately associated, and both traits are often accompanied by abnormalities of asymmetrical brain morphology or function. We have found linkage previously of chromosome 2p12-q11 to a quantitative measure of handedness, and we have also found linkage of schizophrenia/schizoaffective disorder to this same chromosomal region in a separate study. Now, we have found that in one of our samples (191 reading-disabled sibling pairs), the relative hand skill of siblings was correlated more strongly with paternal than maternal relative hand skill. This led us to re-analyse 2p12-q11 under parent-of-origin linkage models. We found linkage of relative hand skill in the RD siblings to 2p12-q11 with P=0.0000037 for paternal identity-by-descent sharing, whereas the maternally inherited locus was not linked to the trait (P>0.2). Similarly, in affected-sib-pair analysis of our schizophrenia dataset (241 sibling pairs), we found linkage to schizophrenia for paternal sharing with LOD=4.72, P=0.0000016, within 3 cM of the peak linkage to relative hand skill. Maternal linkage across the region was weak or non-significant. These similar paternal-specific linkages suggest that the causative genetic effects on 2p12-q11 are related. The linkages may be due to a single maternally imprinted influence on lateralized brain development that contains common functional polymorphisms.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2003). A model for knowledge-based pronoun resolution. In F. Detje, D. Dörner, & H. Schaub (Eds.), The logic of cognitive systems (pp. 245-246). Bamberg: Otto-Friedrich Universität.

    Abstract

    Several sources of information are used in choosing the intended referent of an ambiguous pronoun. The two sources considered in this paper are foregrounding and context. The first refers to the accessibility of discourse entities. An entity that is foregrounded is more likely to become the pronoun’s referent than an entity that is not. Context information affects pronoun resolution when world knowledge is needed to find the referent. The model presented here simulates how world knowledge invoked by context, together with foregrounding, influences pronoun resolution. It was developed as an extension to the Distributed Situation Space (DSS) model of knowledge-based inferencing in story comprehension (Frank, Koppen, Noordman, & Vonk, 2003), which shall be introduced first.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2003). Modeling knowledge-based inferences in story comprehension. Cognitive Science, 27(6), 875-910. doi:10.1016/j.cogsci.2003.07.002.

    Abstract

    A computational model of inference during story comprehension is presented, in which story situations are represented distributively as points in a high-dimensional “situation-state space.” This state space organizes itself on the basis of a constructed microworld description. From the same description, causal/temporal world knowledge is extracted. The distributed representation of story situations is more flexible than Golden and Rumelhart’s [Discourse Proc 16 (1993) 203] localist representation. A story taking place in the microworld corresponds to a trajectory through situation-state space. During the inference process, world knowledge is applied to the story trajectory. This results in an adjusted trajectory, reflecting the inference of propositions that are likely to be the case. Although inferences do not result from a search for coherence, they do cause story coherence to increase. The results of simulations correspond to empirical data concerning inference, reading time, and depth of processing. An extension of the model for simulating story retention shows how coherence is preserved during retention without controlling the retention process. Simulation results correspond to empirical data concerning story recall and intrusion.
  • Gaby, A., & Faller, M. (2003). Reciprocity questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 77-80). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877641.

    Abstract

    This project is part of a collaborative project with the research group “Reciprocals across languages” led by Nick Evans. One goal of this project is to develop a typology of reciprocals. This questionnaire is designed to help field workers get an overview of the types of markers used in the expression of reciprocity in the language studied.
  • Galke, L., Ram, Y., & Raviv, L. (2024). Learning pressures and inductive biases in emergent communication: Parallels between humans and deep neural networks. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 197-201). Nijmegen: The Evolution of Language Conferences.
  • Galke, L., Ram, Y., & Raviv, L. (2024). Deep neural networks and humans both benefit from compositional language structure. Nature Communications, 15: 10816. doi:10.1038/s41467-024-55158-1.

    Abstract

    Deep neural networks drive the success of natural language processing. A fundamental property of language is its compositional structure, allowing humans to systematically produce forms for new meanings. For humans, languages with more compositional and transparent structures are typically easier to learn than those with opaque and irregular structures. However, this learnability advantage has not yet been shown for deep neural networks, limiting their use as models for human language learning. Here, we directly test how neural networks compare to humans in learning and generalizing different languages that vary in their degree of compositional structure. We evaluate the memorization and generalization capabilities of a large language model and recurrent neural networks, and show that both deep neural networks exhibit a learnability advantage for more structured linguistic input: neural networks exposed to more compositional languages show more systematic generalization, greater agreement between different agents, and greater similarity to human learners.
  • García-Marín, L. M., Campos, A. I., Diaz-Torres, S., Rabinowitz, J. A., Ceja, Z., Mitchell, B. L., Grasby, K. L., Thorp, J. G., Agartz, I., Alhusaini, S., Ames, D., Amouyel, P., Andreassen, O. A., Arfanakis, K., Arias Vasquez, A., Armstrong, N. J., Athanasiu, L., Bastin, M. E., Beiser, A. S., Bennett, D. A., Bis, J. C., Boks, M. P. M., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Buitelaar, J. K., Burkhardt, R., Cahn, W., Calhoun, V. D., Carmichael, O. T., Chakravarty, M., Chen, Q., Ching, C. R. K., Cichon, S., Crespo-Facorro, B., Crivello, F., Dale, A. M., Smith, G. D., De Geus, E. J. C., De Jager, P. L., De Zubicaray, G. I., Debette, S., DeCarli, C., Depondt, C., Desrivières, S., Djurovic, S., Ehrlich, S., Erk, S., Espeseth, T., Fernández, G., Filippi, I., Fisher, S. E., Fleischman, D. A., Fletcher, E., Fornage, M., Forstner, A. J., Francks, C., Franke, B., Ge, T., Goldman, A. L., Grabe, H. J., Green, R. C., Grimm, O., Groenewold, N. A., Gruber, O., Gudnason, V., Håberg, A. K., Haukvik, U. K., Heinz, A., Hibar, D. P., Hilal, S., Himali, J. J., Ho, B.-C., Hoehn, D. F., Hoekstra, P. J., Hofer, E., Hoffmann, W., Holmes, A. J., Homuth, G., Hosten, N., Ikram, M. K., Ipser, J. C., Jack Jr, C. R., Jahanshad, N., Jönsson, E. G., Kahn, R. S., Kanai, R., Klein, M., Knol, M. J., Launer, L. J., Lawrie, S. M., Le Hellard, S., Lee, P. H., Lemaître, H., Li, S., Liewald, D. C. M., Lin, H., Longstreth Jr, W. T., Lopez, O. L., Luciano, M., Maillard, P., Marquand, A. F., Martin, N. G., Martinot, J.-L., Mather, K. A., Mattay, V. S., McMahon, K. L., Mecocci, P., Melle, I., Meyer-Lindenberg, A., Mirza-Schreiber, N., Milaneschi, Y., Mosley, T. H., Mühleisen, T. 
W., Müller-Myhsok, B., Muñoz Maniega, S., Nauck, M., Nho, K., Niessen, W. J., Nöthen, M. M., Nyquist, P. A., Oosterlaan, J., Pandolfo, M., Paus, T., Pausova, Z., Penninx, B. W. J. H., Pike, G. B., Psaty, B. M., Pütz, B., Reppermund, S., Rietschel, M. D., Risacher, S. L., Romanczuk-Seiferth, N., Romero-Garcia, R., Roshchupkin, G. V., Rotter, J. I., Sachdev, P. S., Sämann, P. G., Saremi, A., Sargurupremraj, M., Saykin, A. J., Schmaal, L., Schmidt, H., Schmidt, R., Schofield, P. R., Scholz, M., Schumann, G., Schwarz, E., Shen, L., Shin, J., Sisodiya, S. M., Smith, A. V., Smoller, J. W., Soininen, H. S., Steen, V. M., Stein, D. J., Stein, J. L., Thomopoulos, S. I., Toga, A., Tordesillas-Gutiérrez, D. T., Trollor, J. N., Valdes-Hernandez, M. C., Van 't Ent, D., Van Bokhoven, H., Van der Meer, D., Van der Wee, N. J. A., Vázquez-Bourgon, J., Veltman, D. J., Vernooij, M. W., Villringer, A., Vinke, L. N., Völzke, H., Walter, H., Wardlaw, J. M., Weinberger, D. R., Weiner, M. W., Wen, W., Westlye, L. T., Westman, E., White, T., Witte, A. V., Wolf, C., Yang, J., Zwiers, M. P., Ikram, M. A., Seshadri, S., Thompson, P. M., Satizabal, C. L., Medland, S. E., & Rentería, M. E. (2024). Genomic analysis of intracranial and subcortical brain volumes yields polygenic scores accounting for brain variation across ancestries. Nature Genetics, 56, 2333-2344. doi:10.1038/s41588-024-01951-z.

    Abstract

    Subcortical brain structures are involved in developmental, psychiatric and neurological disorders. Here we performed genome-wide association studies meta-analyses of intracranial and nine subcortical brain volumes (brainstem, caudate nucleus, putamen, hippocampus, globus pallidus, thalamus, nucleus accumbens, amygdala and the ventral diencephalon) in 74,898 participants of European ancestry. We identified 254 independent loci associated with these brain volumes, explaining up to 35% of phenotypic variance. We observed gene expression in specific neural cell types across differentiation time points, including genes involved in intracellular signaling and brain aging-related processes. Polygenic scores for brain volumes showed predictive ability when applied to individuals of diverse ancestries. We observed causal genetic effects of brain volumes with Parkinson’s disease and attention-deficit/hyperactivity disorder. Findings implicate specific gene expression patterns in brain development and genetic variants in comorbid neuropsychiatric disorders, which could point to a brain substrate and region of action for risk genes implicated in brain diseases.
  • Ghaleb, E., Rasenberg, M., Pouw, W., Toni, I., Holler, J., Özyürek, A., & Fernandez, R. (2024). Analysing cross-speaker convergence through the lens of automatically detected shared linguistic constructions. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1717-1723).

    Abstract

    Conversation requires a substantial amount of coordination between dialogue participants, from managing turn taking to negotiating mutual understanding. Part of this coordination effort surfaces as the reuse of linguistic behaviour across speakers, a process often referred to as alignment. While the presence of linguistic alignment is well documented in the literature, several questions remain open, including the extent to which patterns of reuse across speakers have an impact on the emergence of labelling conventions for novel referents. In this study, we put forward a methodology for automatically detecting shared lemmatised constructions (expressions with a common lexical core used by both speakers within a dialogue) and apply it to a referential communication corpus where participants aim to identify novel objects for which no established labels exist. Our analyses uncover the usage patterns of shared constructions in interaction and reveal that features such as their frequency and the number of different constructions used for a referent are associated with the degree of object labelling convergence the participants exhibit after social interaction. More generally, the present study shows that automatically detected shared constructions offer a useful level of analysis to investigate the dynamics of reference negotiation in dialogue.

  • Ghaleb, E., Burenko, I., Rasenberg, M., Pouw, W., Uhrig, P., Holler, J., Toni, I., Ozyurek, A., & Fernandez, R. (2024). Co-speech gesture detection through multi-phase sequence labeling. In Proceedings of IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024) (pp. 4007-4015).

    Abstract

    Gestures are integral components of face-to-face communication. They unfold over time, often following predictable movement phases of preparation, stroke, and retraction. Yet, the prevalent approach to automatic gesture detection treats the problem as binary classification, classifying a segment as either containing a gesture or not, thus failing to capture its inherently sequential and contextual nature. To address this, we introduce a novel framework that reframes the task as a multi-phase sequence labeling problem rather than binary classification. Our model processes sequences of skeletal movements over time windows, uses Transformer encoders to learn contextual embeddings, and leverages Conditional Random Fields to perform sequence labeling. We evaluate our proposal on a large dataset of diverse co-speech gestures in task-oriented face-to-face dialogues. The results consistently demonstrate that our method significantly outperforms strong baseline models in detecting gesture strokes. Furthermore, applying Transformer encoders to learn contextual embeddings from movement sequences substantially improves gesture unit detection. These results highlight our framework’s capacity to capture the fine-grained dynamics of co-speech gesture phases, paving the way for more nuanced and accurate gesture detection and analysis.
  • Ghaleb, E., Khaertdinov, B., Pouw, W., Rasenberg, M., Holler, J., Ozyurek, A., & Fernandez, R. (2024). Learning co-speech gesture representations in dialogue through contrastive learning: An intrinsic evaluation. In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI 2024) (pp. 274-283).

    Abstract

    In face-to-face dialogues, the form-meaning relationship of co-speech gestures varies depending on contextual factors such as what the gestures refer to and the individual characteristics of speakers. These factors make co-speech gesture representation learning challenging. How can we learn meaningful gesture representations considering gestures’ variability and relationship with speech? This paper tackles this challenge by employing self-supervised contrastive learning techniques to learn gesture representations from skeletal and speech information. We propose an approach that includes both unimodal and multimodal pre-training to ground gesture representations in co-occurring speech. For training, we utilize a face-to-face dialogue dataset rich with representational iconic gestures. We conduct thorough intrinsic evaluations of the learned representations through comparison with human-annotated pairwise gesture similarity. Moreover, we perform a diagnostic probing analysis to assess the possibility of recovering interpretable gesture features from the learned representations. Our results show a significant positive correlation with human-annotated gesture similarity and reveal that the similarity between the learned representations is consistent with well-motivated patterns related to the dynamics of dialogue interaction. Moreover, our findings demonstrate that several features concerning the form of gestures can be recovered from the latent representations. Overall, this study shows that multimodal contrastive learning is a promising approach for learning gesture representations, which opens the door to using such representations in larger-scale gesture analysis studies.
  • Giglio, L., Ostarek, M., Sharoh, D., & Hagoort, P. (2024). Diverging neural dynamics for syntactic structure building in naturalistic speaking and listening. Proceedings of the National Academy of Sciences of the United States of America, 121(11): e2310766121. doi:10.1073/pnas.2310766121.

    Abstract

    The neural correlates of sentence production have been mostly studied with constraining task paradigms that introduce artificial task effects. In this study, we aimed to gain a better understanding of syntactic processing in spontaneous production vs. naturalistic comprehension. We extracted word-by-word metrics of phrase-structure building with top-down and bottom-up parsers that make different hypotheses about the timing of structure building. In comprehension, structure building proceeded in an integratory fashion and led to an increase in activity in posterior temporal and inferior frontal areas. In production, structure building was anticipatory and predicted an increase in activity in the inferior frontal gyrus. Newly developed production-specific parsers highlighted the anticipatory and incremental nature of structure building in production, which was confirmed by a converging analysis of the pausing patterns in speech. Overall, the results showed that the unfolding of syntactic processing diverges between speaking and listening.
  • Giglio, L., Sharoh, D., Ostarek, M., & Hagoort, P. (2024). Connectivity of fronto-temporal regions in syntactic structure building during speaking and listening. Neurobiology of Language, 5(4), 922-941. doi:10.1162/nol_a_00154.

    Abstract

    The neural infrastructure for sentence production and comprehension has been found to be mostly shared. The same regions are engaged during speaking and listening, with some differences in how strongly they activate depending on modality. In this study, we investigated how modality affects the connectivity between regions previously found to be involved in syntactic processing across modalities. We determined how constituent size and modality affected the connectivity of the pars triangularis of the left inferior frontal gyrus (LIFG) and of the left posterior temporal lobe (LPTL) with the pars opercularis of the LIFG, the anterior temporal lobe (LATL) and the rest of the brain. We found that constituent size reliably increased the connectivity across these frontal and temporal ROIs. Connectivity between the two LIFG regions and the LPTL was enhanced as a function of constituent size in both modalities, and it was upregulated in production possibly because of linearization and motor planning in the frontal cortex. The connectivity of both ROIs with the LATL was lower and only enhanced for larger constituent sizes, suggesting a contributing role of the LATL in sentence processing in both modalities. These results thus show that the connectivity among fronto-temporal regions is upregulated for syntactic structure building in both sentence production and comprehension, providing further evidence for accounts of shared neural resources for sentence-level processing across modalities.

  • Giglio, L., Hagoort, P., & Ostarek, M. (2024). Neural encoding of semantic structures during sentence production. Cerebral Cortex, 34(12): bhae482. doi:10.1093/cercor/bhae482.

    Abstract

    The neural representations for compositional processing have so far been mostly studied during sentence comprehension. In an fMRI study of sentence production, we investigated the brain representations for compositional processing during speaking. We used a rapid serial visual presentation sentence recall paradigm to elicit sentence production from the conceptual memory of an event. With voxel-wise encoding models, we probed the specificity of the compositional structure built during the production of each sentence, comparing an unstructured model of word meaning without relational information with a model that encodes abstract thematic relations and a model encoding event-specific relational structure. Whole-brain analyses revealed that sentence meaning at different levels of specificity was encoded in a large left frontal-parietal-temporal network. A comparison with semantic structures composed during the comprehension of the same sentences showed similarly distributed brain activity patterns. An ROI analysis over left fronto-temporal language parcels showed that event-specific relational structure above word-specific information was encoded in the left inferior frontal gyrus. Overall, we found evidence for the encoding of sentence meaning during sentence production in a distributed brain network and for the encoding of event-specific semantic structures in the left inferior frontal gyrus.

  • Gisselgard, J., Petersson, K. M., Baddeley, A., & Ingvar, M. (2003). The irrelevant speech effect: A PET study. Neuropsychologia, 41, 1899-1911. doi:10.1016/S0028-3932(03)00122-2.

    Abstract

    Positron emission tomography (PET) was performed in normal volunteers during a serial recall task under the influence of irrelevant speech comprising both single item repetition and multi-item sequences. An interaction approach was used to identify brain areas specifically related to the irrelevant speech effect. We interpreted activations as compensatory recruitment of complementary working memory processing, and decreased activity in terms of suppression of task relevant areas invoked by the irrelevant speech. The interaction between the distractors and working memory revealed a significant effect in the left, and to a lesser extent in the right, superior temporal region, indicating that initial phonological processing was relatively suppressed. Additional areas of decreased activity were observed in an a priori defined cortical network related to verbal working memory, incorporating the bilateral superior temporal and inferior/middle frontal cortices, extending into Broca’s area on the left. We also observed a weak activation in the left inferior parietal cortex, a region suggested to reflect the phonological store, the subcomponent where the interference is assumed to take place. The results suggest that the irrelevant speech effect is correlated with and thus tentatively may be explained in terms of a suppression of components of the verbal working memory network as outlined. The results can be interpreted in terms of inhibitory top–down attentional mechanisms attenuating the influence of the irrelevant speech, although additional studies are clearly necessary to more fully characterize the nature of this phenomenon and its theoretical implications for existing short-term memory models.
  • Goltermann*, O., Alagöz*, G., Molz, B., & Fisher, S. E. (2024). Neuroimaging genomics as a window into the evolution of human sulcal organization. Cerebral Cortex, 34(3): bhae078. doi:10.1093/cercor/bhae078.

    Abstract

    * Ole Goltermann and Gökberk Alagöz contributed equally.
    Primate brain evolution has involved prominent expansions of the cerebral cortex, with the largest effects observed in the human lineage. Such expansions were accompanied by fine-grained anatomical alterations, including increased cortical folding. However, the molecular bases of evolutionary alterations in human sulcal organization are not yet well understood. Here, we integrated data from recently completed large-scale neuroimaging genetic analyses with annotations of the human genome relevant to various periods and events in our evolutionary history. These analyses identified single-nucleotide polymorphism (SNP) heritability enrichments in fetal brain human-gained enhancer (HGE) elements for a number of sulcal structures, including the central sulcus, which is implicated in human hand dexterity. We zeroed in on a genomic region that harbors DNA variants associated with left central sulcus shape, an HGE element, and genetic loci involved in neurogenesis including ZIC4, to illustrate the value of this approach for probing the complex factors contributing to human sulcal evolution.

    Additional information

    supplementary data
    link to preprint
  • Goncharova, M. V., Jadoul, Y., Reichmuth, C., Fitch, W. T., & Ravignani, A. (2024). Vocal tract dynamics shape the formant structure of conditioned vocalizations in a harbor seal. Annals of the New York Academy of Sciences, 1538(1), 107-116. doi:10.1111/nyas.15189.

    Abstract

    Formants, or resonance frequencies of the upper vocal tract, are an essential part of acoustic communication. Articulatory gestures—such as jaw, tongue, lip, and soft palate movements—shape formant structure in human vocalizations, but little is known about how nonhuman mammals use those gestures to modify formant frequencies. Here, we report a case study with an adult male harbor seal trained to produce an arbitrary vocalization composed of multiple repetitions of the sound wa. We analyzed jaw movements frame-by-frame and matched them to the tracked formant modulation in the corresponding vocalizations. We found that the jaw opening angle was strongly correlated with the first (F1) and, to a lesser degree, with the second formant (F2). F2 variation was better explained by the jaw angle opening when the seal was lying on his back rather than on the belly, which might derive from soft tissue displacement due to gravity. These results show that harbor seals share some common articulatory traits with humans, where the F1 depends more on the jaw position than F2. We propose further in vivo investigations of seals to further test the role of the tongue on formant modulation in mammalian sound production.
  • González-Peñas, J., Alloza, C., Brouwer, R., Díaz-Caneja, C. M., Costas, J., González-Lois, N., Gallego, A. G., De Hoyos, L., Gurriarán, X., Andreu-Bernabeu, Á., Romero-García, R., Fañanas, L., Bobes, J., Pinto, A. G., Crespo-Facorro, B., Martorell, L., Arrojo, M., Vilella, E., Guitiérrez-Zotes, A., Perez-Rando, M., Moltó, M. D., CIBERSAM group, Buimer, E., Van Haren, N., Cahn, W., O’Donovan, M., Kahn, R. S., Arango, C., Hulshoff Pol, H., Janssen, J., & Schnack, H. (2024). Accelerated cortical thinning in schizophrenia is associated with rare and common predisposing variation to schizophrenia and neurodevelopmental disorders. Biological Psychiatry, 96(5), 376-389. doi:10.1016/j.biopsych.2024.03.011.

    Abstract

    Background

    Schizophrenia is a highly heritable disorder characterized by increased cortical thinning throughout the lifespan. Studies have reported a shared genetic basis between schizophrenia and cortical thickness. However, no genes whose expression is related to abnormal cortical thinning in schizophrenia have been identified.

    Methods

    We fitted linear mixed models to estimate the rates of accelerated cortical thinning across 68 regions from the Desikan-Killiany atlas in individuals with schizophrenia compared to healthy controls from a large longitudinal sample (NCases = 169 and NControls = 298, aged 16-70 years). We studied the correlation between gene expression data from the Allen Human Brain Atlas and accelerated thinning estimates across cortical regions. Finally, we explored the functional and genetic underpinnings of the genes contributing most to accelerated thinning.

    Results

    We described a global pattern of accelerated cortical thinning in individuals with schizophrenia compared to healthy controls. Genes underexpressed in cortical regions exhibiting this accelerated thinning were downregulated in several psychiatric disorders and were enriched for both common and rare disrupting variation for schizophrenia and neurodevelopmental disorders. In contrast, none of these enrichments were observed for baseline cross-sectional cortical thickness differences.

    Conclusions

    Our findings suggest that accelerated cortical thinning, rather than cortical thickness alone, serves as an informative phenotype for neurodevelopmental disruptions in schizophrenia. We highlight the genetic and transcriptomic correlates of this accelerated cortical thinning, emphasizing the need for future longitudinal studies to elucidate the role of genetic variation and the temporal-spatial dynamics of gene expression in brain development and aging in schizophrenia.

    Additional information

    supplementary materials
  • Gordon, J. K., & Clough, S. (2024). The Flu-ID: A new evidence-based method of assessing fluency in aphasia. American Journal of Speech-Language Pathology, 33, 2972-2990. doi:10.1044/2024_AJSLP-23-00424.

    Abstract

    Purpose:
    Assessing fluency in aphasia is diagnostically important for determining aphasia type and severity and therapeutically important for determining appropriate treatment targets. However, wide variability in the measures and criteria used to assess fluency, as revealed by a recent survey of clinicians (Gordon & Clough, 2022), results in poor reliability. Furthermore, poor specificity in many fluency measures makes it difficult to identify the underlying impairments. Here, we introduce the Flu-ID Aphasia, an evidence-based tool that provides a more informative method of assessing fluency by capturing the range of behaviors that can affect the flow of speech in aphasia.

    Method:
    The development of the Flu-ID was based on prior evidence about factors underlying fluency (Clough & Gordon, 2020; Gordon & Clough, 2020) and clinical perceptions about the measurement of fluency (Gordon & Clough, 2022). Clinical utility is maximized by automated counting of fluency behaviors in an Excel template. Reliability is maximized by outlining thorough guidelines for transcription and coding. Eighteen narrative samples representing a range of fluency were coded independently by the authors to examine the Flu-ID's utility, reliability, and validity.

    Results:
    Overall reliability was very good, with point-to-point agreement of 86% between coders. Ten of the 12 dimensions showed good to excellent reliability. Validity analyses indicated that Flu-ID scores were similar to clinician ratings on some dimensions, but differed on others. Possible reasons and implications of the discrepancies are discussed, along with opportunities for improvement.

    Conclusions:
    The Flu-ID assesses fluency in aphasia using a consistent and comprehensive set of measures and semi-automated procedures to generate individual fluency profiles. The profiles generated in the current study illustrate how similar ratings of fluency can arise from different underlying impairments. Supplemental materials include an analysis template, extensive guidelines for transcription and coding, a completed sample, and a quick reference guide.

    Additional information

    supplemental material
  • De Gregorio, C., Raimondi, T., Bevilacqua, V., Pertosa, C., Valente, D., Carugati, F., Bandoli, F., Favaro, L., Lefaux, B., Ravignani, A., & Gamba, M. (2024). Isochronous singing in 3 crested gibbon species (Nomascus spp.). Current Zoology, 70(3), 291-297. doi:10.1093/cz/zoad029.

    Abstract

    The search for common characteristics between the musical abilities of humans and other animal species is still taking its first steps. One of the most promising aspects from a comparative point of view is the analysis of rhythmic components, which are crucial features of human communicative performance but also well-identifiable patterns in the vocal displays of other species. Therefore, the study of rhythm is becoming essential to understand the mechanisms of singing behavior and the evolution of human communication. Recent findings provided evidence that particular rhythmic structures occur in human music and some singing animal species, such as birds and rock hyraxes, but only 2 species of nonhuman primates have been investigated so far (Indri indri and Hylobates lar). Therefore, our study aims to consistently broaden the list of species studied regarding the presence of rhythmic categories. We investigated the temporal organization in the singing of 3 species of crested gibbons (Nomascus gabriellae, Nomascus leucogenys, and Nomascus siki) and found that the most prominent rhythmic category was isochrony. Moreover, we found slight variation in songs’ tempo among species, with N. gabriellae and N. siki singing with a temporal pattern involving a gradually increasing tempo (a musical accelerando), and N. leucogenys with a more regular pattern. Here, we show how the prominence of a peak at isochrony establishes itself as a shared characteristic in the small apes considered so far.
  • De Gregorio, C., Maiolini, M., Raimondi, T., Carugati, F., Miaretsoa, L., Valente, D., Torti, V., Giacoma, C., Ravignani, A., & Gamba, M. (2024). Isochrony as ancestral condition to call and song in a primate. Annals of the New York Academy of Sciences, 1537(1), 41-50. doi:10.1111/nyas.15151.

    Abstract

    Animal songs differ from calls in function and structure, and have comparative and translational value, showing similarities to human music. Rhythm in music is often distributed in quantized classes of intervals known as rhythmic categories. These classes have been found in the songs of a few nonhuman species but never in their calls. Are rhythmic categories song-specific, as in human music, or can they transcend the song–call boundary? We analyze the vocal displays of one of the few mammals producing both songs and call sequences: Indri indri. We test whether rhythmic categories (a) are conserved across songs produced in different contexts, (b) exist in call sequences, and (c) differ between songs and call sequences. We show that rhythmic categories occur across vocal displays. Vocalization type and function modulate deployment of categories. We find isochrony (1:1 ratio, like the rhythm of a ticking clock) in all song types, but only advertisement songs show three rhythmic categories (1:1, 1:2, 2:1 ratios). Like songs, some call types are also isochronous. Isochrony is the backbone of most indri vocalizations, unlike human speech, where it is rare. In indri, isochrony underlies both songs and hierarchy-less call sequences and might be ancestral to both.

    Additional information

    tables
  • Gretsch, P. (2003). Omission impossible?: Topic and Focus in Focal Ellipsis. In K. Schwabe, & S. Winkler (Eds.), The Interfaces: Deriving and interpreting omitted structures (pp. 341-365). Amsterdam: John Benjamins.
  • Grönberg, D. J., Pinto de Carvalho, S. L., Dernerova, N., Norton, P., Wong, M. M. K., & Mendoza, E. (2024). Expression and regulation of SETBP1 in the song system of male zebra finches (Taeniopygia guttata) during singing. Scientific Reports, 14: 29057. doi:10.1038/s41598-024-75353-w.

    Abstract

    Rare de novo heterozygous loss-of-function SETBP1 variants lead to a neurodevelopmental disorder characterized by speech deficits, indicating a potential involvement of SETBP1 in human speech. However, the expression pattern of SETBP1 in brain regions associated with vocal learning remains poorly understood, along with the underlying molecular mechanisms linking it to vocal production. In this study, we examined SETBP1 expression in the brain of male zebra finches, a well-established model for studying vocal production learning. We demonstrated that zebra finch SETBP1 exhibits a greater number of exons and isoforms compared to its human counterpart. We characterized a SETBP1 antibody and showed that SETBP1 colocalized with FoxP1, FoxP2, and Parvalbumin in key song nuclei. Moreover, SETBP1 expression in neurons in Area X is significantly higher in zebra finches singing alone than in those singing courtship song to a female, or in non-singers. Importantly, we found a distinctive neuronal protein expression of SETBP1 and FoxP2 in Area X only in zebra finches singing alone, but not in the other conditions. We demonstrated SETBP1's regulatory role on FoxP2 promoter activity in vitro. Taken together, these findings provide compelling evidence that SETBP1 expression in brain regions crucial for vocal learning is modulated by singing behavior.

    Additional information

    supplementary material
  • Grosseck, O., Perlman, M., Ortega, G., & Raviv, L. (2024). The iconic affordances of gesture and vocalization in emerging languages in the lab. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 223-225). Nijmegen: The Evolution of Language Conferences.
  • Le Guen, O. (2003). Quand les morts reviennent, réflexion sur l'ancestralité chez les Mayas des Basses Terres. Journal de la Société des Américanistes, 89(2), 171-205.

    Abstract

    When the dead come home… Remarks on ancestor worship among the Lowland Mayas. In Amerindian ethnographic literature, ancestor worship is often mentioned, but evidence of its existence is lacking. This article attempts to demonstrate that some Lowland Maya do worship ancestors, applying precise criteria drawn from ethnological studies of societies where ancestor worship is common to Maya beliefs and practices. All Souls’ Day, or hanal pixan, seems to be the most significant manifestation of this cult. Our approach is comparative across time – using colonial sources and twentieth-century ethnographic data – and across space – considering the practices and beliefs of two Maya groups, the Yucatec and the Lacandon Maya.
  • Gullberg, M. (2003). Eye movements and gestures in human face-to-face interaction. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eyes: Cognitive and applied aspects of eye movements (pp. 685-703). Oxford: Elsevier.

    Abstract

    Gestures are visuospatial events, meaning carriers, and social interactional phenomena. As such they constitute a particularly favourable area for investigating visual attention in a complex everyday situation under conditions of competitive processing. This chapter discusses visual attention to spontaneous gestures in human face-to-face interaction as explored with eye-tracking. Some basic fixation patterns are described, live and video-based settings are compared, and preliminary results on the relationship between fixations and information processing are outlined.
  • Gullberg, M., & Kita, S. (2003). Das Beachten von Gesten: Eine Studie zu Blickverhalten und Integration gestisch ausgedrückter Informationen. In Max-Planck-Gesellschaft (Ed.), Jahrbuch der Max Planck Gesellschaft 2003 (pp. 949-953). Göttingen: Vandenhoeck & Ruprecht.
  • Gullberg, M. (2003). Gestures, referents, and anaphoric linkage in learner varieties. In C. Dimroth, & M. Starren (Eds.), Information structure, linguistic structure and the dynamics of language acquisition. (pp. 311-328). Amsterdam: Benjamins.

    Abstract

    This paper discusses how the gestural modality can contribute to our understanding of anaphoric linkage in learner varieties, focusing on gestural anaphoric linkage marking the introduction, maintenance, and shift of reference in story retellings by learners of French and Swedish. The comparison of gestural anaphoric linkage in native and non-native varieties reveals what appears to be a particular learner variety of gestural cohesion, which closely reflects the characteristics of anaphoric linkage in learners' speech. Specifically, particular forms co-occur with anaphoric gestures depending on the information organisation in discourse. The typical nominal over-marking of maintained referents or topic elements in speech is mirrored by gestural (over-)marking of the same items. The paper discusses two ways in which this finding may further the understanding of anaphoric over-explicitness of learner varieties. An addressee-based communicative perspective on anaphoric linkage highlights how over-marking in gesture and speech may be related to issues of hyper-clarity and ambiguity. An alternative speaker-based perspective is also explored in which anaphoric over-marking is seen as related to L2 speech planning.
  • Guzmán Chacón, E., Ovando-Tellez, M., Thiebaut de Schotten, M., & Forkel, S. J. (2024). Embracing digital innovation in neuroscience: 2023 in review at NEUROCCINO. Brain Structure & Function, 229, 251-255. doi:10.1007/s00429-024-02768-6.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Syntax-related ERP-effects in Dutch. Cognitive Brain Research, 16(1), 38-50. doi:10.1016/S0926-6410(02)00208-2.

    Abstract

    In two studies subjects were required to read Dutch sentences that in some cases contained a syntactic violation, in other cases a semantic violation. All syntactic violations were word category violations. The design excluded differential contributions of expectancy to influence the syntactic violation effects. The syntactic violations elicited an Anterior Negativity between 300 and 500 ms. This negativity was bilateral and had a frontal distribution. Over posterior sites the same violations elicited a P600/SPS starting at about 600 ms. The semantic violations elicited an N400 effect. The topographic distribution of the AN was more frontal than the distribution of the classical N400 effect, indicating that the underlying generators of the AN and the N400 are, at least to a certain extent, non-overlapping. Experiment 2 partly replicated the design of Experiment 1, but with differences in rate of presentation and in the distribution of items over subjects, and without semantic violations. The word category violations resulted in the same effects as were observed in Experiment 1, showing that they were independent of some of the specific parameters of Experiment 1. The discussion presents a tentative account of the functional differences in the triggering conditions of the AN and the P600/SPS.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Real-time semantic compensation in patients with agrammatic comprehension: Electrophysiological evidence for multiple-route plasticity. Proceedings of the National Academy of Sciences of the United States of America, 100(7), 4340-4345. doi:10.1073/pnas.0230613100.

    Abstract

    To understand spoken language requires that the brain provides rapid access to different kinds of knowledge, including the sounds and meanings of words, and syntax. Syntax specifies constraints on combining words in a grammatically well formed manner. Agrammatic patients are deficient in their ability to use these constraints, due to a lesion in the perisylvian area of the language-dominant hemisphere. We report a study on real-time auditory sentence processing in agrammatic comprehenders, examining their ability to accommodate damage to the language system. We recorded event-related brain potentials (ERPs) in agrammatic comprehenders, nonagrammatic aphasics, and age-matched controls. When listening to sentences with grammatical violations, the agrammatic aphasics did not show the same syntax-related ERP effect as the two other subject groups. Instead, the waveforms of the agrammatic aphasics were dominated by a meaning-related ERP effect, presumably reflecting their attempts to achieve understanding by the use of semantic constraints. These data demonstrate that although agrammatic aphasics are impaired in their ability to exploit syntactic information in real time, they can reduce the consequences of a syntactic deficit by exploiting a semantic route. They thus provide evidence for the compensation of a syntactic deficit by a stronger reliance on another route in mapping sound onto meaning. This is a form of plasticity that we refer to as multiple-route plasticity.
  • Hagoort, P. (2003). De verloving tussen neurowetenschap en psychologie. In K. Hilberdink (Ed.), Interdisciplinariteit in de geesteswetenschappen (pp. 73-81). Amsterdam: KNAW.
  • Hagoort, P. (2003). Die einzigartige, grösstenteils aber unbewusste Fähigkeit der Menschen zu sprachlicher Kommunikation. In G. Kaiser (Ed.), Jahrbuch 2002-2003 / Wissenschaftszentrum Nordrhein-Westfalen (pp. 33-46). Düsseldorf: Wissenschaftszentrum Nordrhein-Westfalen.
  • Hagoort, P. (2003). Functional brain imaging. In W. J. Frawley (Ed.), International encyclopedia of linguistics (pp. 142-145). New York: Oxford University Press.
  • Hagoort, P. (2003). How the brain solves the binding problem for language: A neurocomputational model of syntactic processing. NeuroImage, 20(suppl. 1), S18-S29. doi:10.1016/j.neuroimage.2003.09.013.

    Abstract

    Syntax is one of the components in the architecture of language processing that allows the listener/reader to bind single-word information into a unified interpretation of multiword utterances. This paper discusses ERP effects that have been observed in relation to syntactic processing. The fact that these effects differ from the semantic N400 indicates that the brain honors the distinction between semantic and syntactic binding operations. Two models of syntactic processing attempt to account for syntax-related ERP effects. One type of model is serial, with a first phase that is purely syntactic in nature (syntax-first model). The other type of model is parallel and assumes that information immediately guides the interpretation process once it becomes available. This is referred to as the immediacy model. ERP evidence is presented in support of the latter model. Next, an explicit computational model is proposed to explain the ERP data. This Unification Model assumes that syntactic frames are stored in memory and retrieved on the basis of the spoken or written word form input. The syntactic frames associated with the individual lexical items are unified by a dynamic binding process into a structural representation that spans the whole utterance. On the basis of a meta-analysis of imaging studies on syntax, it is argued that the left posterior inferior frontal cortex is involved in binding syntactic frames together, whereas the left superior temporal cortex is involved in retrieval of the syntactic frames stored in memory. Lesion data that support the involvement of this left frontotemporal network in syntactic processing are discussed.
  • Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15(6), 883-899. doi:10.1162/089892903322370807.

    Abstract

    This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/ SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/ SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.
  • Hagoort, P., & Özyürek, A. (2024). Extending the architecture of language from a multimodal perspective. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12728.

    Abstract

    Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
  • Hartmann, S., Wacewicz, S., Ravignani, A., Valente, D., Rodrigues, E. D., Asano, R., & Jadoul, Y. (2024). Delineating the field of language evolution research: A quantitative analysis of peer-review patterns at the Joint Conference on Language Evolution (JCoLE 2022). Interaction studies, 25(1), 100-117. doi:10.1075/is.00024.har.

    Abstract

    Research on language evolution is an established subject area, yet it is permeated by terminological controversies about which topics should be considered pertinent to the field and which should not. As a consequence, scholars focusing on language evolution struggle to provide precise demarcations of the discipline, where even the very central notions of evolution and language are elusive. We aimed to provide a data-driven characterization of language evolution as a field of research by relying on quantitative analysis of data drawn from 697 reviews on 255 submissions to the Joint Conference on Language Evolution 2022 (Kanazawa, Japan). Our results delineate a field characterized by a core of main research topics such as iconicity, sign language, and multimodality. Despite being explored within language evolution research, these topics have only very recently become popular in linguistics. As a result, language evolution has the potential to emerge as a forefront of linguistic research, bringing innovation to the study of language. We also see the emergence of more recent topics like rhythm, music, and vocal learning. Furthermore, the community identifies cognitive science, primatology, archaeology, palaeoanthropology, and genetics as key areas, encouraging empirical rather than theoretical work. With new themes, models, and methodologies emerging, our results depict an intrinsically multidisciplinary and evolving research field, likely adapting as language itself does.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Haun, D. B. M., & Waller, D. (2003). Alignment task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 39-48). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Haun, D. B. M. (2003). Path integration. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 33-38). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877644.
  • Haun, D. B. M. (2003). Spatial updating. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 49-56). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Hegemann, L., Corfield, E. C., Askelund, A. D., Allegrini, A. G., Askeland, R. B., Ronald, A., Ask, H., St Pourcain, B., Andreassen, O. A., Hannigan, L. J., & Havdahl, A. (2024). Genetic and phenotypic heterogeneity in early neurodevelopmental traits in the Norwegian Mother, Father and Child Cohort Study. Molecular Autism, 15: 25. doi:10.1186/s13229-024-00599-0.

    Abstract

    Background
    Autism and different neurodevelopmental conditions frequently co-occur, as do their symptoms at sub-diagnostic threshold levels. Overlapping traits and shared genetic liability are potential explanations.

    Methods
    In the population-based Norwegian Mother, Father, and Child Cohort study (MoBa), we leverage item-level data to explore the phenotypic factor structure and genetic architecture underlying neurodevelopmental traits at age 3 years (N = 41,708–58,630) using maternal reports on 76 items assessing children’s motor and language development, social functioning, communication, attention, activity regulation, and flexibility of behaviors and interests.

    Results
    We identified 11 latent factors at the phenotypic level. These factors showed associations with diagnoses of autism and other neurodevelopmental conditions. Most shared genetic liabilities with autism, ADHD, and/or schizophrenia. Item-level GWAS revealed trait-specific genetic correlations with autism (items rg range = − 0.27–0.78), ADHD (items rg range = − 0.40–1), and schizophrenia (items rg range = − 0.24–0.34). We find little evidence of common genetic liability across all neurodevelopmental traits but more so for several genetic factors across more specific areas of neurodevelopment, particularly social and communication traits. Some of these factors, such as one capturing prosocial behavior, overlap with factors found in the phenotypic analyses. Other areas, such as motor development, seemed to have more heterogeneous etiology, with specific traits showing a less consistent pattern of genetic correlations with each other.

    Conclusions
    These exploratory findings emphasize the etiological complexity of neurodevelopmental traits at this early age. In particular, diverse associations with neurodevelopmental conditions and genetic heterogeneity could inform follow-up work to identify shared and differentiating factors in the early manifestations of neurodevelopmental traits and their relation to autism and other neurodevelopmental conditions. This in turn could have implications for clinical screening tools and programs.
  • Heim, F., Scharff, C., Fisher, S. E., Riebel, K., & Ten Cate, C. (2024). Auditory discrimination learning and acoustic cue weighing in female zebra finches with localized FoxP1 knockdowns. Journal of Neurophysiology, 131, 950-963. doi:10.1152/jn.00228.2023.

    Abstract

    Rare disruptions of the transcription factor FOXP1 are implicated in a human neurodevelopmental disorder characterized by autism and/or intellectual disability with prominent problems in speech and language abilities. Avian orthologues of this transcription factor are evolutionarily conserved and highly expressed in specific regions of songbird brains, including areas associated with vocal production learning and auditory perception. Here, we investigated possible contributions of FoxP1 to song discrimination and auditory perception in juvenile and adult female zebra finches. They received lentiviral knockdowns of FoxP1 in one of two brain areas involved in auditory stimulus processing, HVC (proper name) or CMM (caudomedial mesopallium). Ninety-six females, distributed over different experimental and control groups, were trained to discriminate between two stimulus songs in an operant Go/Nogo paradigm and subsequently tested with an array of stimuli. This made it possible to assess how well they recognized and categorized altered versions of training stimuli and whether localized FoxP1 knockdowns affected the role of different features during discrimination and categorization of song. Although FoxP1 expression was significantly reduced by the knockdowns, neither discrimination of the stimulus songs nor categorization of songs modified in pitch, sequential order of syllables or by reversed playback were affected. Subsequently, we analyzed the full dataset to assess the impact of the different stimulus manipulations for cue weighing in song discrimination. Our findings show that zebra finches rely on multiple parameters for song discrimination, but with relatively more prominent roles for spectral parameters and syllable sequencing as cues for song discrimination.

    NEW & NOTEWORTHY In humans, mutations of the transcription factor FoxP1 are implicated in speech and language problems. In songbirds, FoxP1 has been linked to male song learning and female preference strength. We found that FoxP1 knockdowns in female HVC and caudomedial mesopallium (CMM) did not alter song discrimination or categorization based on spectral and temporal information. However, this large dataset allowed us to validate different cue weights for spectral over temporal information for song recognition.
  • Hellwig, B. (2003). The grammatical coding of postural semantics in Goemai (a West Chadic language of Nigeria). PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58463.
  • Hersh, T. A., Ravignani, A., & Whitehead, H. (2024). Cetaceans are the next frontier for vocal rhythm research. Proceedings of the National Academy of Sciences of the United States of America, 121(25): e2313093121. doi:10.1073/pnas.2313093121.

    Abstract

    While rhythm can facilitate and enhance many aspects of behavior, its evolutionary trajectory in vocal communication systems remains enigmatic. We can trace evolutionary processes by investigating rhythmic abilities in different species, but research to date has largely focused on songbirds and primates. We present evidence that cetaceans—whales, dolphins, and porpoises—are a missing piece of the puzzle for understanding why rhythm evolved in vocal communication systems. Cetaceans not only produce rhythmic vocalizations but also exhibit behaviors known or thought to play a role in the evolution of different features of rhythm. These behaviors include vocal learning abilities, advanced breathing control, sexually selected vocal displays, prolonged mother–infant bonds, and behavioral synchronization. The untapped comparative potential of cetaceans is further enhanced by high interspecific diversity, which generates natural ranges of vocal and social complexity for investigating various evolutionary hypotheses. We show that rhythm (particularly isochronous rhythm, when sounds are equally spaced in time) is prevalent in cetacean vocalizations but is used in different contexts by baleen and toothed whales. We also highlight key questions and research areas that will enhance understanding of vocal rhythms across taxa. By coupling an infraorder-level taxonomic assessment of vocal rhythm production with comparisons to other species, we illustrate how broadly comparative research can contribute to a more nuanced understanding of the prevalence, evolution, and possible functions of rhythm in animal communication.

    Additional information

    supporting information
  • Hintz, F., McQueen, J. M., & Meyer, A. S. (2024). Using psychometric network analysis to examine the components of spoken word recognition. Journal of Cognition, 7(1): 10. doi:10.5334/joc.340.

    Abstract

    Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. A total of 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.

    Additional information

    network analysis of dataset A and B
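    The core idea of psychometric network analysis — estimating the unique association between each pair of scores while controlling for all other scores — can be sketched with partial correlations derived from the inverse covariance (precision) matrix. The snippet below is a minimal illustration with simulated data (the variable names are hypothetical stand-ins, not the study's actual measures or its estimation method, which typically also applies regularization):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated scores for three illustrative "tasks": word recognition is
    # partly driven by processing speed; reasoning is independent of both.
    n = 1000
    speed = rng.normal(size=n)
    word_recognition = 0.6 * speed + rng.normal(size=n)
    reasoning = rng.normal(size=n)
    scores = np.column_stack([speed, word_recognition, reasoning])

    # Invert the covariance matrix; rescaling the off-diagonal entries of
    # the precision matrix yields partial correlations: the association
    # between each pair of variables, controlling for all the others.
    precision = np.linalg.inv(np.cov(scores, rowvar=False))
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)
    np.fill_diagonal(partial_corr, 1.0)

    # The network's edges are the nonzero partial correlations: here a
    # strong speed-word_recognition edge and near-zero edges to reasoning.
    print(np.round(partial_corr, 2))
    ```

    In network terms, a direct link (edge) between two tasks survives only if their scores remain correlated after partialling out every other task in the battery.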
  • Hintz, F., & Meyer, A. S. (Eds.). (2024). Individual differences in language skills [Special Issue]. Journal of Cognition, 7(1).
  • Hintz, F., Voeten, C. C., Dobó, D., Lukics, K. S., & Lukács, Á. (2024). The role of general cognitive skills in integrating visual and linguistic information during sentence comprehension: Individual differences across the lifespan. Scientific Reports, 14: 17797. doi:10.1038/s41598-024-68674-3.

    Abstract

    Individuals exhibit massive variability in general cognitive skills that affect language processing. This variability is partly developmental. Here, we recruited a large sample of participants (N = 487), ranging from 9 to 90 years of age, and examined the involvement of nonverbal processing speed (assessed using visual and auditory reaction time tasks) and working memory (assessed using forward and backward Digit Span tasks) in a visual world task. Participants saw two objects on the screen and heard a sentence that referred to one of them. In half of the sentences, the target object could be predicted based on verb-selectional restrictions. We observed evidence for anticipatory processing on predictable compared to non-predictable trials. Visual and auditory processing speed had main effects on sentence comprehension and facilitated predictive processing, as evidenced by an interaction. We observed only weak evidence for the involvement of working memory in predictive sentence comprehension. Age had a nonlinear main effect (younger adults responded faster than children and older adults), but it did not differentially modulate predictive and non-predictive processing, nor did it modulate the involvement of processing speed and working memory. Our results contribute to delineating the cognitive skills that are involved in language-vision interactions.

    Additional information

    supplementary information
  • Hintz, F., Shkaravska, O., Dijkhuis, M., Van 't Hoff, V., Huijsmans, M., Van Dongen, R. C., Voeteé, L. A., Trilsbeek, P., McQueen, J. M., & Meyer, A. S. (2024). IDLaS-NL – A platform for running customized studies on individual differences in Dutch language skills via the internet. Behavior Research Methods, 56(3), 2422-2436. doi:10.3758/s13428-023-02156-8.

    Abstract

    We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions, and to determine their order. Moreover, for standardized administration, the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV-file output via email. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and, in general, anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Hope, T. M. H., Neville, D., Talozzi, L., Foulon, C., Forkel, S. J., Thiebaut de Schotten, M., & Price, C. J. (2024). Testing the disconnectome symptom discoverer model on out-of-sample post-stroke language outcomes. Brain, 147(2), e11-e13. doi:10.1093/brain/awad352.

    Abstract

    Stroke is common, and its consequent brain damage can cause various cognitive impairments. Associations between where and how much brain lesion damage a patient has suffered, and the particular impairments that injury has caused (lesion-symptom associations) offer potentially compelling insights into how the brain implements cognition.1 A better understanding of those associations can also fill a gap in current stroke medicine by helping us to predict how individual patients might recover from post-stroke impairments.2 Most recent work in this area employs machine learning models trained with data from stroke patients whose mid-to-long-term outcomes are known.2-4 These machine learning models are tested by predicting new outcomes—typically scores on standardized tests of post-stroke impairment—for patients whose data were not used to train the model. Traditionally, these validation results have been shared in peer-reviewed publications describing the model and its training. But recently, and for the first time in this field (as far as we know), one of these pre-trained models has been made public—The Disconnectome Symptom Discoverer model (DSD) which draws its predictors from structural disconnection information inferred from stroke patients’ brain MRI.5

    Here, we test the DSD model on wholly independent data, never seen by the model authors, before they published it. Specifically, we test whether its predictive performance is just as accurate as (i.e. not significantly worse than) that reported in the original (Washington University) dataset, when predicting new patients’ outcomes at a similar time post-stroke (∼1 year post-stroke) and also in another independent sample tested later (5+ years) post-stroke. A failure to generalize the DSD model occurs if it performs significantly better in the Washington data than in our data from patients tested at a similar time point (∼1 year post-stroke). In addition, a significant decrease in predictive performance for the more chronic sample would be evidence that lesion-symptom associations differ at ∼1 year post-stroke and >5 years post-stroke.
  • Horton, S., Jackson, V., Boyce, J., Franken, M.-C., Siemers, S., St John, M., Hearps, S., Van Reyk, O., Braden, R., Parker, R., Vogel, A. P., Eising, E., Amor, D. J., Irvine, J., Fisher, S. E., Martin, N. G., Reilly, S., Bahlo, M., Scheffer, I., & Morgan, A. (2024). Self-reported stuttering severity is accurate: Informing methods for large-scale data collection in stuttering. Journal of Speech, Language, and Hearing Research, 67, 4015-4024. doi:10.1044/2023_JSLHR-23-00081.

    Abstract

    Purpose:
    To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity compared to clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.

    Method:
    Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5–84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.

    Results:
    There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.

    Conclusions:
    Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-report stuttering severity is a suitable method for large-scale data collection. Findings also support the collection of self-report subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone.
  • De Hoyos, L., Barendse, M. T., Schlag, F., Van Donkelaar, M. M. J., Verhoef, E., Shapland, C. Y., Klassmann, A., Buitelaar, J., Verhulst, B., Fisher, S. E., Rai, D., & St Pourcain, B. (2024). Structural models of genome-wide covariance identify multiple common dimensions in autism. Nature Communications, 15: 1770. doi:10.1038/s41467-024-46128-8.

    Abstract

    Common genetic variation has been associated with multiple symptoms in Autism Spectrum Disorder (ASD). However, our knowledge of shared genetic factor structures contributing to this highly heterogeneous neurodevelopmental condition is limited. Here, we developed a structural equation modelling framework to directly model genome-wide covariance across core and non-core ASD phenotypes, studying autistic individuals of European descent using a case-only design. We identified three independent genetic factors most strongly linked to language/cognition, behaviour and motor development, respectively, when studying a population-representative sample (N=5,331). These analyses revealed novel associations. For example, developmental delay in acquiring personal-social skills was inversely related to language, while developmental motor delay was linked to self-injurious behaviour. We largely confirmed the three-factorial structure in independent ASD-simplex families (N=1,946), but uncovered simplex-specific genetic overlap between behaviour and language phenotypes. Thus, the common genetic architecture in ASD is multi-dimensional and contributes, in combination with ascertainment-specific patterns, to phenotypic heterogeneity.
  • Huettig, F., & Hulstijn, J. (2024). The Enhanced Literate Mind Hypothesis. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12731.

    Abstract

    In the present paper we describe the Enhanced Literate Mind (ELM) hypothesis. As individuals learn to read and write, they are, from then on, exposed to extensive written-language input and become literate. We propose that acquisition and proficient processing of written language (‘literacy’) leads to both increased language knowledge and enhanced language and non-language (perceptual and cognitive) skills. We also suggest that all neurotypical native language users, including illiterate, low literate, and high literate individuals, share a Basic Language Cognition (BLC) in the domain of oral informal language. Finally, we discuss the possibility that the acquisition of ELM leads to some degree of ‘knowledge parallelism’ between BLC and ELM in literate language users, which has implications for empirical research on individual and situational differences in spoken language processing.
  • Huettig, F., & Christiansen, M. H. (2024). Can large language models counter the recent decline in literacy levels? An important role for cognitive science. Cognitive Science, 48(8): e13487. doi:10.1111/cogs.13487.

    Abstract

    Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by challenging and informing current educational practices and policy. Cognitive scientists have the right interdisciplinary skills to study, analyze, evaluate, and change LLMs to facilitate their critical use, to encourage turn-taking that promotes rather than hinders literacy, to support literacy acquisition in diverse and equitable ways, and to scaffold potential future changes in what it means to be literate. We urge cognitive scientists to take up this mantle—the future impact of LLMs on human literacy skills is too important to be left to the large, predominantly US-based tech companies.
  • Jadoul, Y., De Boer, B., & Ravignani, A. (2024). Parselmouth for bioacoustics: Automated acoustic analysis in Python. Bioacoustics, 33(1), 1-19. doi:10.1080/09524622.2023.2259327.

    Abstract

    Bioacoustics increasingly relies on large datasets and computational methods. The need to batch-process large amounts of data and the increased focus on algorithmic processing require software tools. To optimally assist in a bioacoustician’s workflow, software tools need to be as simple and effective as possible. Five years ago, the Python package Parselmouth was released to provide easy and intuitive access to all functionality in the Praat software. Whereas Praat is principally designed for phonetics and speech processing, plenty of bioacoustics studies have used its advanced acoustic algorithms. Here, we evaluate existing usage of Parselmouth and discuss in detail several studies which used the software library. We argue that Parselmouth has the potential to be used even more in bioacoustics research, and suggest future directions to be pursued with the help of Parselmouth.
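    The kind of batch-friendly workflow the paper describes can be illustrated in a few lines: Parselmouth exposes Praat's acoustic algorithms (here, its pitch tracker) directly as Python objects. This is a hedged sketch, not code from the paper; the synthetic 440 Hz tone stands in for a real recording, and `parselmouth.Sound` also accepts a path to a sound file:

    ```python
    import numpy as np
    import parselmouth

    # Build a one-second synthetic 440 Hz tone instead of loading a recording.
    fs = 44100
    t = np.arange(fs) / fs
    snd = parselmouth.Sound(np.sin(2 * np.pi * 440 * t), sampling_frequency=fs)

    # Run Praat's pitch analysis and extract the F0 track (0 = unvoiced frames).
    pitch = snd.to_pitch()
    f0 = pitch.selected_array['frequency']
    voiced = f0[f0 > 0]
    print(round(float(np.median(voiced)), 1))  # close to 440.0
    ```

    Because each analysis is an ordinary Python call returning numpy arrays, looping the same few lines over a directory of recordings gives the batch processing that the authors argue bioacousticians increasingly need.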
  • Janse, E. (2003). Word perception in natural-fast and artificially time-compressed speech. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of the Phonetic Sciences (pp. 3001-3004).
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Jansen, M. G., Zwiers, M. P., Marques, J. P., Chan, K.-S., Amelink, J., Altgassen, M., Oosterman, J. M., & Norris, D. G. (2024). The Advanced BRain Imaging on ageing and Memory (ABRIM) data collection: Study protocol and rationale. PLOS ONE, 19(6): e0306006. doi:10.1371/journal.pone.0306006.

    Abstract

    To understand the neurocognitive mechanisms that underlie heterogeneity in cognitive ageing, recent scientific efforts have led to a growing public availability of imaging cohort data. The Advanced BRain Imaging on ageing and Memory (ABRIM) project aims to add to these existing datasets by taking an adult lifespan approach to provide a cross-sectional, normative database with a particular focus on connectivity, myelinization and iron content of the brain in concurrence with cognitive functioning, mechanisms of reserve, and sleep-wake rhythms. ABRIM freely shares MRI and behavioural data from 295 participants between 18–80 years, stratified by age decade and sex (median age 52, IQR 36–66, 53.20% females). The ABRIM MRI collection consists of both the raw and pre-processed structural and functional MRI data to facilitate data usage among both expert and non-expert users. The ABRIM behavioural collection includes measures of cognitive functioning (i.e., global cognition, processing speed, executive functions, and memory), proxy measures of cognitive reserve (e.g., educational attainment, verbal intelligence, and occupational complexity), and various self-reported questionnaires (e.g., on depressive symptoms, pain, and the use of memory strategies in daily life and during a memory task). In a sub-sample (n = 120), we recorded sleep-wake rhythms using an actigraphy device (Actiwatch 2, Philips Respironics) for a period of 7 consecutive days. Here, we provide an in-depth description of our study protocol, pre-processing pipelines, and data availability. ABRIM provides a cross-sectional database on healthy participants throughout the adult lifespan, including numerous parameters relevant to improve our understanding of cognitive ageing. Therefore, ABRIM enables researchers to model the advanced imaging parameters and cognitive topologies as a function of age, identify the normal range of values of such parameters, and to further investigate the diverse mechanisms of reserve and resilience.
  • Jara-Ettinger, J., & Rubio-Fernandez, P. (2024). Demonstratives as attention tools: Evidence of mentalistic representations in language. Proceedings of the National Academy of Sciences of the United States of America, 121(32): e2402068121. doi:10.1073/pnas.2402068121.

    Abstract

    Linguistic communication is an intrinsically social activity that enables us to share thoughts across minds. Many complex social uses of language can be captured by domain-general representations of other minds (i.e., mentalistic representations) that externally modulate linguistic meaning through Gricean reasoning. However, here we show that representations of others’ attention are embedded within language itself. Across ten languages, we show that demonstratives—basic grammatical words (e.g., “this”/“that”) which are evolutionarily ancient, learned early in life, and documented in all known languages—are intrinsic attention tools. Beyond their spatial meanings, demonstratives encode both joint attention and the direction in which the listener must turn to establish it. Crucially, the frequency of the spatial and attentional uses of demonstratives varies across languages, suggesting that both spatial and mentalistic representations are part of their conventional meaning. Using computational modeling, we show that mentalistic representations of others’ attention are internally encoded in demonstratives, with their effect further boosted by Gricean reasoning. Yet, speakers are largely unaware of this, incorrectly reporting that they primarily capture spatial representations. Our findings show that representations of other people’s cognitive states (namely, their attention) are embedded in language and suggest that the most basic building blocks of the linguistic system crucially rely on social cognition.

    Additional information

    pnas.2402068121.sapp.pdf
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Johnson, E. K. (2003). Speaker intent influences infants' segmentation of potentially ambiguous utterances. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1995-1998). Adelaide: Causal Productions.