Ganushchak, L. Y., Krott, A., & Meyer, A. S. (2010). Electroencephalographic responses to SMS shortcuts. Brain Research, 1348, 120-127. doi:10.1016/j.brainres.2010.06.026.
Abstract
As the popularity of sending messages electronically increases, so does the necessity of conveying messages more efficiently. One way of increasing efficiency is to abbreviate words and expressions by combining letters with numbers such as gr8 for “great,” using acronyms, such as lol for “laughing out loud,” or clippings such as msg for “message.” The present study compares the processing of shortcuts to the processing of closely matched pseudo-shortcuts. ERPs were recorded while participants were performing a lexical decision task. Response times showed that shortcuts were categorized more slowly as nonwords than pseudo-shortcuts. The ERP results showed no differences between shortcuts and pseudo-shortcuts at time windows 50–150 ms and 150–270 ms, but there were significant differences between 270 and 500 ms. These results suggest that at early stages of word recognition, the orthographic and phonological processing is similar for shortcuts and pseudo-shortcuts. However, at the time of lexical access, shortcuts diverge from pseudo-shortcuts, suggesting that shortcuts activate stored lexical representations. -
Ganushchak, L. Y., & Schiller, N. O. (2010). Detection of speech errors in the speech of others: An ERP study. NeuroImage, 49, 3331-3337. doi:10.1016/j.neuroimage.2009.11.063.
Abstract
The current event-related brain potential study examined the processing of observed speech errors.
Participants were asked to detect errors in the speech of others while listening to the description of a visual
network. Networks consisted of colored drawings of objects connected by straight or curved lines. We
investigated the processing of two types of errors in the network descriptions, i.e., incorrect color and errors
in determiner usage (gender agreement violations). In the 100- to 300-ms and 300- to 550-ms time
windows, we found larger PMN and N400 amplitudes for both color and determiner error trials compared to
correct trials. Furthermore, color but not determiner errors led to larger P600 amplitudes compared to
correct color trials. Color errors also showed enhanced P600 amplitudes compared to determiner errors.
Taken together, processing erroneous network descriptions elicits different brain potentials than listening to
the corresponding correct utterances. Hence, speech is monitored for errors not only during speech
production but also during listening to the naturally occurring speech of others. -
Ganushchak, L. Y., Krott, A., & Meyer, A. S. (2010). Is it a letter? Is it a number? Processing of numbers within SMS shortcuts. Psychonomic Bulletin & Review, 17, 101-105. doi:10.3758/PBR.17.1.101.
Abstract
For efficiency reasons, words in electronic messages are sometimes formed by combining letters with numbers, as in gr8 for “great.” The aim of this study was to investigate whether a digit incorporated into a letter-digit shortcut would retain its numerosity. A priming paradigm was used with letter-digit shortcuts (e.g., gr8) and matched pseudoshortcuts (e.g., qr8) as primes. The primes were presented simultaneously with sets of dots (targets) for which even/odd decisions were required, or they appeared 250 msec before target onset. When pseudoshortcuts were presented, decision latencies were shorter when the target and the digit in the prime were matched in parity than when they were mismatched. This main effect of match was not significant for shortcuts. The results suggest that the number concepts of digits combined with letters become activated but are quickly suppressed or deactivated when the digit is part of an existing shortcut. -
García-Marín, L. M., Campos, A. I., Diaz-Torres, S., Rabinowitz, J. A., Ceja, Z., Mitchell, B. L., Grasby, K. L., Thorp, J. G., Agartz, I., Alhusaini, S., Ames, D., Amouyel, P., Andreassen, O. A., Arfanakis, K., Arias Vasquez, A., Armstrong, N. J., Athanasiu, L., Bastin, M. E., Beiser, A. S., Bennett, D. A., Bis, J. C., Boks, M. P. M., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Buitelaar, J. K., Burkhardt, R., Cahn, W., Calhoun, V. D., Carmichael, O. T., Chakravarty, M., Chen, Q., Ching, C. R. K., Cichon, S., Crespo-Facorro, B., Crivello, F., Dale, A. M., Smith, G. D., De Geus, E. J. C., De Jager, P. L., De Zubicaray, G. I., Debette, S., DeCarli, C., Depondt, C., Desrivières, S., Djurovic, S., Ehrlich, S., Erk, S., Espeseth, T., Fernández, G., Filippi, I., Fisher, S. E., Fleischman, D. A., Fletcher, E., Fornage, M., Forstner, A. J., Francks, C., Franke, B., Ge, T., Goldman, A. L., Grabe, H. J., Green, R. C., Grimm, O., Groenewold, N. A., Gruber, O., Gudnason, V., Håberg, A. K., Haukvik, U. K., Heinz, A., Hibar, D. P., Hilal, S., Himali, J. J., Ho, B.-C., Hoehn, D. F., Hoekstra, P. J., Hofer, E., Hoffmann, W., Holmes, A. J., Homuth, G., Hosten, N., Ikram, M. K., Ipser, J. C., Jack Jr, C. R., Jahanshad, N., Jönsson, E. G., Kahn, R. S., Kanai, R., Klein, M., Knol, M. J., Launer, L. J., Lawrie, S. M., Le Hellard, S., Lee, P. H., Lemaître, H., Li, S., Liewald, D. C. M., Lin, H., Longstreth Jr, W. T., Lopez, O. L., Luciano, M., Maillard, P., Marquand, A. F., Martin, N. G., Martinot, J.-L., Mather, K. A., Mattay, V. S., McMahon, K. L., Mecocci, P., Melle, I., Meyer-Lindenberg, A., Mirza-Schreiber, N., Milaneschi, Y., Mosley, T. H., Mühleisen, T.
W., Müller-Myhsok, B., Muñoz Maniega, S., Nauck, M., Nho, K., Niessen, W. J., Nöthen, M. M., Nyquist, P. A., Oosterlaan, J., Pandolfo, M., Paus, T., Pausova, Z., Penninx, B. W. J. H., Pike, G. B., Psaty, B. M., Pütz, B., Reppermund, S., Rietschel, M. D., Risacher, S. L., Romanczuk-Seiferth, N., Romero-Garcia, R., Roshchupkin, G. V., Rotter, J. I., Sachdev, P. S., Sämann, P. G., Saremi, A., Sargurupremraj, M., Saykin, A. J., Schmaal, L., Schmidt, H., Schmidt, R., Schofield, P. R., Scholz, M., Schumann, G., Schwarz, E., Shen, L., Shin, J., Sisodiya, S. M., Smith, A. V., Smoller, J. W., Soininen, H. S., Steen, V. M., Stein, D. J., Stein, J. L., Thomopoulos, S. I., Toga, A., Tordesillas-Gutiérrez, D. T., Trollor, J. N., Valdes-Hernandez, M. C., Van 't Ent, D., Van Bokhoven, H., Van der Meer, D., Van der Wee, N. J. A., Vázquez-Bourgon, J., Veltman, D. J., Vernooij, M. W., Villringer, A., Vinke, L. N., Völzke, H., Walter, H., Wardlaw, J. M., Weinberger, D. R., Weiner, M. W., Wen, W., Westlye, L. T., Westman, E., White, T., Witte, A. V., Wolf, C., Yang, J., Zwiers, M. P., Ikram, M. A., Seshadri, S., Thompson, P. M., Satizabal, C. L., Medland, S. E., & Rentería, M. E. (2024). Genomic analysis of intracranial and subcortical brain volumes yields polygenic scores accounting for brain variation across ancestries. Nature Genetics, 56, 2333-2344. doi:10.1038/s41588-024-01951-z.
Abstract
Subcortical brain structures are involved in developmental, psychiatric and neurological disorders. Here we performed genome-wide association studies meta-analyses of intracranial and nine subcortical brain volumes (brainstem, caudate nucleus, putamen, hippocampus, globus pallidus, thalamus, nucleus accumbens, amygdala and the ventral diencephalon) in 74,898 participants of European ancestry. We identified 254 independent loci associated with these brain volumes, explaining up to 35% of phenotypic variance. We observed gene expression in specific neural cell types across differentiation time points, including genes involved in intracellular signaling and brain aging-related processes. Polygenic scores for brain volumes showed predictive ability when applied to individuals of diverse ancestries. We observed causal genetic effects of brain volumes with Parkinson’s disease and attention-deficit/hyperactivity disorder. Findings implicate specific gene expression patterns in brain development and genetic variants in comorbid neuropsychiatric disorders, which could point to a brain substrate and region of action for risk genes implicated in brain diseases. -
Gaub, S., Groszer, M., Fisher, S. E., & Ehret, G. (2010). The structure of innate vocalizations in Foxp2-deficient mouse pups. Genes, Brain and Behavior, 9, 390-401. doi:10.1111/j.1601-183X.2010.00570.x.
Abstract
Heterozygous mutations of the human FOXP2 gene are implicated in a severe speech and language disorder. Aetiological mutations of murine Foxp2 yield abnormal synaptic plasticity and impaired motor-skill learning in mutant mice, while knockdown of the avian orthologue in songbirds interferes with auditory-guided vocal learning. Here, we investigate influences of two distinct Foxp2 point mutations on vocalizations of 4-day-old mouse pups (Mus musculus). The R552H missense mutation is identical to that causing speech and language deficits in a large well-studied human family, while the S321X nonsense mutation represents a null allele that does not produce Foxp2 protein. We ask whether vocalizations, based solely on innate mechanisms of production, are affected by these alternative Foxp2 mutations. Sound recordings were taken in two different situations: isolation and distress, eliciting a range of call types, including broadband vocalizations of varying noise content, ultrasonic whistles and clicks. Sound production rates and several acoustic parameters showed that, despite absence of functional Foxp2, homozygous mutants could vocalize all types of sounds in a normal temporal pattern, but only at comparably low intensities. We suggest that altered vocal output of these homozygotes may be secondary to developmental delays and somatic weakness. Heterozygous mutants did not differ from wild-types in any of the measures that we studied (R552H ) or in only a few (S321X ), which were in the range of differences routinely observed for different mouse strains. Thus, Foxp2 is not essential for the innate production of emotional vocalizations with largely normal acoustic properties by mouse pups. -
Gebre, B. G. (2010). Part of speech tagging for Amharic. Master's thesis, University of Wolverhampton, Wolverhampton.
-
Geurts, H. M., Broeders, M., & Nieuwland, M. S. (2010). Thinking outside the executive functions box: Theory of mind and pragmatic abilities in attention deficit/hyperactivity disorder. European Journal of Developmental Psychology, 7(1), 135-151. doi:10.1080/17405620902906965.
Abstract
An endophenotype for attention deficit/hyperactivity disorder (AD/HD) is executive functioning. In the autism and developmental literature, executive dysfunction has also been linked to theory of mind (ToM) and pragmatic language use. The central question of this review is whether deficits in ToM and pragmatic language use are common in AD/HD. AD/HD seems to be associated with pragmatic deficits, but not with ToM deficits. In this review, we address how this pattern of findings might facilitate the understanding of the commonalities and differences between executive functioning, ToM, and pragmatic abilities. Based on the reviewed studies, we conclude that ToM is not likely to be a potential endophenotype for AD/HD, while it is too early to draw such a conclusion for pragmatic language use. -
Ghaleb, E., Rasenberg, M., Pouw, W., Toni, I., Holler, J., Özyürek, A., & Fernandez, R. (2024). Analysing cross-speaker convergence through the lens of automatically detected shared linguistic constructions. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1717-1723).
Abstract
Conversation requires a substantial amount of coordination between dialogue participants, from managing turn taking to negotiating mutual understanding. Part of this coordination effort surfaces as the reuse of linguistic behaviour across speakers, a process often referred to as alignment. While the presence of linguistic alignment is well documented in the literature, several questions remain open, including the extent to which patterns of reuse across speakers have an impact on the emergence of labelling conventions for novel referents. In this study, we put forward a methodology for automatically detecting shared lemmatised constructions (expressions with a common lexical core used by both speakers within a dialogue) and apply it to a referential communication corpus where participants aim to identify novel objects for which no established labels exist. Our analyses uncover the usage patterns of shared constructions in interaction and reveal that features such as their frequency and the number of different constructions used for a referent are associated with the degree of object labelling convergence the participants exhibit after social interaction. More generally, the present study shows that automatically detected shared constructions offer a useful level of analysis to investigate the dynamics of reference negotiation in dialogue. -
Ghaleb, E., Burenko, I., Rasenberg, M., Pouw, W., Uhrig, P., Holler, J., Toni, I., Ozyurek, A., & Fernandez, R. (2024). Co-speech gesture detection through multi-phase sequence labeling. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024) (pp. 4007-4015).
Abstract
Gestures are integral components of face-to-face communication. They unfold over time, often following predictable movement phases of preparation, stroke, and retraction. Yet, the prevalent approach to automatic gesture detection treats the problem as binary classification, classifying a segment as either containing a gesture or not, thus failing to capture its inherently sequential and contextual nature. To address this, we introduce a novel framework that reframes the task as a multi-phase sequence labeling problem rather than binary classification. Our model processes sequences of skeletal movements over time windows, uses Transformer encoders to learn contextual embeddings, and leverages Conditional Random Fields to perform sequence labeling. We evaluate our proposal on a large dataset of diverse co-speech gestures in task-oriented face-to-face dialogues. The results consistently demonstrate that our method significantly outperforms strong baseline models in detecting gesture strokes. Furthermore, applying Transformer encoders to learn contextual embeddings from movement sequences substantially improves gesture unit detection. These results highlight our framework’s capacity to capture the fine-grained dynamics of co-speech gesture phases, paving the way for more nuanced and accurate gesture detection and analysis. -
Giglio, L., Ostarek, M., Sharoh, D., & Hagoort, P. (2024). Diverging neural dynamics for syntactic structure building in naturalistic speaking and listening. Proceedings of the National Academy of Sciences of the United States of America, 121(11): e2310766121. doi:10.1073/pnas.2310766121.
Abstract
The neural correlates of sentence production have been mostly studied with constraining task paradigms that introduce artificial task effects. In this study, we aimed to gain a better understanding of syntactic processing in spontaneous production vs. naturalistic comprehension. We extracted word-by-word metrics of phrase-structure building with top-down and bottom-up parsers that make different hypotheses about the timing of structure building. In comprehension, structure building proceeded in an integratory fashion and led to an increase in activity in posterior temporal and inferior frontal areas. In production, structure building was anticipatory and predicted an increase in activity in the inferior frontal gyrus. Newly developed production-specific parsers highlighted the anticipatory and incremental nature of structure building in production, which was confirmed by a converging analysis of the pausing patterns in speech. Overall, the results showed that the unfolding of syntactic processing diverges between speaking and listening. -
Giglio, L., Sharoh, D., Ostarek, M., & Hagoort, P. (2024). Connectivity of fronto-temporal regions in syntactic structure building during speaking and listening. Neurobiology of Language, 5(4), 922-941. doi:10.1162/nol_a_00154.
Abstract
The neural infrastructure for sentence production and comprehension has been found to be mostly shared. The same regions are engaged during speaking and listening, with some differences in how strongly they activate depending on modality. In this study, we investigated how modality affects the connectivity between regions previously found to be involved in syntactic processing across modalities. We determined how constituent size and modality affected the connectivity of the pars triangularis of the left inferior frontal gyrus (LIFG) and of the left posterior temporal lobe (LPTL) with the pars opercularis of the LIFG, the anterior temporal lobe (LATL) and the rest of the brain. We found that constituent size reliably increased the connectivity across these frontal and temporal ROIs. Connectivity between the two LIFG regions and the LPTL was enhanced as a function of constituent size in both modalities, and it was upregulated in production possibly because of linearization and motor planning in the frontal cortex. The connectivity of both ROIs with the LATL was lower and only enhanced for larger constituent sizes, suggesting a contributing role of the LATL in sentence processing in both modalities. These results thus show that the connectivity among fronto-temporal regions is upregulated for syntactic structure building in both sentence production and comprehension, providing further evidence for accounts of shared neural resources for sentence-level processing across modalities. -
Giglio, L., Hagoort, P., & Ostarek, M. (2024). Neural encoding of semantic structures during sentence production. Cerebral Cortex, 34(12): bhae482. doi:10.1093/cercor/bhae482.
Abstract
The neural representations for compositional processing have so far been mostly studied during sentence comprehension. In an fMRI study of sentence production, we investigated the brain representations for compositional processing during speaking. We used a rapid serial visual presentation sentence recall paradigm to elicit sentence production from the conceptual memory of an event. With voxel-wise encoding models, we probed the specificity of the compositional structure built during the production of each sentence, comparing an unstructured model of word meaning without relational information with a model that encodes abstract thematic relations and a model encoding event-specific relational structure. Whole-brain analyses revealed that sentence meaning at different levels of specificity was encoded in a large left frontal-parietal-temporal network. A comparison with semantic structures composed during the comprehension of the same sentences showed similarly distributed brain activity patterns. An ROI analysis over left fronto-temporal language parcels showed that event-specific relational structure above word-specific information was encoded in the left inferior frontal gyrus. Overall, we found evidence for the encoding of sentence meaning during sentence production in a distributed brain network and for the encoding of event-specific semantic structures in the left inferior frontal gyrus. -
Glaser, B., Ades, A. E., Lewis, S., Emmet, P., Lewis, G., Smith, G. D., & Zammit, S. (2010). Perinatal folate-related exposures and risk of psychotic symptoms in the ALSPAC birth cohort. Schizophrenia Research, 120, 177-183. doi:10.1016/j.schres.2010.03.006.
Abstract
BACKGROUND: It is unclear to what extent non-clinical psychotic experiences during childhood and adolescence share underlying aetiological mechanisms with schizophrenia. One candidate mechanism for schizophrenia involves the epigenetic status of the developing fetus, which depends on the internal folate-status of mother and child. Our study examines the relationships between multiple determinants of perinatal folate-status and development of psychotic experiences in adolescence. METHODS: Study participants were up to 5344 mother-child pairs from the Avon Longitudinal Study of Parents and their Children, UK, with information on maternal and/or child MTHFR C677T genotype, maternal folate intake (supplementation at 18/32 weeks gestation; dietary intake at 32 weeks gestation) and psychosis-like symptoms (PLIKS) for children assessed at age 12. RESULTS: Nominal evidence was observed that maternal folate supplementation at 18 weeks increased the odds of PLIKS in children (odds ratio (OR)=1.34; 95%-CI: [1.00;1.76]) and, consistent with this, that children of MTHFR C677T TT homozygous mothers had decreased odds of PLIKS (OR=0.72; 95%-CI: [0.50;1.02]; recessive model), with strongest effects in boys (OR=0.44, 95%-CI: [0.22;0.79]; sex-specific p=0.029). None of the reported effects remained significant when corrected for multiple testing. CONCLUSIONS: Overall, this study found no support that maternal/child MTHFR C677T genotype and maternal folate intake during pregnancy contribute to common aetiological pathways that are shared between schizophrenia and non-clinical psychotic symptoms in adolescents, assuming that decreased folate-status increases schizophrenia risk. -
Glaser, B., Shelton, K. H., & van den Bree, M. B. M. (2010). The moderating role of close friends in the relationship between conduct problems and adolescent substance use. Journal of Adolescent Health, 47(1), 35-42. doi:10.1016/j.jadohealth.2009.12.022.
Abstract
PURPOSE: Conduct problems and peer effects are among the strongest risk factors for adolescent substance use and problem use. However, it is unclear to what extent the effects of conduct problems and peer behavior interact, and whether adolescents' capacity to refuse the offer of substances may moderate such links. This study was conducted to examine relationships between conduct problems, close friends' substance use, and refusal assertiveness with adolescents' alcohol use problems, tobacco, and marijuana use. METHODS: We studied a population-based sample of 1,237 individuals from the Cardiff Study of All Wales and North West of England Twins aged 11-18 years. Adolescent and mother-reported information was obtained. Statistical analyses included cross-sectional and prospective logistic regression models and family-based permutations. RESULTS: Conduct problems and close friends' substance use were associated with increased adolescents' substance use, whereas refusal assertiveness was associated with lower use of cigarettes, alcohol, and marijuana. Peer substance use moderated the relationship between conduct problems and alcohol use problems, such that conduct problems were only related to increased risk for alcohol use problems in the presence of substance-using friends. This effect was found in both cross-sectional and prospective analyses and confirmed using the permutation approach. CONCLUSIONS: Reduced opportunities for interaction with alcohol-using peers may lower the risk of alcohol use problems in adolescents with conduct problems. -
Goltermann*, O., Alagöz*, G., Molz, B., & Fisher, S. E. (2024). Neuroimaging genomics as a window into the evolution of human sulcal organization. Cerebral Cortex, 34(3): bhae078. doi:10.1093/cercor/bhae078.
Abstract
* Ole Goltermann and Gökberk Alagöz contributed equally.
Primate brain evolution has involved prominent expansions of the cerebral cortex, with largest effects observed in the human lineage. Such expansions were accompanied by fine-grained anatomical alterations, including increased cortical folding. However, the molecular bases of evolutionary alterations in human sulcal organization are not yet well understood. Here, we integrated data from recently completed large-scale neuroimaging genetic analyses with annotations of the human genome relevant to various periods and events in our evolutionary history. These analyses identified single-nucleotide polymorphism (SNP) heritability enrichments in fetal brain human-gained enhancer (HGE) elements for a number of sulcal structures, including the central sulcus, which is implicated in human hand dexterity. We zeroed in on a genomic region that harbors DNA variants associated with left central sulcus shape, an HGE element, and genetic loci involved in neurogenesis including ZIC4, to illustrate the value of this approach for probing the complex factors contributing to human sulcal evolution. -
Goncharova, M. V., Jadoul, Y., Reichmuth, C., Fitch, W. T., & Ravignani, A. (2024). Vocal tract dynamics shape the formant structure of conditioned vocalizations in a harbor seal. Annals of the New York Academy of Sciences, 1538(1), 107-116. doi:10.1111/nyas.15189.
Abstract
Formants, or resonance frequencies of the upper vocal tract, are an essential part of acoustic communication. Articulatory gestures, such as jaw, tongue, lip, and soft palate movements, shape formant structure in human vocalizations, but little is known about how nonhuman mammals use those gestures to modify formant frequencies. Here, we report a case study with an adult male harbor seal trained to produce an arbitrary vocalization composed of multiple repetitions of the sound wa. We analyzed jaw movements frame-by-frame and matched them to the tracked formant modulation in the corresponding vocalizations. We found that the jaw opening angle was strongly correlated with the first formant (F1) and, to a lesser degree, with the second formant (F2). F2 variation was better explained by the jaw opening angle when the seal was lying on his back rather than on his belly, which might derive from soft tissue displacement due to gravity. These results show that harbor seals share some common articulatory traits with humans, where F1 depends more on jaw position than F2. We propose further in vivo investigations of seals to test the role of the tongue in formant modulation in mammalian sound production. -
González-Peñas, J., Alloza, C., Brouwer, R., Díaz-Caneja, C. M., Costas, J., González-Lois, N., Gallego, A. G., De Hoyos, L., Gurriarán, X., Andreu-Bernabeu, Á., Romero-García, R., Fañanas, L., Bobes, J., Pinto, A. G., Crespo-Facorro, B., Martorell, L., Arrojo, M., Vilella, E., Guitiérrez-Zotes, A., Perez-Rando, M., Moltó, M. D., CIBERSAM group, Buimer, E., Van Haren, N., Cahn, W., O’Donovan, M., Kahn, R. S., Arango, C., Hulshoff Pol, H., Janssen, J., & Schnack, H. (2024). Accelerated cortical thinning in schizophrenia is associated with rare and common predisposing variation to schizophrenia and neurodevelopmental disorders. Biological Psychiatry, 96(5), 376-389. doi:10.1016/j.biopsych.2024.03.011.
Abstract
Background
Schizophrenia is a highly heritable disorder characterized by increased cortical thinning throughout the lifespan. Studies have reported a shared genetic basis between schizophrenia and cortical thickness. However, no genes whose expression is related to abnormal cortical thinning in schizophrenia have been identified.
Methods
We used linear mixed models to estimate the rates of accelerated cortical thinning across 68 regions from the Desikan-Killiany atlas in individuals with schizophrenia compared to healthy controls from a large longitudinal sample (NCases = 169 and NControls = 298, aged 16-70 years). We studied the correlation between gene expression data from the Allen Human Brain Atlas and accelerated thinning estimates across cortical regions. Finally, we explored the functional and genetic underpinnings of the genes contributing most to accelerated thinning.
Results
We described a global pattern of accelerated cortical thinning in individuals with schizophrenia compared to healthy controls. Genes underexpressed in cortical regions exhibiting this accelerated thinning were downregulated in several psychiatric disorders and were enriched for both common and rare disrupting variation for schizophrenia and neurodevelopmental disorders. In contrast, none of these enrichments were observed for baseline cross-sectional cortical thickness differences.
Conclusions
Our findings suggest that accelerated cortical thinning, rather than cortical thickness alone, serves as an informative phenotype for neurodevelopmental disruptions in schizophrenia. We highlight the genetic and transcriptomic correlates of this accelerated cortical thinning, emphasizing the need for future longitudinal studies to elucidate the role of genetic variation and the temporal-spatial dynamics of gene expression in brain development and aging in schizophrenia. -
Gordon, J. K., & Clough, S. (2024). The Flu-ID: A new evidence-based method of assessing fluency in aphasia. American Journal of Speech-Language Pathology, 33, 2972-2990. doi:10.1044/2024_AJSLP-23-00424.
Abstract
Purpose:
Assessing fluency in aphasia is diagnostically important for determining aphasia type and severity and therapeutically important for determining appropriate treatment targets. However, wide variability in the measures and criteria used to assess fluency, as revealed by a recent survey of clinicians (Gordon & Clough, 2022), results in poor reliability. Furthermore, poor specificity in many fluency measures makes it difficult to identify the underlying impairments. Here, we introduce the Flu-ID Aphasia, an evidence-based tool that provides a more informative method of assessing fluency by capturing the range of behaviors that can affect the flow of speech in aphasia.
Method:
The development of the Flu-ID was based on prior evidence about factors underlying fluency (Clough & Gordon, 2020; Gordon & Clough, 2020) and clinical perceptions about the measurement of fluency (Gordon & Clough, 2022). Clinical utility is maximized by automated counting of fluency behaviors in an Excel template. Reliability is maximized by outlining thorough guidelines for transcription and coding. Eighteen narrative samples representing a range of fluency were coded independently by the authors to examine the Flu-ID's utility, reliability, and validity.
Results:
Overall reliability was very good, with point-to-point agreement of 86% between coders. Ten of the 12 dimensions showed good to excellent reliability. Validity analyses indicated that Flu-ID scores were similar to clinician ratings on some dimensions, but differed on others. Possible reasons and implications of the discrepancies are discussed, along with opportunities for improvement.
Conclusions:
The Flu-ID assesses fluency in aphasia using a consistent and comprehensive set of measures and semi-automated procedures to generate individual fluency profiles. The profiles generated in the current study illustrate how similar ratings of fluency can arise from different underlying impairments. Supplemental materials include an analysis template, extensive guidelines for transcription and coding, a completed sample, and a quick reference guide. -
Goudbeek, M., & Broersma, M. (2010). The Demo/Kemo corpus: A principled approach to the study of cross-cultural differences in the vocal expression and perception of emotion. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010) (pp. 2211-2215). Paris: ELRA.
Abstract
This paper presents the Demo / Kemo corpus of Dutch and Korean emotional speech. The corpus has been specifically developed for the purpose of cross-linguistic comparison, and is more balanced than any similar corpus available so far: a) it contains expressions by both Dutch and Korean actors as well as judgments by both Dutch and Korean listeners; b) the same elicitation technique and recording procedure was used for recordings of both languages; c) the same nonsense sentence, which was constructed to be permissible in both languages, was used for recordings of both languages; and d) the emotions present in the corpus are balanced in terms of valence, arousal, and dominance. The corpus contains a comparatively large number of emotions (eight) uttered by a large number of speakers (eight Dutch and eight Korean). The counterbalanced nature of the corpus will enable a stricter investigation of language-specific versus universal aspects of emotional expression than was possible so far. Furthermore, given the carefully controlled phonetic content of the expressions, it allows for analysis of the role of specific phonetic features in emotional expression in Dutch and Korean. -
De Gregorio, C., Raimondi, T., Bevilacqua, V., Pertosa, C., Valente, D., Carugati, F., Bandoli, F., Favaro, L., Lefaux, B., Ravignani, A., & Gamba, M. (2024). Isochronous singing in 3 crested gibbon species (Nomascus spp.). Current Zoology, 70(3), 291-297. doi:10.1093/cz/zoad029.
Abstract
The search for common characteristics between the musical abilities of humans and other animal species is still taking its first steps. One of the most promising aspects from a comparative point of view is the analysis of rhythmic components, which are crucial features of human communicative performance but also well-identifiable patterns in the vocal displays of other species. Therefore, the study of rhythm is becoming essential to understand the mechanisms of singing behavior and the evolution of human communication. Recent findings provided evidence that particular rhythmic structures occur in human music and some singing animal species, such as birds and rock hyraxes, but only 2 species of nonhuman primates have been investigated so far (Indri indri and Hylobates lar). Therefore, our study aims to consistently broaden the list of species studied regarding the presence of rhythmic categories. We investigated the temporal organization in the singing of 3 species of crested gibbons (Nomascus gabriellae, Nomascus leucogenys, and Nomascus siki) and found that the most prominent rhythmic category was isochrony. Moreover, we found slight variation in songs’ tempo among species, with N. gabriellae and N. siki singing with a temporal pattern involving a gradually increasing tempo (a musical accelerando), and N. leucogenys with a more regular pattern. Here, we show how the prominence of a peak at the isochrony establishes itself as a shared characteristic in the small apes considered so far. -
De Gregorio, C., Maiolini, M., Raimondi, T., Carugati, F., Miaretsoa, L., Valente, D., Torti, V., Giacoma, C., Ravignani, A., & Gamba, M. (2024). Isochrony as ancestral condition to call and song in a primate. Annals of the New York Academy of Sciences, 1537(1), 41-50. doi:10.1111/nyas.15151.
Abstract
Animal songs differ from calls in function and structure, and have comparative and translational value, showing similarities to human music. Rhythm in music is often distributed in quantized classes of intervals known as rhythmic categories. These classes have been found in the songs of a few nonhuman species but never in their calls. Are rhythmic categories song-specific, as in human music, or can they transcend the song–call boundary? We analyze the vocal displays of one of the few mammals producing both songs and call sequences: Indri indri. We test whether rhythmic categories (a) are conserved across songs produced in different contexts, (b) exist in call sequences, and (c) differ between songs and call sequences. We show that rhythmic categories occur across vocal displays. Vocalization type and function modulate deployment of categories. We find isochrony (1:1 ratio, like the rhythm of a ticking clock) in all song types, but only advertisement songs show three rhythmic categories (1:1, 1:2, 2:1 ratios). Like songs, some call types are also isochronous. Isochrony is the backbone of most indri vocalizations, unlike human speech, where it is rare. In indri, isochrony underlies both songs and hierarchy-less call sequences and might be ancestral to both. -
Groen, W. B., Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van der Gaag, R. J., Hagoort, P., & Buitelaar, J. K. (2010). Semantic, factual, and social language comprehension in adolescents with autism: An fMRI study. Cerebral Cortex, 20(8), 1937-1945. doi:10.1093/cercor/bhp264.
Abstract
Language in high-functioning autism is characterized by pragmatic and semantic deficits, and people with autism have a reduced tendency to integrate information. Because the left and right inferior frontal (LIF and RIF) regions are implicated with integration of speaker information, world knowledge, and semantic knowledge, we hypothesized that abnormal functioning of the LIF and RIF regions might contribute to pragmatic and semantic language deficits in autism. Brain activation of sixteen 12- to 18-year-old, high-functioning autistic participants was measured with functional magnetic resonance imaging during sentence comprehension and compared with that of twenty-six matched controls. The content of the pragmatic sentence was congruent or incongruent with respect to the speaker characteristics (male/female, child/adult, and upper class/lower class). The semantic- and world-knowledge sentences were congruent or incongruent with respect to semantic expectancies and factual expectancies about the world, respectively. In the semantic-knowledge and world-knowledge condition, activation of the LIF region did not differ between groups. In sentences that required integration of speaker information, the autism group showed abnormally reduced activation of the LIF region. The results suggest that people with autism may recruit the LIF region in a different manner in tasks that demand integration of social information. -
Grönberg, D. J., Pinto de Carvalho, S. L., Dernerova, N., Norton, P., Wong, M. M. K., & Mendoza, E. (2024). Expression and regulation of SETBP1 in the song system of male zebra finches (Taeniopygia guttata) during singing. Scientific Reports, 14: 29057. doi:10.1038/s41598-024-75353-w.
Abstract
Rare de novo heterozygous loss-of-function SETBP1 variants lead to a neurodevelopmental disorder characterized by speech deficits, indicating a potential involvement of SETBP1 in human speech. However, the expression pattern of SETBP1 in brain regions associated with vocal learning remains poorly understood, along with the underlying molecular mechanisms linking it to vocal production. In this study, we examined SETBP1 expression in the brain of male zebra finches, a well-established model for studying vocal production learning. We demonstrated that zebra finch SETBP1 exhibits a greater number of exons and isoforms compared to its human counterpart. We characterized a SETBP1 antibody and showed that SETBP1 colocalized with FoxP1, FoxP2, and Parvalbumin in key song nuclei. Moreover, SETBP1 expression in neurons in Area X is significantly higher in zebra finches singing alone than in those singing courtship song to a female or in non-singers. Importantly, we found a distinctive neuronal protein expression of SETBP1 and FoxP2 in Area X only in zebra finches singing alone, but not in the other conditions. We demonstrated SETBP1's regulatory role on FoxP2 promoter activity in vitro. Taken together, these findings provide compelling evidence for SETBP1 expression in brain regions to be crucial for vocal learning and its modulation by singing behavior. -
Grosseck, O., Perlman, M., Ortega, G., & Raviv, L. (2024). The iconic affordances of gesture and vocalization in emerging languages in the lab. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 223-225). Nijmegen: The Evolution of Language Conferences. -
Gubian, M., Bergmann, C., & Boves, L. (2010). Investigating word learning processes in an artificial agent. In Proceedings of the IXth IEEE International Conference on Development and Learning (ICDL). Ann Arbor, MI, 18-21 Aug. 2010 (pp. 178-184). IEEE.
Abstract
Researchers in human language processing and acquisition are making an increasing use of computational models. Computer simulations provide a valuable platform to reproduce hypothesised learning mechanisms that are otherwise very difficult, if not impossible, to verify on human subjects. However, computational models come with problems and risks. It is difficult to (automatically) extract essential information about the developing internal representations from a set of simulation runs, and often researchers limit themselves to analysing learning curves based on empirical recognition accuracy through time. The associated risk is to erroneously deem a specific learning behaviour as generalisable to human learners, while it could also be a mere consequence (artifact) of the implementation of the artificial learner or of the input coding scheme. In this paper a set of simulation runs taken from the ACORNS project is investigated. First a look `inside the box' of the learner is provided by employing novel quantitative methods for analysing changing structures in large data sets. Then, the obtained findings are discussed in the perspective of their ecological validity in the field of child language acquisition. -
Gullberg, M., Roberts, L., Dimroth, C., Veroude, K., & Indefrey, P. (2010). Adult language learning after minimal exposure to an unknown natural language. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 5-24). Malden, MA: Wiley-Blackwell. -
Gullberg, M., Roberts, L., Dimroth, C., Veroude, K., & Indefrey, P. (2010). Adult language learning after minimal exposure to an unknown natural language. Language Learning, 60(S2), 5-24. doi:10.1111/j.1467-9922.2010.00598.x.
Abstract
Despite the literature on the role of input in adult second-language (L2) acquisition and on artificial and statistical language learning, surprisingly little is known about how adults break into a new language in the wild. This article reports on a series of behavioral and neuroimaging studies that examine what linguistic information adults can extract from naturalistic but controlled audiovisual input in an unknown and typologically distant L2 after minimal exposure (7–14 min) without instruction or training. We tested the stepwise development of segmental, phonotactic, and lexical knowledge in Dutch adults after minimal exposure to Mandarin Chinese and the role of item frequency, speech-associated gestures, and word length at the earliest stages of learning. In an exploratory neural connectivity study we further examined the neural correlates of word recognition in a new language, identifying brain regions whose connectivity was related to performance both before and after learning. While emphasizing the complexity of the learning task, the results suggest that the adult learning mechanism is more powerful than is normally assumed when faced with small amounts of complex, continuous audiovisual language input. -
Gullberg, M., De Bot, K., & Volterra, V. (2010). Gestures and some key issues in the study of language development. In M. Gullberg, & K. De Bot (Eds.), Gestures in language development (pp. 3-33). Amsterdam: Benjamins. -
Gullberg, M., & De Bot, K. (Eds.). (2010). Gestures in language development. Amsterdam: Benjamins.
Abstract
Gestures are prevalent in communication and tightly linked to language and speech. As such they can shed important light on issues of language development across the lifespan. This volume, originally published as a Special Issue of Gesture Volume 8:2 (2008), brings together studies from different disciplines that examine language development in children and adults from varying perspectives. It provides a review of common theoretical and empirical themes, and the contributions address topics such as gesture use in prelinguistic infants, the relationship between gestures and lexical development in typically and atypically developing children and in second language learners, what gestures reveal about discourse, and how all languages that adult second language speakers know can influence each other. The papers exemplify a vibrant new field of study with relevance for multiple disciplines. -
Gullberg, M. (2010). Methodological reflections on gesture analysis in second language acquisition and bilingualism research. Second Language Research, 26(1), 75-102. doi:10.1177/0267658309337639.
Abstract
Gestures, the symbolic movements speakers perform while they speak, form a closely inter-connected system with speech where gestures serve both addressee-directed (‘communicative’) and speaker-directed (’internal’) functions. This paper aims (1) to show that a combined analysis of gesture and speech offers new ways to address theoretical issues in SLA and bilingualism studies, probing SLA and bilingualism as product and process; and (2) to outline some methodological concerns and desiderata to facilitate the inclusion of gesture in SLA and bilingualism research. -
Gullberg, M., & Indefrey, P. (Eds.). (2010). The earliest stages of language learning. Malden, MA: Wiley-Blackwell.
Abstract
To understand the nature of language learning, the factors that influence it, and the mechanisms that govern it, it is crucial to study the very earliest stages of language learning. This volume provides a state-of-the art overview of what we know about the cognitive and neurobiological aspects of the adult capacity for language learning. It brings together studies from several fields that examine learning from multiple perspectives using various methods. The papers examine learning after anything from a few minutes to months of language exposure; they target the learning of both artificial and natural languages, involve both explicit and implicit learning, and cover linguistic domains ranging from phonology and semantics to morphosyntax. The findings will inform and extend further studies of language learning in multiple disciplines. -
Gullberg, M., & Indefrey, P. (Eds.). (2010). The earliest stages of language learning [Special Issue]. Language Learning, 60(Supplement s2). -
Gullberg, M., & Narasimhan, B. (2010). What gestures reveal about the development of semantic distinctions in Dutch children's placement verbs. Cognitive Linguistics, 21(2), 239-262. doi:10.1515/COGL.2010.009.
Abstract
Placement verbs describe every-day events like putting a toy in a box. Dutch uses two semi-obligatory caused posture verbs (leggen ‘lay’ and zetten ‘set/stand’) to distinguish between events based on whether the located object is placed horizontally or vertically. Although prevalent in the input, these verbs cause Dutch children difficulties even at age five (Narasimhan & Gullberg, submitted). Children overextend leggen to all placement events and underextend use of zetten. This study examines what gestures can reveal about Dutch three- and five-year-olds’ semantic representations of such verbs. The results show that children gesture differently from adults in this domain. Three-year-olds express only the path of the caused motion, whereas five-year-olds, like adults, also incorporate the located object. Crucially, gesture patterns are tied to verb use: those children who over-use leggen ‘lay’ for all placement events only gesture about path. Conversely, children who use the two verbs differentially for horizontal and vertical placement also incorporate objects in gestures like adults. We argue that children's gestures reflect their current knowledge of verb semantics, and indicate a developmental transition from a system with a single semantic component – (caused) movement – to an (adult-like) focus on two semantic components – (caused) movement-and-object. -
Guo, Y., Martin, R. C., Hamilton, C., Van Dyke, J., & Tan, Y. (2010). Neural basis of semantic and syntactic interference resolution in sentence comprehension. Procedia - Social and Behavioral Sciences, 6, 88-89. doi:10.1016/j.sbspro.2010.08.045.
-
Guzmán Chacón, E., Ovando-Tellez, M., Thiebaut de Schotten, M., & Forkel, S. J. (2024). Embracing digital innovation in neuroscience: 2023 in review at NEUROCCINO. Brain Structure & Function, 229, 251-255. doi:10.1007/s00429-024-02768-6.
-
Hagoort, P., & Özyürek, A. (2024). Extending the architecture of language from a multimodal perspective. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12728.
Abstract
Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation. -
Hamans, C., & Seuren, P. A. M. (2010). Chomsky in search of a pedigree. In D. A. Kibbee (Ed.), Chomskyan (R)evolutions (pp. 377-394). Amsterdam/Philadelphia: Benjamins.
Abstract
This paper follows the changing fortunes of Chomsky’s search for a pedigree in the history of Western thought during the late 1960s. Having achieved a unique position of supremacy in the theory of syntax and having exploited that position far beyond the narrow circles of professional syntacticians, he felt the need to shore up his theory with the authority of history. It is shown that this attempt, resulting mainly in his Cartesian Linguistics of 1966, was widely, and rightly, judged to be a radical failure, even though it led to a sudden revival of interest in the history of linguistics. Ironically, the very upswing in historical studies caused by Cartesian Linguistics ended up showing that the real pedigree belongs to Generative Semantics, developed by the same ‘angry young men’ Chomsky was so bent on destroying. -
Hammarström, H. (2010). A full-scale test of the language farming dispersal hypothesis. Diachronica, 27(2), 197-213. doi:10.1075/dia.27.2.02ham.
Abstract
One attempt at explaining why some language families are large (while others are small) is the hypothesis that the families that are now large became large because their ancestral speakers had a technological advantage, most often agriculture. Variants of this idea are referred to as the Language Farming Dispersal Hypothesis. Previously, detailed language family studies have uncovered various supporting examples and counterexamples to this idea. In the present paper I weigh the evidence from ALL attested language families. For each family, I use the number of member languages as a measure of cardinal size, member language coordinates to measure geospatial size and ethnographic evidence to assess subsistence status. This data shows that, although agricultural families tend to be larger in cardinal size, their size is hardly due to the simple presence of farming. If farming were responsible for language family expansions, we would expect a greater east-west geospatial spread of large families than is actually observed. The data, however, is compatible with weaker versions of the farming dispersal hypothesis as well with models where large families acquire farming because of their size, rather than the other way around. -
Hammarström, H. (2010). Rarities in numeral systems. In J. Wohlgemuth, & M. Cysouw (Eds.), Rethinking universals. How rarities affect linguistic theory (pp. 11-60). Berlin: De Gruyter. -
Hammarström, H. (2010). The status of the least documented language families in the world. Language Documentation and Conservation, 4, 177-212. Retrieved from http://hdl.handle.net/10125/4478.
Abstract
This paper aims to list all known language families that are not yet extinct and all of whose member languages are very poorly documented, i.e., less than a sketch grammar’s worth of data has been collected. It explains what constitutes a valid family, what amount and kinds of documentary data are sufficient, when a language is considered extinct, and more. It is hoped that the survey will be useful in setting priorities for documentation fieldwork, in particular for those documentation efforts whose underlying goal is to understand linguistic diversity. -
Hanique, I., Schuppler, B., & Ernestus, M. (2010). Morphological and predictability effects on schwa reduction: The case of Dutch word-initial syllables. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 933-936).
Abstract
This corpus-based study shows that the presence and duration of schwa in Dutch word-initial syllables are affected by a word’s predictability and its morphological structure. Schwa is less reduced in words that are more predictable given the following word. In addition, schwa may be longer if the syllable forms a prefix, and in prefixes the duration of schwa is positively correlated with the frequency of the word relative to its stem. Our results suggest that the conditions which favor reduced realizations are more complex than one would expect on the basis of the current literature. -
Hanulikova, A., & Hamann, S. (2010). Illustrations of Slovak IPA. Journal of the International Phonetic Association, 40(3), 373-378. doi:10.1017/S0025100310000162.
Abstract
Slovak (sometimes also called Slovakian) is an Indo-European language belonging to the West-Slavic branch, and is most closely related to Czech. Slovak is spoken as a native language by 4.6 million speakers in Slovakia (that is by roughly 85% of the population), and by over two million Slovaks living abroad, most of them in the USA, the Czech Republic, Hungary, Canada and Great Britain (Office for Slovaks Living Abroad 2009). -
Hanulikova, A., & Weber, A. (2010). Production of English interdental fricatives by Dutch, German, and English speakers. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznań, Poland, 1-3 May 2010 (pp. 173-178). Poznan: Adam Mickiewicz University.
Abstract
Non-native (L2) speakers of English often experience difficulties in producing English interdental fricatives (e.g. the voiceless [θ]), and this leads to frequent substitutions of these fricatives (e.g. with [t], [s], and [f]). Differences in the choice of [θ]-substitutions across L2 speakers with different native (L1) language backgrounds have been extensively explored. However, even within one foreign accent, more than one substitution choice occurs, but this has been less systematically studied. Furthermore, little is known about whether the substitutions of voiceless [θ] are phonetically clear instances of [t], [s], and [f], as they are often labelled. In this study, we attempted a phonetic approach to examine language-specific preferences for [θ]-substitutions by carrying out acoustic measurements of L1 and L2 realizations of these sounds. To this end, we collected a corpus of spoken English with L1 speakers (UK-English), and Dutch and German L2 speakers. We show a) that the distribution of differential substitutions using identical materials differs between Dutch and German L2 speakers, b) that [t,s,f]-substitutes differ acoustically from intended [t,s,f], and c) that L2 productions of [θ] are acoustically comparable to L1 productions. -
Hanulikova, A., McQueen, J. M., & Mitterer, H. (2010). Possible words and fixed stress in the segmentation of Slovak speech. Quarterly Journal of Experimental Psychology, 63, 555-579. doi:10.1080/17470210903038958.
Abstract
The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input—for example, single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka “hand” embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse. -
Hartmann, S., Wacewicz, S., Ravignani, A., Valente, D., Rodrigues, E. D., Asano, R., & Jadoul, Y. (2024). Delineating the field of language evolution research: A quantitative analysis of peer-review patterns at the Joint Conference on Language Evolution (JCoLE 2022). Interaction Studies, 25(1), 100-117. doi:10.1075/is.00024.har.
Abstract
Research on language evolution is an established subject area yet permeated by terminological controversies about which topics should be considered pertinent to the field and which not. By consequence, scholars focusing on language evolution struggle in providing precise demarcations of the discipline, where even the very central notions of evolution and language are elusive. We aimed at providing a data-driven characterisation of language evolution as a field of research by relying on quantitative analysis of data drawn from 697 reviews on 255 submissions from the Joint Conference on Language Evolution 2022 (Kanazawa, Japan). Our results delineate a field characterized by a core of main research topics such as iconicity, sign language, multimodality. Despite being explored within the framework of language evolution research, only very recently these topics became popular in linguistics. As a result, language evolution has the potential to emerge as a forefront of linguistic research, bringing innovation to the study of language. We also see the emergence of more recent topics like rhythm, music, and vocal learning. Furthermore, the community identifies cognitive science, primatology, archaeology, palaeoanthropology, and genetics as key areas, encouraging empirical rather than theoretical work. With new themes, models, and methodologies emerging, our results depict an intrinsically multidisciplinary and evolving research field, likely adapting as language itself. -
Haun, D. B. M., Jordan, F., Vallortigara, G., & Clayton, N. S. (2010). Origins of spatial, temporal and numerical cognition: Insights from comparative psychology [Review article]. Trends in Cognitive Sciences, 14, 552-560. doi:10.1016/j.tics.2010.09.006.
Abstract
Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways. -
Hegemann, L., Corfield, E. C., Askelund, A. D., Allegrini, A. G., Askeland, R. B., Ronald, A., Ask, H., St Pourcain, B., Andreassen, O. A., Hannigan, L. J., & Havdahl, A. (2024). Genetic and phenotypic heterogeneity in early neurodevelopmental traits in the Norwegian Mother, Father and Child Cohort Study. Molecular Autism, 15: 25. doi:10.1186/s13229-024-00599-0.
Abstract
Background
Autism and different neurodevelopmental conditions frequently co-occur, as do their symptoms at sub-diagnostic threshold levels. Overlapping traits and shared genetic liability are potential explanations.
Methods
In the population-based Norwegian Mother, Father, and Child Cohort study (MoBa), we leverage item-level data to explore the phenotypic factor structure and genetic architecture underlying neurodevelopmental traits at age 3 years (N = 41,708–58,630) using maternal reports on 76 items assessing children’s motor and language development, social functioning, communication, attention, activity regulation, and flexibility of behaviors and interests.
Results
We identified 11 latent factors at the phenotypic level. These factors showed associations with diagnoses of autism and other neurodevelopmental conditions. Most shared genetic liabilities with autism, ADHD, and/or schizophrenia. Item-level GWAS revealed trait-specific genetic correlations with autism (items rg range = − 0.27–0.78), ADHD (items rg range = − 0.40–1), and schizophrenia (items rg range = − 0.24–0.34). We find little evidence of common genetic liability across all neurodevelopmental traits but more so for several genetic factors across more specific areas of neurodevelopment, particularly social and communication traits. Some of these factors, such as one capturing prosocial behavior, overlap with factors found in the phenotypic analyses. Other areas, such as motor development, seemed to have more heterogenous etiology, with specific traits showing a less consistent pattern of genetic correlations with each other.
Conclusions
These exploratory findings emphasize the etiological complexity of neurodevelopmental traits at this early age. In particular, diverse associations with neurodevelopmental conditions and genetic heterogeneity could inform follow-up work to identify shared and differentiating factors in the early manifestations of neurodevelopmental traits and their relation to autism and other neurodevelopmental conditions. This in turn could have implications for clinical screening tools and programs. -
Heid, I. M., Henneman, P., Hicks, A., Coassin, S., Winkler, T., Aulchenko, Y. S., Fuchsberger, C., Song, K., Hivert, M.-F., Waterworth, D. M., Timpson, N. J., Richards, J. B., Perry, J. R. B., Tanaka, T., Amin, N., Kollerits, B., Pichler, I., Oostra, B. A., Thorand, B., Frants, R. R., Illig, T., Dupuis, J., Glaser, B., Spector, T., Guralnik, J., Egan, J. M., Florez, J. C., Evans, D. M., Soranzo, N., Bandinelli, S., Carlson, O. D., Frayling, T. M., Burling, K., Smith, G. D., Mooser, V., Ferrucci, L., Meigs, J. B., Vollenweider, P., Dijk, K. W. v., Pramstaller, P., Kronenberg, F., & van Duijn, C. M. (2010). Clear detection of ADIPOQ locus as the major gene for plasma adiponectin: Results of genome-wide association analyses including 4659 European individuals. Atherosclerosis, 208(2), 412-420. doi:10.1016/j.atherosclerosis.2009.11.035.
Abstract
OBJECTIVE: Plasma adiponectin is strongly associated with various components of metabolic syndrome, type 2 diabetes and cardiovascular outcomes. Concentrations are highly heritable and differ between men and women. We therefore aimed to investigate the genetics of plasma adiponectin in men and women. METHODS: We combined genome-wide association scans of three population-based studies including 4659 persons. For the replication stage in 13,795 subjects, we selected the 20 top signals of the combined analysis, as well as the 10 top signals with p-values less than 1.0 x 10(-4) for each of the men- and women-specific analyses. We further selected 73 SNPs that were consistently associated with metabolic syndrome parameters in previous genome-wide association studies to check for their association with plasma adiponectin. RESULTS: The ADIPOQ locus showed genome-wide significant p-values in the combined (p=4.3 x 10(-24)) as well as in both the women- and men-specific analyses (p=8.7 x 10(-17) and p=2.5 x 10(-11), respectively). None of the other 39 top signal SNPs showed evidence for association in the replication analysis. None of the 73 SNPs from metabolic syndrome loci exhibited association with plasma adiponectin (p>0.01). CONCLUSIONS: We demonstrated that ADIPOQ is the only major gene for plasma adiponectin, explaining 6.7% of the phenotypic variance. We further found that neither this gene nor any of the metabolic syndrome loci explained the sex differences observed for plasma adiponectin. Larger studies are needed to identify more moderate genetic determinants of plasma adiponectin.
Additional information: http://www.sciencedirect.com/science/article/pii/S0021915009009927#appd002 -
Heim, F., Scharff, C., Fisher, S. E., Riebel, K., & Ten Cate, C. (2024). Auditory discrimination learning and acoustic cue weighing in female zebra finches with localized FoxP1 knockdowns. Journal of Neurophysiology, 131, 950-963. doi:10.1152/jn.00228.2023.
Abstract
Rare disruptions of the transcription factor FOXP1 are implicated in a human neurodevelopmental disorder characterized by autism and/or intellectual disability with prominent problems in speech and language abilities. Avian orthologues of this transcription factor are evolutionarily conserved and highly expressed in specific regions of songbird brains, including areas associated with vocal production learning and auditory perception. Here, we investigated possible contributions of FoxP1 to song discrimination and auditory perception in juvenile and adult female zebra finches. They received lentiviral knockdowns of FoxP1 in one of two brain areas involved in auditory stimulus processing, HVC (proper name) or CMM (caudomedial mesopallium). Ninety-six females, distributed over different experimental and control groups, were trained to discriminate between two stimulus songs in an operant Go/Nogo paradigm and subsequently tested with an array of stimuli. This made it possible to assess how well they recognized and categorized altered versions of training stimuli and whether localized FoxP1 knockdowns affected the role of different features during discrimination and categorization of song. Although FoxP1 expression was significantly reduced by the knockdowns, neither discrimination of the stimulus songs nor categorization of songs modified in pitch, sequential order of syllables or by reversed playback was affected. Subsequently, we analyzed the full dataset to assess the impact of the different stimulus manipulations for cue weighing in song discrimination. Our findings show that zebra finches rely on multiple parameters for song discrimination, but with relatively more prominent roles for spectral parameters and syllable sequencing as cues for song discrimination.
NEW & NOTEWORTHY In humans, mutations of the transcription factor FoxP1 are implicated in speech and language problems. In songbirds, FoxP1 has been linked to male song learning and female preference strength. We found that FoxP1 knockdowns in female HVC and the caudomedial mesopallium (CMM) did not alter song discrimination or categorization based on spectral and temporal information. However, this large dataset allowed us to validate different cue weights for spectral over temporal information for song recognition. -
Heinemann, T. (2010). The question–response system of Danish. Journal of Pragmatics, 42, 2703-2725. doi:10.1016/j.pragma.2010.04.007.
Abstract
This paper provides an overview of the question–response system of Danish, based on a collection of 350 questions (and responses) collected from video recordings of naturally occurring face-to-face interactions between native speakers of Danish. The paper identifies the lexico-grammatical options for formulating questions, the range of social actions that can be implemented through questions and the relationship between questions and responses. It further describes features where Danish questions differ from a range of other languages in terms of, for instance, distribution and the relationship between question format and social action. For instance, Danish has a high frequency of interrogatively formatted questions and questions that are negatively formulated, when compared to languages that have the same grammatical options. In terms of action, Danish shows a higher number of questions that are used for making suggestions, offers and requests and does not use repetition as a way of answering a question as often as other languages. -
Heritage, J., Elliott, M. N., Stivers, T., Richardson, A., & Mangione-Smith, R. (2010). Reducing inappropriate antibiotics prescribing: The role of online commentary on physical examination findings. Patient Education and Counseling, 81, 119-125. doi:10.1016/j.pec.2009.12.005.
Abstract
Objective: This study investigates the relationship of ‘online commentary’ (contemporaneous physician comments about physical examination [PE] findings) with (i) parent questioning of the treatment recommendation and (ii) inappropriate antibiotic prescribing. Methods: A nested cross-sectional study of 522 encounters motivated by upper respiratory symptoms in 27 California pediatric practices (38 pediatricians). Physicians completed a post-visit survey regarding physical examination findings, diagnosis, treatment, and whether they perceived the parent as expecting an antibiotic. Taped encounters were coded for ‘problem’ online commentary (PE findings discussed as significant or clearly abnormal) and ‘no problem’ online commentary (PE findings discussed reassuringly as normal or insignificant). Results: Online commentary during the PE occurred in 73% of visits with viral diagnoses (n = 261). Compared to similar cases with ‘no problem’ online commentary, ‘problem’ comments were associated with a 13% greater probability of parents questioning a non-antibiotic treatment plan (95% CI 0-26%, p = .05) and a 27% (95% CI: 2-52%, p < .05) greater probability of an inappropriate antibiotic prescription. Conclusion: With viral illnesses, problematic online comments are associated with more pediatrician-parent conflict over non-antibiotic treatment recommendations. This may increase inappropriate antibiotic prescribing. Practice implications: In viral cases, physicians should consider avoiding the use of problematic online commentary. -
Hersh, T. A., Ravignani, A., & Whitehead, H. (2024). Cetaceans are the next frontier for vocal rhythm research. Proceedings of the National Academy of Sciences of the United States of America, 121(25): e2313093121. doi:10.1073/pnas.2313093121.
Abstract
While rhythm can facilitate and enhance many aspects of behavior, its evolutionary trajectory in vocal communication systems remains enigmatic. We can trace evolutionary processes by investigating rhythmic abilities in different species, but research to date has largely focused on songbirds and primates. We present evidence that cetaceans—whales, dolphins, and porpoises—are a missing piece of the puzzle for understanding why rhythm evolved in vocal communication systems. Cetaceans not only produce rhythmic vocalizations but also exhibit behaviors known or thought to play a role in the evolution of different features of rhythm. These behaviors include vocal learning abilities, advanced breathing control, sexually selected vocal displays, prolonged mother–infant bonds, and behavioral synchronization. The untapped comparative potential of cetaceans is further enhanced by high interspecific diversity, which generates natural ranges of vocal and social complexity for investigating various evolutionary hypotheses. We show that rhythm (particularly isochronous rhythm, when sounds are equally spaced in time) is prevalent in cetacean vocalizations but is used in different contexts by baleen and toothed whales. We also highlight key questions and research areas that will enhance understanding of vocal rhythms across taxa. By coupling an infraorder-level taxonomic assessment of vocal rhythm production with comparisons to other species, we illustrate how broadly comparative research can contribute to a more nuanced understanding of the prevalence, evolution, and possible functions of rhythm in animal communication.
Additional information: supporting information -
Hill, C. (2010). Emergency language documentation teams: The Cape York Peninsula experience. In J. Hobson, K. Lowe, S. Poetsch, & M. Walsh (Eds.), Re-awakening languages: Theory and practice in the revitalisation of Australia’s Indigenous languages (pp. 418-432). Sydney: Sydney University Press. -
Hill, C. (2010). [Review of the book Discourse and Grammar in Australian Languages ed. by Ilana Mushin and Brett Baker]. Studies in Language, 34(1), 215-225. doi:10.1075/sl.34.1.12hil.
-
Hintz, F. (2010). Speech and speaker recognition in dyslexic individuals. Bachelor Thesis, Max Planck Institute for Human Cognitive and Brain Sciences (Leipzig)/University of Leipzig.
-
Hintz, F., McQueen, J. M., & Meyer, A. S. (2024). Using psychometric network analysis to examine the components of spoken word recognition. Journal of Cognition, 7(1): 10. doi:10.5334/joc.340.
Abstract
Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.
Additional information: network analysis of dataset A and B -
Hintz, F., & Meyer, A. S. (Eds.). (2024). Individual differences in language skills [Special Issue]. Journal of Cognition, 7(1). -
Hintz, F., Voeten, C. C., Dobó, D., Lukics, K. S., & Lukács, Á. (2024). The role of general cognitive skills in integrating visual and linguistic information during sentence comprehension: Individual differences across the lifespan. Scientific Reports, 14: 17797. doi:10.1038/s41598-024-68674-3.
Abstract
Individuals exhibit massive variability in general cognitive skills that affect language processing. This variability is partly developmental. Here, we recruited a large sample of participants (N = 487), ranging from 9 to 90 years of age, and examined the involvement of nonverbal processing speed (assessed using visual and auditory reaction time tasks) and working memory (assessed using forward and backward Digit Span tasks) in a visual world task. Participants saw two objects on the screen and heard a sentence that referred to one of them. In half of the sentences, the target object could be predicted based on verb-selectional restrictions. We observed evidence for anticipatory processing on predictable compared to non-predictable trials. Visual and auditory processing speed had main effects on sentence comprehension and facilitated predictive processing, as evidenced by an interaction. We observed only weak evidence for the involvement of working memory in predictive sentence comprehension. Age had a nonlinear main effect (younger adults responded faster than children and older adults), but it did not differentially modulate predictive and non-predictive processing, nor did it modulate the involvement of processing speed and working memory. Our results contribute to delineating the cognitive skills that are involved in language-vision interactions.
Additional information: supplementary information -
Hintz, F., Shkaravska, O., Dijkhuis, M., Van 't Hoff, V., Huijsmans, M., Van Dongen, R. C., Voeteé, L. A., Trilsbeek, P., McQueen, J. M., & Meyer, A. S. (2024). IDLaS-NL – A platform for running customized studies on individual differences in Dutch language skills via the internet. Behavior Research Methods, 56(3), 2422-2436. doi:10.3758/s13428-023-02156-8.
Abstract
We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions and to determine their order. Moreover, for standardized administration the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV-file output via email. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl. -
Holler, J. (2010). Speakers’ use of interactive gestures to mark common ground. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction. 8th International Gesture Workshop, Bielefeld, Germany, 2009; Selected Revised Papers (pp. 11-22). Heidelberg: Springer Verlag. -
Hope, T. M. H., Neville, D., Talozzi, L., Foulon, C., Forkel, S. J., Thiebaut de Schotten, M., & Price, C. J. (2024). Testing the disconnectome symptom discoverer model on out-of-sample post-stroke language outcomes. Brain, 147(2), e11-e13. doi:10.1093/brain/awad352.
Abstract
Stroke is common, and its consequent brain damage can cause various cognitive impairments. Associations between where and how much brain lesion damage a patient has suffered, and the particular impairments that injury has caused (lesion-symptom associations) offer potentially compelling insights into how the brain implements cognition.1 A better understanding of those associations can also fill a gap in current stroke medicine by helping us to predict how individual patients might recover from post-stroke impairments.2 Most recent work in this area employs machine learning models trained with data from stroke patients whose mid-to-long-term outcomes are known.2-4 These machine learning models are tested by predicting new outcomes—typically scores on standardized tests of post-stroke impairment—for patients whose data were not used to train the model. Traditionally, these validation results have been shared in peer-reviewed publications describing the model and its training. But recently, and for the first time in this field (as far as we know), one of these pre-trained models has been made public—the Disconnectome Symptom Discoverer (DSD) model, which draws its predictors from structural disconnection information inferred from stroke patients’ brain MRI.5
Here, we test the DSD model on wholly independent data, never seen by the model authors, before they published it. Specifically, we test whether its predictive performance is just as accurate as (i.e. not significantly worse than) that reported in the original (Washington University) dataset, when predicting new patients’ outcomes at a similar time post-stroke (∼1 year post-stroke) and also in another independent sample tested later (5+ years) post-stroke. A failure to generalize the DSD model occurs if it performs significantly better in the Washington data than in our data from patients tested at a similar time point (∼1 year post-stroke). In addition, a significant decrease in predictive performance for the more chronic sample would be evidence that lesion-symptom associations differ at ∼1 year post-stroke and >5 years post-stroke. -
Horton, S., Jackson, V., Boyce, J., Franken, M.-C., Siemers, S., St John, M., Hearps, S., Van Reyk, O., Braden, R., Parker, R., Vogel, A. P., Eising, E., Amor, D. J., Irvine, J., Fisher, S. E., Martin, N. G., Reilly, S., Bahlo, M., Scheffer, I., & Morgan, A. (2024). Self-reported stuttering severity is accurate: Informing methods for large-scale data collection in stuttering. Journal of Speech, Language, and Hearing Research, 67, 4015-4024. doi:10.1044/2023_JSLHR-23-00081.
Abstract
Purpose:
To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity compared to clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.
Method:
Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5–84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.
Results:
There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.
Conclusions:
Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-reported stuttering severity is a suitable method for large-scale data collection. Findings also support the collection of self-reported subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone. -
Howarth, H., Sommer, V., & Jordan, F. (2010). Visual depictions of female genitalia differ depending on source. Medical Humanities, 36, 75-79. doi:10.1136/jmh.2009.003707.
Abstract
Very little research has attempted to describe normal human variation in female genitalia, and no studies have compared the visual images that women might use in constructing their ideas of average and acceptable genital morphology to see if there are any systematic differences. Our objective was to determine if visual depictions of the vulva differed according to their source so as to alert medical professionals and their patients to how these depictions might capture variation and thus influence perceptions of "normality". We conducted a comparative analysis by measuring (a) published visual materials from human anatomy textbooks in a university library, (b) feminist publications (both print and online) depicting vulval morphology, and (c) online pornography, focusing on the most visited and freely accessible sites in the UK. Post-hoc tests showed that labial protuberance was significantly less (p < .001, equivalent to approximately 7 mm) in images from online pornography compared to feminist publications. All five measures taken of vulval features were significantly correlated (p < .001) in the online pornography sample, indicating a less varied range of differences in organ proportions than the other sources where not all measures were correlated. Women and health professionals should be aware that specific sources of imagery may depict different types of genital morphology and may not accurately reflect true variation in the population, and consultations for genital surgeries should include discussion about the actual and perceived range of variation in female genital morphology. -
Hoymann, G. (2010). Questions and responses in ǂĀkhoe Hai||om. Journal of Pragmatics, 42(10), 2726-2740. doi:10.1016/j.pragma.2010.04.008.
Abstract
This paper examines ǂĀkhoe Hai||om, a Khoe language of the Khoisan family spoken in Northern Namibia. I document the way questions are posed in natural conversation, the actions the questions are used for and the manner in which they are responded to. I show that in this language speakers rely most heavily on content questions. I also find that speakers of ǂĀkhoe Hai||om address fewer questions to a specific individual than would be expected from prior research on Indo-European languages. Finally, I discuss some possible explanations for these findings. -
De Hoyos, L., Barendse, M. T., Schlag, F., Van Donkelaar, M. M. J., Verhoef, E., Shapland, C. Y., Klassmann, A., Buitelaar, J., Verhulst, B., Fisher, S. E., Rai, D., & St Pourcain, B. (2024). Structural models of genome-wide covariance identify multiple common dimensions in autism. Nature Communications, 15: 1770. doi:10.1038/s41467-024-46128-8.
Abstract
Common genetic variation has been associated with multiple symptoms in Autism Spectrum Disorder (ASD). However, our knowledge of shared genetic factor structures contributing to this highly heterogeneous neurodevelopmental condition is limited. Here, we developed a structural equation modelling framework to directly model genome-wide covariance across core and non-core ASD phenotypes, studying autistic individuals of European descent using a case-only design. We identified three independent genetic factors most strongly linked to language/cognition, behaviour and motor development, respectively, when studying a population-representative sample (N=5,331). These analyses revealed novel associations. For example, developmental delay in acquiring personal-social skills was inversely related to language, while developmental motor delay was linked to self-injurious behaviour. We largely confirmed the three-factorial structure in independent ASD-simplex families (N=1,946), but uncovered simplex-specific genetic overlap between behaviour and language phenotypes. Thus, the common genetic architecture in ASD is multi-dimensional and contributes, in combination with ascertainment-specific patterns, to phenotypic heterogeneity. -
Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2010). Do language-specific categories shape conceptual processing? Mandarin classifier distinctions influence eye gaze behavior, but only during linguistic processing. Journal of Cognition and Culture, 10(1/2), 39-58. doi:10.1163/156853710X497167.
Abstract
In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers - a grammatical category in the language - on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category - even when the classifier is not explicitly present (cf. Huettig & Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifier-match objects (e.g., chair) than to distractor objects. But when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing. -
Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 25(3), 347-374. doi:10.1080/01690960903046926.
Abstract
Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioral consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception. -
Huettig, F., & Hulstijn, J. (2024). The Enhanced Literate Mind Hypothesis. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12731.
Abstract
In the present paper we describe the Enhanced Literate Mind (ELM) hypothesis. As individuals learn to read and write, they are, from then on, exposed to extensive written-language input and become literate. We propose that the acquisition and proficient processing of written language (‘literacy’) leads to both increased language knowledge and enhanced language and non-language (perceptual and cognitive) skills. We also suggest that all neurotypical native language users, including illiterate, low literate, and high literate individuals, share a Basic Language Cognition (BLC) in the domain of oral informal language. Finally, we discuss the possibility that the acquisition of ELM leads to some degree of ‘knowledge parallelism’ between BLC and ELM in literate language users, which has implications for empirical research on individual and situational differences in spoken language processing. -
Huettig, F., & Christiansen, M. H. (2024). Can large language models counter the recent decline in literacy levels? An important role for cognitive science. Cognitive Science, 48(8): e13487. doi:10.1111/cogs.13487.
Abstract
Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by challenging and informing current educational practices and policy. Cognitive scientists have the right interdisciplinary skills to study, analyze, evaluate, and change LLMs to facilitate their critical use, to encourage turn-taking that promotes rather than hinders literacy, to support literacy acquisition in diverse and equitable ways, and to scaffold potential future changes in what it means to be literate. We urge cognitive scientists to take up this mantle—the future impact of LLMs on human literacy skills is too important to be left to the large, predominately US-based tech companies. -
Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.
Abstract
The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, albeit they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning. -
Hulten, A. (2010). Sanan tuottaminen [Word production]. In Kieli ja aivot [Language and the Brain - Textbook series] (pp. 106-116).
-
Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.
Abstract
The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009. -
Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
-
Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 1-4). Malden, MA: Wiley-Blackwell. -
Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.
Abstract
The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis; P = 8.2 x 10(-5)-1.7 x 10(-3), common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes, as well as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.
Additional information
http://hmg.oxfordjournals.org/content/19/7/1379/suppl/DC1 -
Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.
Abstract
The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles. -
Jadoul, Y., De Boer, B., & Ravignani, A. (2024). Parselmouth for bioacoustics: Automated acoustic analysis in Python. Bioacoustics, 33(1), 1-19. doi:10.1080/09524622.2023.2259327.
Abstract
Bioacoustics increasingly relies on large datasets and computational methods. The need to batch-process large amounts of data and the increased focus on algorithmic processing require software tools. To optimally assist in a bioacoustician’s workflow, software tools need to be as simple and effective as possible. Five years ago, the Python package Parselmouth was released to provide easy and intuitive access to all functionality in the Praat software. Whereas Praat is principally designed for phonetics and speech processing, plenty of bioacoustics studies have used its advanced acoustic algorithms. Here, we evaluate existing usage of Parselmouth and discuss in detail several studies which used the software library. We argue that Parselmouth has the potential to be used even more in bioacoustics research, and suggest future directions to be pursued with the help of Parselmouth. -
Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.
Abstract
Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion. -
Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.
Abstract
Background: There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word. Aims: The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate. Methods & Procedures: Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group. Outcomes & Results: Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance. Conclusions: It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate. -
Jansen, M. G., Zwiers, M. P., Marques, J. P., Chan, K.-S., Amelink, J., Altgassen, M., Oosterman, J. M., & Norris, D. G. (2024). The Advanced BRain Imaging on ageing and Memory (ABRIM) data collection: Study protocol and rationale. PLOS ONE, 19(6): e0306006. doi:10.1371/journal.pone.0306006.
Abstract
To understand the neurocognitive mechanisms that underlie heterogeneity in cognitive ageing, recent scientific efforts have led to a growing public availability of imaging cohort data. The Advanced BRain Imaging on ageing and Memory (ABRIM) project aims to add to these existing datasets by taking an adult lifespan approach to provide a cross-sectional, normative database with a particular focus on connectivity, myelinization and iron content of the brain in concurrence with cognitive functioning, mechanisms of reserve, and sleep-wake rhythms. ABRIM freely shares MRI and behavioural data from 295 participants between 18–80 years, stratified by age decade and sex (median age 52, IQR 36–66, 53.20% females). The ABRIM MRI collection consists of both the raw and pre-processed structural and functional MRI data to facilitate data usage among both expert and non-expert users. The ABRIM behavioural collection includes measures of cognitive functioning (i.e., global cognition, processing speed, executive functions, and memory), proxy measures of cognitive reserve (e.g., educational attainment, verbal intelligence, and occupational complexity), and various self-reported questionnaires (e.g., on depressive symptoms, pain, and the use of memory strategies in daily life and during a memory task). In a sub-sample (n = 120), we recorded sleep-wake rhythms using an actigraphy device (Actiwatch 2, Philips Respironics) for a period of 7 consecutive days. Here, we provide an in-depth description of our study protocol, pre-processing pipelines, and data availability. ABRIM provides a cross-sectional database on healthy participants throughout the adult lifespan, including numerous parameters relevant to improve our understanding of cognitive ageing. Therefore, ABRIM enables researchers to model the advanced imaging parameters and cognitive topologies as a function of age, identify the normal range of values of such parameters, and to further investigate the diverse mechanisms of reserve and resilience. -
Jara-Ettinger, J., & Rubio-Fernandez, P. (2024). Demonstratives as attention tools: Evidence of mentalistic representations in language. Proceedings of the National Academy of Sciences of the United States of America, 121(32): e2402068121. doi:10.1073/pnas.2402068121.
Abstract
Linguistic communication is an intrinsically social activity that enables us to share thoughts across minds. Many complex social uses of language can be captured by domain-general representations of other minds (i.e., mentalistic representations) that externally modulate linguistic meaning through Gricean reasoning. However, here we show that representations of others’ attention are embedded within language itself. Across ten languages, we show that demonstratives—basic grammatical words (e.g., “this”/“that”) which are evolutionarily ancient, learned early in life, and documented in all known languages—are intrinsic attention tools. Beyond their spatial meanings, demonstratives encode both joint attention and the direction in which the listener must turn to establish it. Crucially, the frequency of the spatial and attentional uses of demonstratives varies across languages, suggesting that both spatial and mentalistic representations are part of their conventional meaning. Using computational modeling, we show that mentalistic representations of others’ attention are internally encoded in demonstratives, with their effect further boosted by Gricean reasoning. Yet, speakers are largely unaware of this, incorrectly reporting that they primarily capture spatial representations. Our findings show that representations of other people’s cognitive states (namely, their attention) are embedded in language and suggest that the most basic building blocks of the linguistic system crucially rely on social cognition.
Additional information
pnas.2402068121.sapp.pdf -
Järvikivi, J., & Pyykkönen, P. (2010). Lauseiden ymmärtäminen [Engl. Sentence comprehension]. In P. Korpilahti, O. Aaltonen, & M. Laine (Eds.), Kieli ja aivot: Kommunikaation perusteet, häiriöt ja kuntoutus (pp. 117-125). Turku: Turku yliopisto.
Abstract
When we listen to speech or read text, we immediately begin to construct a coherent interpretation. Unlike in reading, in speech perception the listener can rarely control the rate at which they are spoken to. Despite the very rapid input (about 4-7 syllables per second), people are able to interpret speech quite effortlessly. Research on sentence comprehension therefore investigates how this rapid and usually effortless interpretation process takes place, which cognitive processes participate in real-time interpretation, and what kind of information people draw on at each stage of processing to form a coherent interpretation. This chapter is an overview of sentence comprehension processes and their study. We briefly discuss processing models, the relationship between adult and child language, the interpretation of referential relations within and between sentences, and the role of the sensory environment and motor action in the sentence interpretation process. -
Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. PLoS ONE, 5(9): e12603. doi:10.1371/journal.pone.0012603.
Abstract
Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time. -
Jasmin, K., & Casasanto, D. (2010). Stereotyping: How the QWERTY keyboard shapes the mental lexicon [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (p. 159). York: University of York.
-
Jesse, A., Reinisch, E., & Nygaard, L. C. (2010). Learning of adjectival word meaning through tone of voice [Abstract]. Journal of the Acoustical Society of America, 128, 2475.
Abstract
Speakers express word meaning through systematic but non-canonical acoustic variation of tone of voice (ToV), i.e., variation of speaking rate, pitch, vocal effort, or loudness. Words are, for example, pronounced at a higher pitch when referring to small than to big referents. In the present study, we examined whether listeners can use ToV to learn the meaning of novel adjectives (e.g., “blicket”). During training, participants heard sentences such as “Can you find the blicket one?” spoken with ToV representing hot-cold, strong-weak, and big-small. Participants’ eye movements to two simultaneously shown objects with properties representing the relevant two endpoints (e.g., an elephant and an ant for big-small) were monitored. Assignment of novel adjectives to endpoints was counterbalanced across participants. During test, participants heard the sentences spoken with a neutral ToV, while seeing old or novel picture pairs varying along the same dimensions (e.g., a truck and a car for big-small). Participants had to click on the adjective’s referent. As evident from eye movements, participants did not infer the intended meaning during first exposure, but learned the meaning with the help of ToV during training. At test listeners applied this knowledge to old and novel items even in the absence of informative ToV. -
Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.
Abstract
When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was further robust across participants, phrases, and repetition of the test materials. Our results provide the first evidence that lyrics recognition just like speech and music perception is a multimodal process. -
Jesse, A., & Massaro, D. W. (2010). The temporal distribution of information in audiovisual spoken-word identification. Attention, Perception & Psychophysics, 72(1), 209-225. doi:10.3758/APP.72.1.209.
Abstract
In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant-vowel-consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early during the phoneme, whereas auditory information was still being accumulated. An audiovisual benefit was therefore already found early during the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented. More features benefited at short gates rather than at longer ones. Visual speech information therefore plays a more important role early during the phoneme than later. The results of the study show the complex interplay of information across modalities and time that is essential in determining the time course of audiovisual spoken-word recognition. -
Johnson, E. K., & Tyler, M. (2010). Testing the limits of statistical learning for word segmentation. Developmental Science, 13, 339-345. doi:10.1111/j.1467-7687.2009.00886.x.
Abstract
Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested. -
Jordan, F., & Dunn, M. (2010). Kin term diversity is the result of multilevel, historical processes [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 388. doi:10.1017/S0140525X10001962.
Abstract
Explanations in the domain of kinship can be sought on several different levels: Jones addresses online processing, as well as issues of origins and innateness. We argue that his framework can more usefully be applied at the levels of developmental and historical change, the latter especially. A phylogenetic approach to the diversity of kinship terminologies is most urgently required. -
Joshi, A., Mohanty, R., Kanakanti, M., Mangla, A., Choudhary, S., Barbate, M., & Modi, A. (2024). iSign: A benchmark for Indian Sign Language processing. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Findings of the Association for Computational Linguistics ACL 2024 (pp. 10827-10844). Bangkok, Thailand: Association for Computational Linguistics.
Abstract
Indian Sign Language has limited resources for developing machine learning and data-driven approaches for automated language processing. Though text/audio-based language processing techniques have shown colossal research interest and tremendous improvements in the last few years, Sign Languages still need to catch up due to the need for more resources. To bridge this gap, in this work, we propose iSign: a benchmark for Indian Sign Language (ISL) Processing. We make three primary contributions to this work. First, we release one of the largest ISL-English datasets with more than video-sentence/phrase pairs. To the best of our knowledge, it is the largest sign language dataset available for ISL. Second, we propose multiple NLP-specific tasks (including SignVideo2Text, SignPose2Text, Text2Pose, Word Prediction, and Sign Semantics) and benchmark them with the baseline models for easier access to the research community. Third, we provide detailed insights into the proposed benchmarks with a few linguistic insights into the working of ISL. We streamline the evaluation of Sign Language processing, addressing the gaps in the NLP research community for Sign Languages. We release the dataset, tasks and models via the following website: https://exploration-lab.github.io/iSign/
Additional information
dataset, tasks, models -
Josserand, M., Pellegrino, F., Grosseck, O., Dediu, D., & Raviv, L. (2024). Adapting to individual differences: An experimental study of variation in language evolution. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 286-289). Nijmegen: The Evolution of Language Conferences. -
Josserand, M., Pellegrino, F., Grosseck, O., Dediu, D., & Raviv, L. (2024). Adapting to individual differences: An experimental study of language evolution in heterogeneous populations. Cognitive Science: a multidisciplinary journal, 48(11): e70011. doi:10.1111/cogs.70011.
Abstract
Variations in language abilities, use, and production style are ubiquitous within any given population. While research on language evolution has traditionally overlooked the potential importance of such individual differences, these can have an important impact on the trajectory of language evolution and ongoing change. To address this gap, we use a group communication game for studying this mechanism in the lab, in which micro-societies of interacting participants develop and use artificial languages to successfully communicate with each other. Importantly, one participant in the group is assigned a keyboard with a limited inventory of letters (simulating a speech impairment that individuals may encounter in real life), forcing them to communicate differently than the rest. We test how languages evolve in such heterogeneous groups and whether they adapt to accommodate the unique characteristics of individuals with language idiosyncrasies. Our results suggest that language evolves differently in groups where some individuals have distinct language abilities, eliciting more innovative elements at the cost of reduced communicative success and convergence. Furthermore, we observed strong partner-specific accommodation to the minority individual, which carried over to the group level. Importantly, the degree of group-wide adaptation was not uniform and depended on participants’ attachment to established language forms. Our findings provide compelling evidence that individual differences can permeate and accumulate within a linguistic community, ultimately driving changes in languages over time. They also underscore the importance of integrating individual differences into future research on language evolution.
Additional information
full analyses and plots -
Junge, C., Hagoort, P., Kooijman, V., & Cutler, A. (2010). Brain potentials for word segmentation at seven months predict later language development. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Annual Boston University Conference on Language Development. Volume 1 (pp. 209-220). Somerville, MA: Cascadilla Press. -
Junge, C., Cutler, A., & Hagoort, P. (2010). Ability to segment words from speech as a precursor of later language development: Insights from electrophysiological responses in the infant brain. In M. Burgess, J. Davey, C. Don, & T. McMinn (Eds.), Proceedings of 20th International Congress on Acoustics, ICA 2010. Incorporating Proceedings of the 2010 annual conference of the Australian Acoustical Society (pp. 3727-3732). Australian Acoustical Society, NSW Division. -
Kakimoto, N., Wongratwanich, P., Shimamoto, H., Kitisubkanchana, J., Tsujimoto, T., Shimabukuro, K., Verdonschot, R. G., Hasegawa, Y., & Murakami, S. (2024). Comparison of T2 values of the displaced unilateral disc and retrodiscal tissue of temporomandibular joints and their implications. Scientific Reports, 14: 1705. doi:10.1038/s41598-024-52092-6.
Abstract
Unilateral anterior disc displacement (uADD) has been shown to affect the contralateral joints qualitatively. This study aims to assess the quantitative T2 values of the articular disc and retrodiscal tissue of patients with uADD at 1.5 Tesla (T). The study included 65 uADD patients and 17 volunteers. The regions of interest on T2 maps were evaluated. The affected joints demonstrated significantly higher articular disc T2 values (31.5 ± 3.8 ms) than those of the unaffected joints (28.9 ± 4.5 ms) (P < 0.001). For retrodiscal tissue, T2 values of the unaffected (37.8 ± 5.8 ms) and affected joints (41.6 ± 7.1 ms) were significantly longer than those of normal volunteers (34.4 ± 3.2 ms) (P < 0.001). Furthermore, uADD without reduction (WOR) joints (43.3 ± 6.8 ms) showed statistically higher T2 values than the unaffected joints of both uADD with reduction (WR) (33.9 ± 3.8 ms) and uADDWOR (38.9 ± 5.8 ms), and the affected joints of uADDWR (35.8 ± 4.4 ms). The mean T2 value of the unaffected joints of uADDWOR was significantly longer than that of healthy volunteers (P < 0.001). These results provided quantitative evidence for the influence of the affected joints on the contralateral joints. -
Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2024). Morphosyntactic predictive processing in adult heritage speakers: Effects of cue availability and spoken and written language experience. Language, Cognition and Neuroscience, 39(1), 118-135. doi:10.1080/23273798.2023.2254424.
Abstract
We investigated prediction skills of adult heritage speakers and the role of written and spoken language experience on predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available) while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing. -
Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2024). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language, 51(4), 876-902. doi:10.1017/S0305000922000642.
Abstract
Expressing Left-Right relations is challenging for speaking-children. Yet, this challenge was absent for signing-children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking-children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish-Sign-Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech-only. This pattern was more prominent for children than adults. However, signing-adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking- and signing-children benefit from iconic expressions in visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development. -
Karadöller, D. Z., Peeters, D., Manhardt, F., Özyürek, A., & Ortega, G. (2024). Iconicity and gesture jointly facilitate learning of second language signs at first exposure in hearing non-signers. Language Learning, 74(4), 781-813. doi:10.1111/lang.12636.
Abstract
When learning a spoken second language (L2), words overlapping in form and meaning with one’s native language (L1) help break into the new language. When non-signing speakers learn a sign language as L2, such forms are absent because of the modality differences (L1:speech, L2:sign). In such cases, non-signing speakers might use iconic form-meaning mappings in signs or their own gestural experience as gateways into the to-be-acquired sign language. Here, we investigated how both these factors may contribute jointly to the acquisition of sign language vocabulary by hearing non-signers. Participants were presented with three types of sign in NGT (Sign Language of the Netherlands): arbitrary signs, iconic signs with high or low gesture overlap. Signs that were both iconic and highly overlapping with gestures boosted learning most at first exposure, and this effect remained the day after. Findings highlight the influence of modality-specific factors supporting the acquisition of a signed lexicon. -
Karadöller*, D. Z., Sümer*, B., & Özyürek, A. (2024). First-language acquisition in a multimodal language framework: Insights from speech, gesture, and sign. First Language. Advance online publication. doi:10.1177/01427237241290678.
Abstract
*=shared first authorship
Children across the world acquire their first language(s) naturally, regardless of typology or modality (e.g. sign or spoken). Various attempts have been made to explain the puzzle of language acquisition using several approaches, trying to understand to what extent it can be explained by what children bring to language-learning situations as well as what they learn from the input and the interactive context. However, most of these approaches consider only speech development, thus ignoring the inherently multimodal nature of human language. As a multimodal view of language is becoming more widely adopted for the study of adult language, a multimodal approach to language acquisition is inevitable. Not only do children have the capacity to learn spoken and sign language equally easily, but spoken language acquisition consists of learning to coordinate linguistic expressions in both modalities, that is, in both speech and gesture. To provide a step forward in this direction, this article aims to synthesize findings from research studies that take a multimodal perspective on language acquisition in different sign and spoken languages, including the development of speech and accompanying gestures. Our review shows that while some aspects of language acquisition seem to be modality-independent, others might differ according to the affordances of each modality when used separately as well as together (either in sign, speech, and/or gesture). We argue that these findings need to be integrated into our understanding of language acquisition. We also identify which areas need future research for both spoken and sign language acquisition, taking into account not only multimodal but also cross-linguistic variation. -
Karsan, Ç., Ocak, F., & Bulut, T. (2024). Characterization of speech and language phenotype in the 8p23.1 syndrome. European Child & Adolescent Psychiatry, 33, 3671-3678. doi:10.1007/s00787-024-02448-0.
Abstract
The 8p23.1 duplication syndrome is a rare genetic condition with an estimated prevalence rate of 1 out of 58,000. Although the syndrome was associated with speech and language delays, a comprehensive assessment of speech and language functions has not been undertaken in this population. To address this issue, the present study reports rigorous speech and language, in addition to oral-facial and developmental, assessment of a 50-month-old Turkish-speaking boy who was diagnosed with the 8p23.1 duplication syndrome. Standardized tests of development, articulation and phonology, receptive and expressive language and a language sample analysis were administered to characterize speech and language skills in the patient. The language sample was obtained in an ecologically valid, free play and conversation context. The language sample was then analyzed and compared to a database of age-matched typically-developing children (n = 33) in terms of intelligibility, morphosyntax, semantics/vocabulary, discourse, verbal facility and percentage of errors at word and utterance levels. The results revealed mild to severe problems in articulation and phonology, receptive and expressive language skills, and morphosyntax (mean length of utterance in morphemes). Future research with larger sample sizes and employing detailed speech and language assessment is needed to delineate the speech and language profile in individuals with the 8p23.1 duplication syndrome, which will guide targeted speech and language interventions.