Publications

  • Montero-Melis, G., Van Paridon, J., Ostarek, M., & Bylund, E. (2022). No evidence for embodiment: The motor system is not needed to keep action words in working memory. Cortex, 150, 108-125. doi:10.1016/j.cortex.2022.02.006.

    Abstract

    Increasing evidence implicates the sensorimotor systems in high-level cognition, but the extent to which these systems play a functional role remains debated. Using an elegant design, Shebani and Pulvermüller (2013) reported that carrying out a demanding rhythmic task with the hands led to selective impairment of working memory for hand-related words (e.g., clap), while carrying out the same task with the feet led to selective memory impairment for foot-related words (e.g., kick). Such a striking double dissociation is acknowledged even by critics to constitute strong evidence for an embodied account of working memory. Here, we report on an attempt at a direct replication of this important finding. We followed a sequential sampling design and stopped data collection at N=77 (more than five times the original sample size), at which point the evidence for the lack of the critical selective interference effect was very strong (BF01 = 91). This finding constitutes strong evidence against a functional contribution of the motor system to keeping action words in working memory. Our finding fits into the larger emerging picture in the field of embodied cognition that sensorimotor simulations are neither required nor automatic in high-level cognitive processes, but that they may play a role depending on the task. Importantly, we urge researchers to engage in transparent, high-powered, and fully pre-registered experiments like the present one to ensure the field advances on a solid basis.
  • Mooijman, S., Schoonen, R., Roelofs, A., & Ruiter, M. B. (2022). Executive control in bilingual aphasia: A systematic review. Bilingualism: Language and Cognition, 25(1), 13-28. doi:10.1017/S136672892100047X.

    Abstract

    Much research has been dedicated to the effects of bilingualism on executive control (EC). For bilinguals with aphasia, the interplay with EC is complex. In this systematic review, we synthesize research on this topic and provide an overview of the current state of the field. First, we examine the evidence for EC deficits in bilingual persons with aphasia (bPWA). We then discuss the domain generality of bilingual language control impairments. Finally, we evaluate the bilingual advantage hypothesis in bPWA. We conclude that (1) EC impairments in bPWA are frequently observed, (2) experimental results on the relationship between linguistic and domain-general control are mixed, (3) bPWA with language control problems in everyday communication have domain-general EC problems, and (4) there are indications for EC advantages in bPWA. We end with directions for experimental work that could provide better insight into the intricate relationship between EC and bilingual aphasia.
  • Mooijman, S., Bos, L. S., De Witte, E., Vincent, A., Visch-Brink, E., & Satoer, D. (2022). Language processing in glioma patients: speed or accuracy as a sensitive measure? Aphasiology, 36(12), 1467-1491. doi:10.1080/02687038.2021.1970099.

    Abstract

    Background

    Glioma (brain tumour) patients can suffer from mild linguistic and non-linguistic cognitive problems when the glioma is localised in an eloquent brain area. Word-finding problems are among the most frequently reported complaints. However, mild problems are difficult to measure with standard language tests because they are generally designed for more severe aphasic patients.

    Aims

    The aim of the present study was to investigate whether word-finding problems reported by patients with a glioma can be objectified with a standard object naming test and a linguistic processing speed test. In addition, we examined whether word-finding problems and linguistic processing speed are related to non-verbal cognitive abilities.

    Methods & Procedures

    We tested glioma patients (N=36) as part of their standard pre-treatment clinical work-up. Word-finding problems were identified by a clinical linguist during the anamnesis. Linguistic processing speed was assessed with a newly designed sentence judgment test (SJT) as part of the Diagnostic Instrument for Mild Aphasia (DIMA), lexical retrieval with the Boston Naming Test (BNT), presence of aphasia with a Token Test (TT), and non-verbal processing with the Trail Making Test A and B (TMT). Test performances of glioma patients were compared to those of healthy control participants (N=35).

    Outcomes & Results

    The results show that many glioma patients (58%) report word-finding problems; these complaints were supported by deviant scores on the BNT in only half of the cases. Moreover, the presence of reported word-finding problems did not correlate with the BNT scores. However, word-finding problems were significantly correlated with reaction times on the SJT and the TMT. Although there were no significant differences between the patient and control group on the SJT, a subgroup of patients with a glioma in the frontal lobe of the language-dominant hemisphere was slower on the SJT. Finally, performance on the SJT and TMT were significantly correlated in the patient group but not in the control group.

    Conclusions

    Linguistic processing speed appears to be an important factor in explaining reported word-finding problems. Moreover, the overlap between speed of language processing and non-verbal processing indicates that patients may rely on more domain-general cognitive abilities as compared to healthy participants. The variability observed between patients emphasises the need for tailored neuro-linguistic assessments including an extensive anamnesis regarding language problems in clinical work-up.
  • Morey, R. D., Kaschak, M. P., Díez-Álamo, A. M., Glenberg, A. M., Zwaan, R. A., Lakens, D., Ibáñez, A., García, A., Gianelli, C., Jones, J. L., Madden, J., Alifano, F., Bergen, B., Bloxsom, N. G., Bub, D. N., Cai, Z. G., Chartier, C. R., Chatterjee, A., Conwell, E., Cook, S. W., Davis, J. D., Evers, E., Girard, S., Harter, D., Hartung, F., Herrera, E., Huettig, F., Humphries, S., Juanchich, M., Kühne, K., Lu, S., Lynes, T., Masson, M. E. J., Ostarek, M., Pessers, S., Reglin, R., Steegen, S., Thiessen, E. D., Thomas, L. E., Trott, S., Vandekerckhove, J., Vanpaemel, W., Vlachou, M., Williams, K., & Ziv-Crispel, N. (2022). A pre-registered, multi-lab non-replication of the Action-sentence Compatibility Effect (ACE). Psychonomic Bulletin & Review, 29, 613-626. doi:10.3758/s13423-021-01927-8.

    Abstract

    The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., “Art gave you the pen” describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.
  • Morgan, J. L., Van Elswijk, G., & Meyer, A. S. (2008). Extrafoveal processing of objects in a naming task: Evidence from word probe experiments. Psychonomic Bulletin & Review, 15, 561-565. doi:10.3758/PBR.15.3.561.

    Abstract

    In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2008). Speech planning during multiple-object naming: Effects of ageing. Quarterly Journal of Experimental Psychology, 61, 1217-1238. doi:10.1080/17470210701467912.

    Abstract

    Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.
  • Muntendam, A., Van Rijswijk, R., Severijnen, G. G. A., & Dijkstra, T. (2022). The role of stress position in bilingual auditory word recognition: Cognate processing in Turkish and Dutch. Bilingualism: Language and Cognition, 25, 679-690. doi:10.1017/S1366728922000037.

    Abstract

    We examined the effect of word stress position on bilingual auditory cognate processing. Turkish–Dutch early bilinguals who are dominant in their L2 (Dutch) performed an auditory lexical decision task in Turkish or Dutch. While Dutch has variable word stress, with a tendency for penultimate stress, stress in Turkish is mostly predictable and usually falls on the ultimate syllable. Our tasks included two-syllable cognates with penultimate stress in both languages, ultimate stress in both languages, or ultimate stress in Turkish and penultimate stress in Dutch. Some cognate facilitation effects arose in Dutch, while inhibition was found in Turkish. Cognates with ultimate stress were processed faster than cognates with penultimate stress, in both languages. This shows that in Turkish–Dutch early bilinguals, cognate processing depends on Turkish stress position, although Dutch is dominant. Together, the findings support the view that cognates have separate, though linked representations.

    Additional information

    tables S1-S20
  • Murphy, E., Woolnough, O., Rollo, P. S., Roccaforte, Z., Segaert, K., Hagoort, P., & Tandon, N. (2022). Minimal phrase composition revealed by intracranial recordings. The Journal of Neuroscience, 42(15), 3216-3227. doi:10.1523/JNEUROSCI.1575-21.2022.

    Abstract

    The ability to comprehend phrases is an essential integrative property of the brain. Here we evaluate the neural processes that enable the transition from single word processing to a minimal compositional scheme. Previous research has reported conflicting timing effects of composition, and disagreement persists with respect to inferior frontal and posterior temporal contributions. To address these issues, 19 patients (10 male, 19 female) implanted with penetrating depth or surface subdural intracranial electrodes heard auditory recordings of adjective-noun, pseudoword-noun and adjective-pseudoword phrases and judged whether the phrase matched a picture. Stimulus-dependent alterations in broadband gamma activity, low frequency power and phase-locking values across the language-dominant left hemisphere were derived. This revealed a mosaic located on the lower bank of the posterior superior temporal sulcus (pSTS), in which closely neighboring cortical sites displayed exclusive sensitivity to either lexicality or phrase structure, but not both. Distinct timings were found for effects of phrase composition (210–300 ms) and pseudoword processing (approximately 300–700 ms), and these were localized to neighboring electrodes in pSTS. The pars triangularis and temporal pole encoded anticipation of composition in broadband low frequencies, and both regions exhibited greater functional connectivity with pSTS during phrase composition. Our results suggest that the pSTS is a highly specialized region comprised of sparsely interwoven heterogeneous constituents that encodes both lower and higher level linguistic features. This hub in pSTS for minimal phrase processing may form the neural basis for the human-specific computational capacity for forming hierarchically organized linguistic structures.
  • Narasimhan, B., & Dimroth, C. (2008). Word order and information status in child language. Cognition, 107, 317-329. doi:10.1016/j.cognition.2007.07.010.

    Abstract

    In expressing rich, multi-dimensional thought in language, speakers are influenced by a range of factors that affect the ordering of utterance constituents. A fundamental principle that guides constituent ordering in adults has to do with information status, the accessibility of referents in discourse. Typically, adults order previously mentioned referents (“old” or accessible information) first, before they introduce referents that have not yet been mentioned in the discourse (“new” or inaccessible information), at both sentential and phrasal levels. Here we ask whether a similar principle influences ordering patterns at the phrasal level in children who are in the early stages of combining words productively. Prior research shows that when conveying semantic relations, children reproduce language-specific ordering patterns in the input, suggesting that they do not have a bias for any particular order to describe “who did what to whom”. But our findings show that when they label “old” versus “new” referents, 3- to 5-year-old children prefer an ordering pattern opposite to that of adults (Study 1). Children’s ordering preference is not derived from input patterns, as “old-before-new” is also the preferred order in caregivers’ speech directed to young children (Study 2). Our findings demonstrate that a key principle governing ordering preferences in adults does not originate in early childhood, but develops: from new-to-old to old-to-new.
  • Nayak, S., Coleman, P. L., Ladányi, E., Nitin, R., Gustavson, D. E., Fisher, S. E., Magne, C. L., & Gordon, R. L. (2022). The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework for understanding musicality-language links across the lifespan. Neurobiology of Language, 3(4), 615-664. doi:10.1162/nol_a_00079.

    Abstract

    Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has often been overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
  • Need, A. C., Attix, D. K., McEvoy, J. M., Cirulli, E. T., Linney, K. N., Wagoner, A. P., Gumbs, C. E., Giegling, I., Möller, H.-J., Francks, C., Muglia, P., Roses, A., Gibson, G., Weale, M. E., Rujescu, D., & Goldstein, D. B. (2008). Failure to replicate effect of Kibra on human memory in two large cohorts of European origin. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147B, 667-668. doi:10.1002/ajmg.b.30658.

    Abstract

    It was recently suggested that the Kibra polymorphism rs17070145 has a strong effect on multiple episodic memory tasks in humans. We attempted to replicate this using two cohorts of European genetic origin (n = 319 and n = 365). We found no association with either the original SNP or a set of tagging SNPs in the Kibra gene with multiple verbal memory tasks, including one that was an exact replication (Auditory Verbal Learning Task, AVLT). These results suggest that Kibra does not have a strong and general effect on human memory.

    Additional information

    SupplementaryMethodsIAmJMedGen.doc
  • Neumann, A., Nolte, I. M., Pappa, I., Ahluwalia, T. S., Pettersson, E., Rodriguez, A., Whitehouse, A., Van Beijsterveldt, C. E. M., Benyamin, B., Hammerschlag, A. R., Helmer, Q., Karhunen, V., Krapohl, E., Lu, Y., Van der Most, P. J., Palviainen, T., St Pourcain, B., Seppälä, I., Suarez, A., Vilor-Tejedor, N., Tiesler, C. M. T., Wang, C., Wills, A., Zhou, A., Alemany, S., Bisgaard, H., Bønnelykke, K., Davies, G. E., Hakulinen, C., Henders, A. K., Hyppönen, E., Stokholm, J., Bartels, M., Hottenga, J.-J., Heinrich, J., Hewitt, J., Keltikangas-Järvinen, L., Korhonen, T., Kaprio, J., Lahti, J., Lahti-Pulkkinen, M., Lehtimäki, T., Middeldorp, C. M., Najman, J. M., Pennell, C., Power, C., Oldehinkel, A. J., Plomin, R., Räikkönen, K., Raitakari, O. T., Rimfeld, K., Sass, L., Snieder, H., Standl, M., Sunyer, J., Williams, G. M., Bakermans-Kranenburg, M. J., Boomsma, D. I., Van IJzendoorn, M. H., Hartman, C. A., & Tiemeier, H. (2022). A genome-wide association study of total child psychiatric problems scores. PLOS ONE, 17(8): e0273116. doi:10.1371/journal.pone.0273116.

    Abstract

    Substantial genetic correlations have been reported across psychiatric disorders and numerous cross-disorder genetic variants have been detected. To identify the genetic variants underlying general psychopathology in childhood, we performed a genome-wide association study using a total psychiatric problem score. We analyzed 6,844,199 common SNPs in 38,418 school-aged children from 20 population-based cohorts participating in the EAGLE consortium. The SNP heritability of total psychiatric problems was 5.4% (SE = 0.01) and two loci reached genome-wide significance: rs10767094 and rs202005905. We also observed an association of SBF2, a gene associated with neuroticism in previous GWAS, with total psychiatric problems. The genetic effects underlying the total score were shared with common psychiatric disorders only (attention-deficit/hyperactivity disorder, anxiety, depression, insomnia) (rG > 0.49), but not with autism or the less common adult disorders (schizophrenia, bipolar disorder, or eating disorders) (rG < 0.01). Importantly, the total psychiatric problem score also showed at least a moderate genetic correlation with intelligence, educational attainment, wellbeing, smoking, and body fat (rG > 0.29). The results suggest that many common genetic variants are associated with childhood psychiatric symptoms and related phenotypes in general instead of with specific symptoms. Further research is needed to establish causality and pleiotropic mechanisms between related traits.

    Additional information

    Full summary results
  • Niarchou, M., Gustavson, D. E., Sathirapongsasuti, J. F., Anglada-Tort, M., Eising, E., Bell, E., McArthur, E., Straub, P., The 23andMe Research Team, McAuley, J. D., Capra, J. A., Ullén, F., Creanza, N., Mosing, M. A., Hinds, D., Davis, L. K., Jacoby, N., & Gordon, R. L. (2022). Genome-wide association study of musical beat synchronization demonstrates high polygenicity. Nature Human Behaviour, 6(9), 1292-1309. doi:10.1038/s41562-022-01359-x.

    Abstract

    Moving in synchrony to the beat is a fundamental component of musicality. Here we conducted a genome-wide association study to identify common genetic variants associated with beat synchronization in 606,825 individuals. Beat synchronization exhibited a highly polygenic architecture, with 69 loci reaching genome-wide significance (P < 5 × 10−8) and single-nucleotide-polymorphism-based heritability (on the liability scale) of 13%–16%. Heritability was enriched for genes expressed in brain tissues and for fetal and adult brain-specific gene regulatory elements, underscoring the role of central-nervous-system-expressed genes linked to the genetic basis of the trait. We performed validations of the self-report phenotype (through separate experiments) and of the genome-wide association study (polygenic scores for beat synchronization were associated with patients algorithmically classified as musicians in medical records of a separate biobank). Genetic correlations with breathing function, motor function, processing speed and chronotype suggest shared genetic architecture with beat synchronization and provide avenues for new phenotypic and genetic explorations.

    Additional information

    supplementary information
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The neurocognition of referential ambiguity in language comprehension. Language and Linguistics Compass, 2(4), 603-630. doi:10.1111/j.1749-818x.2008.00070.x.

    Abstract

    Referential ambiguity arises whenever readers or listeners are unable to select a unique referent for a linguistic expression out of multiple candidates. In the current article, we review a series of neurocognitive experiments from our laboratory that examine the neural correlates of referential ambiguity, and that employ the brain signature of referential ambiguity to derive functional properties of the language comprehension system. The results of our experiments converge to show that referential ambiguity resolution involves making an inference to evaluate the referential candidates. These inferences only take place when both referential candidates are, at least initially, equally plausible antecedents. Whether comprehenders make these anaphoric inferences is strongly context dependent and co-determined by characteristics of the reader. In addition, readers appear to disregard referential ambiguity when the competing candidates are each semantically incoherent, suggesting that, under certain circumstances, semantic analysis can proceed even when referential analysis has not yielded a unique antecedent. Finally, results from a functional neuroimaging study suggest that whereas the neural systems that deal with referential ambiguity partially overlap with those that deal with referential failure, they show an inverse coupling with the neural systems associated with semantic processing, possibly reflecting the relative contributions of semantic and episodic processing to re-establish semantic and referential coherence, respectively.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The interplay between semantic and referential aspects of anaphoric noun phrase resolution: Evidence from ERPs. Brain & Language, 106, 119-131. doi:10.1016/j.bandl.2008.05.001.

    Abstract

    In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.
  • Nieuwland, M. S., & Kuperberg, G. R. (2008). When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation. Psychological Science, 19(12), 1213-1218. doi:10.1111/j.1467-9280.2008.02226.x.

    Abstract

    Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like not. However, studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle effects of truth value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than did true words in pragmatically licensed negated sentences (e.g., “In moderation, drinking red wine isn't bad/good…”), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., “A baby bunny's fur isn't very hard/soft…”). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2022). The use of exemplars differs between native and non-native listening. Bilingualism: Language and Cognition, 25(5), 841-855. doi:10.1017/S1366728922000116.

    Abstract

    This study compares the role of exemplars in native and non-native listening. Two English identity priming experiments were conducted with native English, Dutch non-native, and Spanish non-native listeners. In Experiment 1, primes and targets were spoken in the same or a different voice. Only the native listeners showed exemplar effects. In Experiment 2, primes and targets had the same or a different degree of vowel reduction. The Dutch, but not the Spanish, listeners were familiar with this reduction pattern from their L1 phonology. In this experiment, exemplar effects only arose for the Spanish listeners. We propose that in these lexical decision experiments the use of exemplars is co-determined by listeners’ available processing resources, which is modulated by the familiarity with the variation type from their L1 phonology. The use of exemplars differs between native and non-native listening, suggesting qualitative differences between native and non-native speech comprehension processes.
  • Nobe, S., Furuyama, N., Someya, Y., Sekine, K., Suzuki, M., & Hayashi, K. (2008). A longitudinal study on gesture of simultaneous interpreter. The Japanese Journal of Speech Sciences, 8, 63-83.
  • Nordlinger, R., Garrido Rodriguez, G., & Kidd, E. (2022). Sentence planning and production in Murrinhpatha, an Australian 'free word order' language. Language, 98(2), 187-220. Retrieved from https://muse.jhu.edu/article/857152.

    Abstract

    Psycholinguistic theories are based on a very small set of unrepresentative languages, so it is as yet unclear how typological variation shapes mechanisms supporting language use. In this article we report the first on-line experimental study of sentence production in an Australian free word order language: Murrinhpatha. Forty-six adult native speakers of Murrinhpatha described a series of unrelated transitive scenes that were manipulated for humanness (±human) in the agent and patient roles while their eye movements were recorded. Speakers produced a large range of word orders, consistent with the language having flexible word order, with variation significantly influenced by agent and patient humanness. An analysis of eye movements showed that Murrinhpatha speakers' first fixation on an event character did not alone determine word order; rather, early in speech planning participants rapidly encoded both event characters and their relationship to each other. That is, they engaged in relational encoding, laying down a very early conceptual foundation for the word order they eventually produced. These results support a weakly hierarchical account of sentence production and show that speakers of a free word order language encode the relationships between event participants during earlier stages of sentence planning than is typically observed for languages with fixed word orders.
  • Norris, D., & McQueen, J. M. (2008). Shortlist B: A Bayesian model of continuous speech recognition. Psychological Review, 115(2), 357-395. doi:10.1037/0033-295X.115.2.357.

    Abstract

    A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
  • Obleser, J., Eisner, F., & Kotz, S. A. (2008). Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28(32), 8116-8124. doi:10.1523/JNEUROSCI.1290-08.2008.

    Abstract

    Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
  • Ohlerth, A.-K., Bastiaanse, R., Nickels, L., Neu, B., Zhang, W., Ille, S., Sollmann, N., & Krieg, S. M. (2022). Dual-task nTMS mapping to visualize the cortico-subcortical language network and capture postoperative outcome—A patient series in neurosurgery. Frontiers in Oncology, 11: 788122. doi:10.3389/fonc.2021.788122.

    Abstract

    Background: Perioperative assessment of language function in brain tumor patients commonly relies on administration of object naming during stimulation mapping. Ample research, however, points to the benefit of adding verb tasks to the testing paradigm in order to delineate and preserve postoperative language function more comprehensively. This research uses a case series approach to explore the feasibility and added value of a dual-task protocol that includes both a noun task (object naming) and a verb task (action naming) in perioperative delineation of language functions.

    Materials and Methods: Seven neurosurgical cases underwent perioperative language assessment with both object and action naming. This entailed preoperative baseline testing, preoperative stimulation mapping with navigated Transcranial Magnetic Stimulation (nTMS) with subsequent white matter visualization, intraoperative mapping with Direct Electrical Stimulation (DES) in 4 cases, and postoperative imaging and examination of language change.

    Results: We observed a divergent pattern of language organization and decline between cases who showed lesions close to the delineated language network and hence underwent DES mapping, and those that did not. The latter displayed no new impairment postoperatively consistent with an unharmed network for the neural circuits of both object and action naming. For the cases who underwent DES, on the other hand, a higher sensitivity was found for action naming over object naming. Firstly, action naming preferentially predicted the overall language state compared to aphasia batteries. Secondly, it more accurately predicted intraoperative positive language areas as revealed by DES. Thirdly, double dissociations between postoperatively unimpaired object naming and impaired action naming and vice versa indicate segregated skills and neural representation for noun versus verb processing, especially in the ventral stream. Overlaying postoperative imaging with object and action naming networks revealed that dual-task nTMS mapping can explain the drop in performance in those cases where the network appeared in proximity to the resection cavity.

    Conclusion: Using a dual-task protocol for visualization of cortical and subcortical language areas through nTMS mapping proved to be able to capture network-to-deficit relations in our case series. Ultimately, adding action naming to clinical nTMS and DES mapping may help prevent postoperative deficits of this seemingly segregated skill.

    Additional information

    table 1 and table 2
  • Okbay, A., Wu, Y., Wang, N., Jayashankar, H., Bennett, M., Nehzati, S. M., Sidorenko, J., Kweon, H., Goldman, G., Gjorgjieva, T., Jiang, Y., Hicks, B., Tian, C., Hinds, D. A., Ahlskog, R., Magnusson, P. K. E., Oskarsson, S., Hayward, C., Campbell, A., Porteous, D. J., Freese, J., Herd, P., 23andMe Research Team, Social Science Genetic Association Consortium, Watson, C., Jala, J., Conley, D., Koellinger, P. D., Johannesson, M., Laibson, D., Meyer, M. N., Lee, J. J., Kong, A., Yengo, L., Cesarini, D., Turley, P., Visscher, P. M., Beauchamp, J. P., Benjamin, D. J., & Young, A. I. (2022). Polygenic prediction of educational attainment within and between families from genome-wide association analyses in 3 million individuals. Nature Genetics, 54, 437-449. doi:10.1038/s41588-022-01016-z.

    Abstract

    We conduct a genome-wide association study (GWAS) of educational attainment (EA) in a sample of ~3 million individuals and identify 3,952 approximately uncorrelated genome-wide-significant single-nucleotide polymorphisms (SNPs). A genome-wide polygenic predictor, or polygenic index (PGI), explains 12–16% of EA variance and contributes to risk prediction for ten diseases. Direct effects (i.e., controlling for parental PGIs) explain roughly half the PGI’s magnitude of association with EA and other phenotypes. The correlation between mate-pair PGIs is far too large to be consistent with phenotypic assortment alone, implying additional assortment on PGI-associated factors. In an additional GWAS of dominance deviations from the additive model, we identify no genome-wide-significant SNPs, and a separate X-chromosome additive GWAS identifies 57.

    Additional information

    supplementary information
  • O’Neill, A. C., Uzbas, F., Antognolli, G., Merino, F., Draganova, K., Jäck, A., Zhang, S., Pedini, G., Schessner, J. P., Cramer, K., Schepers, A., Metzger, F., Esgleas, M., Smialowski, P., Guerrini, R., Falk, S., Feederle, R., Freytag, S., Wang, Z., Bahlo, M., Jungmann, R., Bagni, C., Borner, G. H. H., Robertson, S. P., Hauck, S. M., & Götz, M. (2022). Spatial centrosome proteome of human neural cells uncovers disease-relevant heterogeneity. Science, 376(6599): eabf9088. doi:10.1126/science.abf9088.

    Abstract

    The centrosome provides an intracellular anchor for the cytoskeleton, regulating cell division, cell migration, and cilia formation. We used spatial proteomics to elucidate protein interaction networks at the centrosome of human induced pluripotent stem cell–derived neural stem cells (NSCs) and neurons. Centrosome-associated proteins were largely cell type–specific, with protein hubs involved in RNA dynamics. Analysis of neurodevelopmental disease cohorts identified a significant overrepresentation of NSC centrosome proteins with variants in patients with periventricular heterotopia (PH). Expressing the PH-associated mutant pre-mRNA-processing factor 6 (PRPF6) reproduced the periventricular misplacement in the developing mouse brain, highlighting missplicing of transcripts of a microtubule-associated kinase with centrosomal location as essential for the phenotype. Collectively, cell type–specific centrosome interactomes explain how genetic variants in ubiquitous proteins may convey brain-specific phenotypes.
  • Onnis, L., Lim, A., Cheung, S., & Huettig, F. (2022). Is the mind inherently predicting? Exploring forward and backward looking in language processing. Cognitive Science, 46(10): e13201. doi:10.1111/cogs.13201.

    Abstract

    Prediction is one characteristic of the human mind. But what does it mean to say the mind is a ‘prediction machine’ and inherently forward looking, as is frequently claimed? In natural languages, many contexts are not easily predictable in a forward fashion. In English, for example, many frequent verbs do not carry unique meaning on their own, but instead rely on another word or words that follow them to become meaningful. Upon reading ‘take a’, the processor often cannot easily predict ‘walk’ as the next word. But the system can ‘look back’ and integrate ‘walk’ more easily when it follows ‘take a’ (e.g., as opposed to ‘make|get|have a walk’). In the present paper we provide further evidence for the importance of both forward and backward looking in language processing. In two self-paced reading tasks and an eye-tracking reading task, we found evidence that adult English native speakers’ sensitivity to forward and backward word conditional probability significantly explained variance in reading times over and above psycholinguistic predictors of reading latencies. We conclude that both forward and backward looking (prediction and integration) appear to be important characteristics of language processing. Our results thus suggest that it makes just as much sense to call the mind an ‘integration machine’, which is inherently backward looking.

    Additional information

    Open Data and Open Materials
  • Oswald, J. N., Van Cise, A. M., Dassow, A., Elliott, T., Johnson, M. T., Ravignani, A., & Podos, J. (2022). A collection of best practices for the collection and analysis of bioacoustic data. Applied Sciences, 12(23): 12046. doi:10.3390/app122312046.

    Abstract

    The field of bioacoustics is rapidly developing and characterized by diverse methodologies, approaches and aims. For instance, bioacoustics encompasses studies on the perception of pure tones in meticulously controlled laboratory settings, documentation of species’ presence and activities using recordings from the field, and analyses of circadian calling patterns in animal choruses. Newcomers to the field are confronted with a vast and fragmented literature, and a lack of accessible reference papers or textbooks. In this paper we contribute towards filling this gap. Instead of a classical list of “dos” and “don’ts”, we review some key papers which, we believe, embody best practices in several bioacoustic subfields. In the first three case studies, we discuss how bioacoustics can help identify the ‘who’, ‘where’ and ‘how many’ of animals within a given ecosystem. Specifically, we review cases in which bioacoustic methods have been applied with success to draw inferences regarding species identification, population structure, and biodiversity. In the fourth and fifth case studies, we highlight how structural properties in signal evolution can emerge via ecological constraints or cultural transmission. Finally, in a sixth example, we discuss acoustic methods that have been used to infer predator–prey dynamics in cases where direct observation was not feasible. Across all these examples, we emphasize the importance of appropriate recording parameters and experimental design. We conclude by highlighting common best practices across studies as well as caveats about our own overview. We hope our efforts spur a more general effort in standardizing best practices across the subareas we’ve highlighted in order to increase compatibility among bioacoustic studies and inspire cross-pollination across the discipline.
  • Otten, M., & Van Berkum, J. J. A. (2008). Discourse-based word anticipation during language processing: Prediction or priming? Discourse Processes, 45, 464-496. doi:10.1080/01638530802356463.

    Abstract

    Language is an intrinsically open-ended system. This fact has led to the widely shared assumption that readers and listeners do not predict upcoming words, at least not in a way that goes beyond simple priming between words. Recent evidence, however, suggests that readers and listeners do anticipate upcoming words “on the fly” as a text unfolds. In two event-related potential experiments, this study examined whether these predictions are based on the exact message conveyed by the prior discourse or on simpler word-based priming mechanisms. Participants read texts that strongly supported the prediction of a specific word, mixed with non-predictive control texts that contained the same prime words. In Experiment 1A, anomalous words that replaced a highly predictable (as opposed to a non-predictable but coherent) word elicited a long-lasting positive shift, suggesting that the prior discourse had indeed led people to predict specific words. In Experiment 1B, adjectives whose suffix mismatched the predictable noun's syntactic gender elicited a short-lived late negativity in predictive stories but not in prime control stories. Taken together, these findings reveal that the conceptual basis for predicting specific upcoming words during reading is the exact message conveyed by the discourse and not the mere presence of prime words.
  • Owoyele, B., Trujillo, J. P., De Melo, G., & Pouw, W. (2022). Masked-Piper: Masking personal identities in visual recordings while preserving multimodal information. SoftwareX, 20: 101236. doi:10.1016/j.softx.2022.101236.

    Abstract

    In this increasingly data-rich world, visual recordings of human behavior are often unable to be shared due to concerns about privacy. Consequently, data sharing in fields such as behavioral science, multimodal communication, and human movement research is often limited. In addition, in legal and other non-scientific contexts, privacy-related concerns may preclude the sharing of video recordings and thus remove the rich multimodal context that humans recruit to communicate. Minimizing the risk of identity exposure while preserving critical behavioral information would maximize the utility of public resources (e.g., research grants) and time invested in audio–visual research. Here we present an open-source computer vision tool that masks the identities of humans while maintaining rich information about communicative body movements. Furthermore, this masking tool can be easily applied to many videos, leveraging computational tools to augment the reproducibility and accessibility of behavioral research. The tool is designed for researchers and practitioners engaged in kinematic and affective research. Application areas include teaching/education, communication and human movement research, CCTV, and legal contexts.

    Additional information

    setup and usage
  • Ozker, M., Doyle, W., Devinsky, O., & Flinker, A. (2022). A cortical network processes auditory error signals during human speech production to maintain fluency. PLoS Biology, 20: e3001493. doi:10.1371/journal.pbio.3001493.

    Abstract

    Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update necessary motor commands to produce intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), which is well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not been implicated in auditory feedback processing before, exhibited a markedly similar response enhancement, suggesting a tight coupling between the 2 regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

    Additional information

    data and code
  • Ozturk, O., & Papafragou, A. (2008). Acquisition of evidentiality and source monitoring. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 368-377). Somerville, Mass.: Cascadilla Press.
  • Ozyurek, A., Kita, S., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2008). Development of cross-linguistic variation in speech and gesture: motion events in English and Turkish. Developmental Psychology, 44(4), 1040-1054. doi:10.1037/0012-1649.44.4.1040.

    Abstract

    The way adults express manner and path components of a motion event varies across typologically different languages both in speech and cospeech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children’s gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of cospeech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.
  • Park, B.-y., Larivière, S., Rodríguez-Cruces, R., Royer, J., Tavakol, S., Wang, Y., Caciagli, L., Caligiuri, M. E., Gambardella, A., Concha, L., Keller, S. S., Cendes, F., Alvim, M. K. M., Yasuda, C., Bonilha, L., Gleichgerrcht, E., Focke, N. K., Kreilkamp, B. A. K., Domin, M., Von Podewils, F., Langner, S., Rummel, C., Rebsamen, M., Wiest, R., Martin, P., Kotikalapudi, R., Bender, B., O’Brien, T. J., Law, M., Sinclair, B., Vivash, L., Desmond, P. M., Malpas, C. B., Lui, E., Alhusaini, S., Doherty, C. P., Cavalleri, G. L., Delanty, N., Kälviäinen, R., Jackson, G. D., Kowalczyk, M., Mascalchi, M., Semmelroch, M., Thomas, R. H., Soltanian-Zadeh, H., Davoodi-Bojd, E., Zhang, J., Lenge, M., Guerrini, R., Bartolini, E., Hamandi, K., Foley, S., Weber, B., Depondt, C., Absil, J., Carr, S. J. A., Abela, E., Richardson, M. P., Devinsky, O., Severino, M., Striano, P., Parodi, C., Tortora, D., Hatton, S. N., Vos, S. B., Duncan, J. S., Galovic, M., Whelan, C. D., Bargalló, N., Pariente, J., Conde, E., Vaudano, A. E., Tondelli, M., Meletti, S., Kong, X., Francks, C., Fisher, S. E., Caldairou, B., Ryten, M., Labate, A., Sisodiya, S. M., Thompson, P. M., McDonald, C. R., Bernasconi, A., Bernasconi, N., & Bernhardt, B. C. (2022). Topographic divergence of atypical cortical asymmetry and atrophy patterns in temporal lobe epilepsy. Brain, 145(4), 1285-1298. doi:10.1093/brain/awab417.

    Abstract

    Temporal lobe epilepsy (TLE), a common drug-resistant epilepsy in adults, is primarily a limbic network disorder associated with predominant unilateral hippocampal pathology. Structural MRI has provided an in vivo window into whole-brain grey matter structural alterations in TLE relative to controls, by either mapping (i) atypical inter-hemispheric asymmetry or (ii) regional atrophy. However, similarities and differences of both atypical asymmetry and regional atrophy measures have not been systematically investigated.

    Here, we addressed this gap using the multi-site ENIGMA-Epilepsy dataset comprising MRI brain morphological measures in 732 TLE patients and 1,418 healthy controls. We compared spatial distributions of grey matter asymmetry and atrophy in TLE, contextualized their topographies relative to spatial gradients in cortical microstructure and functional connectivity calculated using 207 healthy controls obtained from Human Connectome Project and an independent dataset containing 23 TLE patients and 53 healthy controls, and examined clinical associations using machine learning.

    We identified a marked divergence in the spatial distribution of atypical inter-hemispheric asymmetry and regional atrophy mapping. The former revealed a temporo-limbic disease signature while the latter showed diffuse and bilateral patterns. Our findings were robust across individual sites and patients. Cortical atrophy was significantly correlated with disease duration and age at seizure onset, while degrees of asymmetry did not show a significant relationship to these clinical variables.

    Our findings highlight that the mapping of atypical inter-hemispheric asymmetry and regional atrophy tap into two complementary aspects of TLE-related pathology, with the former revealing primary substrates in ipsilateral limbic circuits and the latter capturing bilateral disease effects. These findings refine our notion of the neuropathology of TLE and may inform future discovery and validation of complementary MRI biomarkers in TLE.

    Additional information

    awab417_supplementary_data.pdf
  • Patel, A. D., Iversen, J. R., Wassenaar, M., & Hagoort, P. (2008). Musical syntactic processing in agrammatic Broca's aphasia. Aphasiology, 22(7/8), 776-789. doi:10.1080/02687030701803804.

    Abstract

    Background: Growing evidence for overlap in the syntactic processing of language and music in non-brain-damaged individuals leads to the question of whether aphasic individuals with grammatical comprehension problems in language also have problems processing structural relations in music.

    Aims: The current study sought to test musical syntactic processing in individuals with Broca's aphasia and grammatical comprehension deficits, using both explicit and implicit tasks.

    Methods & Procedures: Two experiments were conducted. In the first experiment 12 individuals with Broca's aphasia (and 14 matched controls) were tested for their sensitivity to grammatical and semantic relations in sentences, and for their sensitivity to musical syntactic (harmonic) relations in chord sequences. An explicit task (acceptability judgement of novel sequences) was used. The second experiment, with 9 individuals with Broca's aphasia (and 12 matched controls), probed musical syntactic processing using an implicit task (harmonic priming).

    Outcomes & Results: In both experiments the aphasic group showed impaired processing of musical syntactic relations. Control experiments indicated that this could not be attributed to low-level problems with the perception of pitch patterns or with auditory short-term memory for tones.

    Conclusions: The results suggest that musical syntactic processing in agrammatic aphasia deserves systematic investigation, and that such studies could help probe the nature of the processing deficits underlying linguistic agrammatism. Methodological suggestions are offered for future work in this little-explored area.
  • Pearson, L., & Pouw, W. (2022). Gesture–vocal coupling in Karnatak music performance: A neuro–bodily distributed aesthetic entanglement. Annals of the New York Academy of Sciences, 1515(1), 219-236. doi:10.1111/nyas.14806.

    Abstract

    In many musical styles, vocalists manually gesture while they sing. Coupling between gesture kinematics and vocalization has been examined in speech contexts, but it is an open question how these couple in music making. We examine this in a corpus of South Indian, Karnatak vocal music that includes motion-capture data. Through peak magnitude analysis (linear mixed regression) and continuous time-series analyses (generalized additive modeling), we assessed whether vocal trajectories around peaks in vertical velocity, speed, or acceleration were coupling with changes in vocal acoustics (namely, F0 and amplitude). Kinematic coupling was stronger for F0 change versus amplitude, pointing to F0's musical significance. Acceleration was the most predictive for F0 change and had the most reliable magnitude coupling, showing a one-third power relation. That acceleration, rather than other kinematics, is maximally predictive for vocalization is interesting because acceleration entails force transfers onto the body. As a theoretical contribution, we argue that gesturing in musical contexts should be understood in relation to the physical connections between gesturing and vocal production that are brought into harmony with the vocalists’ (enculturated) performance goals. Gesture–vocal coupling should, therefore, be viewed as a neuro–bodily distributed aesthetic entanglement.

    Additional information

    tables
  • Pereira Soares, S. M., Kupisch, T., & Rothman, J. (2022). Testing potential transfer effects in heritage and adult L2 bilinguals acquiring a mini grammar as an additional language: An ERP approach. Brain Sciences, 12: 669. doi:10.3390/brainsci12050669.

    Abstract

    Models on L3/Ln acquisition differ with respect to how they envisage degree (holistic vs. selective transfer of the L1, L2 or both) and/or timing (initial stages vs. development) of how the influence of source languages unfolds. This study uses EEG/ERPs to examine these models, bringing together two types of bilinguals: heritage speakers (HSs) (Italian-German, n = 15) compared to adult L2 learners (L1 German, L2 English, n = 28) learning L3/Ln Latin. Participants were trained on a selected Latin lexicon over two sessions and, afterward, on two grammatical properties: case (similar between German and Latin) and adjective–noun order (similar between Italian and Latin). Neurophysiological findings show an N200/N400 deflection for the HSs in case morphology and a P600 effect for the German L2 group in adjectival position. None of the current L3/Ln models predict the observed results, which questions the appropriateness of this methodology. Nevertheless, the results are illustrative of differences in how HSs and L2 learners approach the very initial stages of additional language learning, the implications of which are discussed.
  • Pereira Soares, S. M., Prystauka, Y., DeLuca, V., & Rothman, J. (2022). Type of bilingualism conditions individual differences in the oscillatory dynamics of inhibitory control. Frontiers in Human Neuroscience, 16: 910910. doi:10.3389/fnhum.2022.910910.

    Abstract

    The present study uses EEG time-frequency representations (TFRs) with a Flanker task to investigate if and how individual differences in bilingual language experience modulate neurocognitive outcomes (oscillatory dynamics) in two bilingual group types: late bilinguals (L2 learners) and early bilinguals (heritage speakers—HSs). TFRs were computed for both incongruent and congruent trials. The difference between the two (Flanker effect vis-à-vis cognitive interference) was then (1) compared between the HSs and the L2 learners, (2) modeled as a function of individual differences with bilingual experience within each group separately and (3) probed for its potential (a)symmetry between brain and behavioral data. We found no differences at the behavioral and neural levels for the between-groups comparisons. However, oscillatory dynamics (mainly theta increase and alpha suppression) of inhibition and cognitive control were found to be modulated by individual differences in bilingual language experience, albeit distinctly within each bilingual group. While the results indicate adaptations toward differential brain recruitment in line with bilingual language experience variation overall, this does not manifest uniformly. Rather, earlier versus later onset to bilingualism—the bilingual type—seems to constitute an independent qualifier to how individual differences play out.

    Additional information

    supplementary material
  • Perfors, A., & Kidd, E. (2022). The role of stimulus‐specific perceptual fluency in statistical learning. Cognitive Science, 46(2): e13100. doi:10.1111/cogs.13100.

    Abstract

    Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here, we present experimental work suggesting that at least some individual differences arise from stimulus-specific variation in perceptual fluency: the ability to rapidly or efficiently code and remember the stimuli that SL occurs over. Experiment 1 demonstrates that participants show improved SL when the stimuli are simple and familiar; Experiment 2 shows that this improvement is not evident for simple but unfamiliar stimuli; and Experiment 3 shows that for the same stimuli (Chinese characters), SL is higher for people who are familiar with them (Chinese speakers) than those who are not (English speakers matched on age and education level). Overall, our findings indicate that performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, test–retest correlations of performance in an SL task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with stimuli of different levels of familiarity. Finally, we demonstrate that SL performance is predicted by an independent measure of stimulus-specific perceptual fluency that contains no SL component at all. Our results suggest that a key component of SL performance may be related to stimulus-specific processing and familiarity.
  • Perniss, P. M., & Ozyurek, A. (2008). Representations of action, motion and location in sign space: A comparison of German (DGS) and Turkish (TID) sign language narratives. In J. Quer (Ed.), Signs of the time: Selected papers from TISLR 8 (pp. 353-376). Seedorf: Signum Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions in Kata Kolok (Bali). In Possessive and existential constructions in sign languages. Nijmegen: Ishara Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions: Introduction and overview. In Possessive and existential constructions in sign languages (pp. 1-31). Nijmegen: Ishara Press.
  • Petersson, K. M. (2008). On cognition, structured sequence processing, and adaptive dynamical systems. American Institute of Physics Conference Proceedings, 1060(1), 195-200.

    Abstract

    Cognitive neuroscience approaches the brain as a cognitive system: a system that functionally is conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.
  • Poletiek, F. H. (2008). Het probleem van escalerende beschuldigingen [Boekbespreking van Kindermishandeling door H. Crombag en den Hartog]. Maandblad voor Geestelijke Volksgezondheid, (2), 163-166.
  • Poort, E. D., & Rodd, J. M. (2022). Cross-lingual priming of cognates and interlingual homographs from L2 to L1. Glossa Psycholinguistics, 1(1): 11. doi:10.5070/G601147.

    Abstract

    Many word forms exist in multiple languages, and can have either the same meaning (cognates) or a different meaning (interlingual homographs). Previous experiments have shown that processing of interlingual homographs in a bilingual’s second language is slowed down by recent experience with these words in the bilingual’s native language, while processing of cognates can be speeded up (Poort et al., 2016; Poort & Rodd, 2019a). The current experiment replicated Poort and Rodd’s (2019a) Experiment 2 but switched the direction of priming: Dutch–English bilinguals (n = 106) made Dutch semantic relatedness judgements to probes related to cognates (n = 50), interlingual homographs (n = 50) and translation equivalents (n = 50) they had seen 15 minutes previously embedded in English sentences. The current experiment is the first to show that a single encounter with an interlingual homograph in one’s second language can also affect subsequent processing in one’s native language. Cross-lingual priming did not affect the cognates. The experiment also extended Poort and Rodd’s (2019a) finding of a large interlingual homograph inhibition effect in a semantic relatedness task from the participants’ L2 to their L1, but again found no evidence for a cognate facilitation effect in a semantic relatedness task. These findings extend the growing literature that emphasises the high level of interaction in a bilingual’s mental lexicon, by demonstrating the influence of L2 experience on the processing of L1 words. Data, scripts, materials and pre-registration available via https://osf.io/2swyg/?view_only=b2ba2e627f6f4eaeac87edab2b59b236.
  • Postema, A., Van Mierlo, H., Bakker, A. B., & Barendse, M. T. (2022). Study-to-sports spillover among competitive athletes: A field study. International Journal of Sport and Exercise Psychology. Advance online publication. doi:10.1080/1612197X.2022.2058054.

    Abstract

    Combining academics and athletics is challenging but important for the psychological and psychosocial development of those involved. However, little is known about how experiences in academics spill over and relate to athletics. Drawing on the enrichment mechanisms proposed by the Work-Home Resources model, we posit that study crafting behaviours are positively related to volatile personal resources, which, in turn, are related to higher athletic achievement. Via structural equation modelling, we examine a path model among 243 student-athletes, incorporating study crafting behaviours and personal resources (i.e., positive affect and study engagement), and self- and coach-rated athletic achievement measured two weeks later. Results show that optimising the academic environment by crafting challenging study demands relates positively to positive affect and study engagement. In turn, positive affect related positively to self-rated athletic achievement, whereas – unexpectedly – study engagement related negatively to coach-rated athletic achievement. Optimising the academic environment through cognitive crafting and crafting social study resources did not relate to athletic outcomes. We discuss how these findings offer new insights into the interplay between academics and athletics.
  • Poulton, V. R., & Nieuwland, M. S. (2022). Can you hear what’s coming? Failure to replicate ERP evidence for phonological prediction. Neurobiology of Language, 3(4), 556-574. doi:10.1162/nol_a_00078.

    Abstract

    Prediction-based theories of language comprehension assume that listeners predict both the meaning and phonological form of likely upcoming words. In alleged event-related potential (ERP) demonstrations of phonological prediction, prediction-mismatching words elicit a phonological mismatch negativity (PMN), a frontocentral negativity that precedes the centroparietal N400 component. However, the classification and replicability of the PMN have proven controversial, with ongoing debate on whether the PMN is a distinct component or merely an early part of the N400. In this electroencephalography (EEG) study, we therefore attempted to replicate the PMN effect and its separability from the N400, using a participant sample size (N = 48) that was more than double that of previous studies. Participants listened to sentences containing either a predictable word or an unpredictable word with/without phonological overlap with the predictable word. Preregistered analyses revealed a widely distributed negative-going ERP in response to unpredictable words in both the early (150–250 ms) and the N400 (300–500 ms) time windows. Bayes factor analysis yielded moderate evidence against a different scalp distribution of the effects in the two time windows. Although our findings do not speak against phonological prediction during sentence comprehension, they do speak against the PMN effect specifically as a marker of phonological prediction mismatch. Instead of a PMN effect, our results demonstrate the early onset of the auditory N400 effect associated with unpredictable words. Our failure to replicate further highlights the risk associated with commonly employed data-contingent analyses (e.g., analyses involving time windows or electrodes that were selected based on visual inspection) and small sample sizes in the cognitive neuroscience of language.
  • Pouw, W., & Holler, J. (2022). Timing in conversation is dynamically adjusted turn by turn in dyadic telephone conversations. Cognition, 222: 105015. doi:10.1016/j.cognition.2022.105015.

    Abstract

    Conversational turn taking in humans involves incredibly rapid responding. The timing mechanisms underpinning such responses have been heavily debated, including questions such as who is doing the timing. Similar to findings on rhythmic tapping to a metronome, we show that floor transfer offsets (FTOs) in telephone conversations are serially dependent, such that FTOs are lag-1 negatively autocorrelated. Finding this serial dependence on a turn-by-turn basis (lag-1) rather than on the basis of two or more turns, suggests a counter-adjustment mechanism operating at the level of the dyad in FTOs during telephone conversations, rather than a more individualistic self-adjustment within speakers. This finding, if replicated, has major implications for models describing turn taking, and confirms the joint, dyadic nature of human conversational dynamics. Future research is needed to see how pervasive serial dependencies in FTOs are, such as for example in richer communicative face-to-face contexts where visual signals affect conversational timing.
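    The lag-1 serial dependence reported here is, in essence, a negative correlation between each floor transfer offset and the one that follows it. As a purely illustrative sketch (hypothetical FTO values and variable names; not the authors' analysis pipeline), it could be computed as follows:

    import numpy as np

    def lag1_autocorrelation(x):
        """Pearson correlation between a series and the same series shifted by one turn."""
        x = np.asarray(x, dtype=float)
        return np.corrcoef(x[:-1], x[1:])[0, 1]

    # Hypothetical floor transfer offsets (ms) for successive turn transitions in one dyad.
    ftos = [120, -40, 300, 80, -150, 220, 10, -60, 180, 40]
    print(lag1_autocorrelation(ftos))  # a negative value indicates turn-by-turn counter-adjustment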
  • Pouw, W., & Dixon, J. A. (2022). What you hear and see specifies the perception of a limb-respiratory-vocal act. Proceedings of the Royal Society B: Biological Sciences, 289(1979): 20221026. doi:10.1098/rspb.2022.1026.
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2022). The importance of visual control and biomechanics in the regulation of gesture-speech synchrony for an individual deprived of proprioceptive feedback of body position. Scientific Reports, 12: 14775. doi:10.1038/s41598-022-18300-x.

    Abstract

    Do communicative actions such as gestures fundamentally differ in their control mechanisms from other actions? Evidence for such fundamental differences comes from a classic gesture-speech coordination experiment performed with a person (IW) with deafferentation (McNeill, 2005). Although IW has lost both his primary source of information about body position (i.e., proprioception) and discriminative touch from the neck down, his gesture-speech coordination has been reported to be largely unaffected, even if his vision is blocked. This is surprising because, without vision, his object-directed actions almost completely break down. We examine the hypothesis that IW’s gesture-speech coordination is supported by the biomechanical effects of gesturing on head posture and speech. We find that when vision is blocked, there are micro-scale increases in gesture-speech timing variability, consistent with IW’s reported experience that gesturing is difficult without vision. Supporting the hypothesis that IW exploits biomechanical consequences of the act of gesturing, we find that: (1) gestures with larger physical impulses co-occur with greater head movement, (2) gesture-speech synchrony relates to larger gesture-concurrent head movements (i.e. for bimanual gestures), (3) when vision is blocked, gestures generate more physical impulse, and (4) moments of acoustic prominence couple more with peaks of physical impulse when vision is blocked. It can be concluded that IW’s gesturing ability is not based on a specialized language-based feedforward control as originally concluded from previous research, but is still dependent on a varied means of recurrent feedback from the body.

    Additional information

    supplementary tables
  • Pouw, W., & Fuchs, S. (2022). Origins of vocal-entangled gesture. Neuroscience and Biobehavioral Reviews, 141: 104836. doi:10.1016/j.neubiorev.2022.104836.

    Abstract

    Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory–vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal–motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.
  • Preisig, B., & Hervais-Adelman, A. (2022). The predictive value of individual electric field modeling for transcranial alternating current stimulation induced brain modulation. Frontiers in Cellular Neuroscience, 16: 818703. doi:10.3389/fncel.2022.818703.

    Abstract

    There is considerable individual variability in the reported effectiveness of non-invasive brain stimulation. This variability has often been ascribed to differences in the neuroanatomy and resulting differences in the induced electric field inside the brain. In this study, we addressed the question of whether individual differences in the induced electric field can predict the neurophysiological and behavioral consequences of gamma band tACS. In a within-subject experiment, bi-hemispheric gamma band tACS and sham stimulation were applied in alternating blocks to the participants’ superior temporal lobe, while task-evoked auditory brain activity was measured with concurrent functional magnetic resonance imaging (fMRI) and a dichotic listening task. Gamma tACS was applied with different interhemispheric phase lags. In a recent study, we showed that anti-phase tACS (180° interhemispheric phase lag), but not in-phase tACS (0° interhemispheric phase lag), selectively modulates interhemispheric brain connectivity. Using a T1 structural image of each participant’s brain, an individual simulation of the induced electric field was computed. From these simulations, we derived two predictor variables: maximal strength (average of the 10,000 voxels with largest electric field values) and precision of the electric field (spatial correlation between the electric field and the task evoked brain activity during sham stimulation). We found considerable variability in the individual strength and precision of the electric fields. Importantly, the strength of the electric field over the right hemisphere predicted individual differences of tACS induced brain connectivity changes. Moreover, we found in both hemispheres a statistical trend for the effect of electric field strength on tACS induced BOLD signal changes. In contrast, the precision of the electric field did not predict any neurophysiological measure. Further, neither strength nor precision predicted interhemispheric integration. In conclusion, we found evidence for the dose-response relationship between individual differences in electric fields and tACS induced activity and connectivity changes in concurrent fMRI. However, the fact that this relationship was stronger in the right hemisphere suggests that the relationship between the electric field parameters, neurophysiology, and behavior may be more complex for bi-hemispheric tACS.
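    For readers unfamiliar with these predictor variables, the following sketch illustrates how the two quantities described above (maximal field strength as the mean of the 10,000 voxels with the largest electric field values, and precision as the spatial correlation between the field map and task-evoked activity) could be computed from voxel-wise arrays. The array shapes, masking, and random data are assumptions made for illustration, not the authors' actual pipeline.

    import numpy as np

    def field_strength(efield, n_top=10_000):
        """Mean of the n_top voxels with the largest electric field values."""
        return np.sort(np.ravel(efield))[-n_top:].mean()

    def field_precision(efield, task_activity):
        """Spatial (Pearson) correlation between the field map and task-evoked activity."""
        return np.corrcoef(np.ravel(efield), np.ravel(task_activity))[0, 1]

    # Hypothetical voxel-wise maps of matching shape (e.g., restricted to one hemisphere).
    rng = np.random.default_rng(0)
    efield = rng.random((50, 50, 50))
    bold_activity = rng.random((50, 50, 50))
    print(field_strength(efield), field_precision(efield, bold_activity))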
  • Preisig, B., Riecke, L., & Hervais-Adelman, A. (2022). Speech sound categorization: The contribution of non-auditory and auditory cortical regions. NeuroImage, 258: 119375. doi:10.1016/j.neuroimage.2022.119375.

    Abstract

    Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (the third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners’ syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.

    Additional information

    figures and table
  • Price, K. M., Wigg, K. G., Eising, E., Feng, Y., Blokland, K., Wilkinson, M., Kerr, E. N., Guger, S. L., Quantitative Trait Working Group of the GenLang Consortium, Fisher, S. E., Lovett, M. W., Strug, L. J., & Barr, C. L. (2022). Hypothesis-driven genome-wide association studies provide novel insights into genetics of reading disabilities. Translational Psychiatry, 12: 495. doi:10.1038/s41398-022-02250-z.

    Abstract

    Reading Disability (RD) is often characterized by difficulties in the phonology of the language. While the molecular mechanisms underlying it are largely undetermined, loci are being revealed by genome-wide association studies (GWAS). In a previous GWAS for word reading (Price, 2020), we observed that top single-nucleotide polymorphisms (SNPs) were located near to or in genes involved in neuronal migration/axon guidance (NM/AG) or loci implicated in autism spectrum disorder (ASD). A prominent theory of RD etiology posits that it involves disturbed neuronal migration, while potential links between RD-ASD have not been extensively investigated. To improve power to identify associated loci, we up-weighted variants involved in NM/AG or ASD, separately, and performed a new Hypothesis-Driven (HD)–GWAS. The approach was applied to a Toronto RD sample and a meta-analysis of the GenLang Consortium. For the Toronto sample (n = 624), no SNPs reached significance; however, by gene-set analysis, the joint contribution of ASD-related genes passed the threshold (p ≈ 1.45 × 10⁻², threshold = 2.5 × 10⁻²). For the GenLang Cohort (n = 26,558), SNPs in DOCK7 and CDH4 showed significant association for the NM/AG hypothesis (sFDR q = 1.02 × 10⁻²). To make the GenLang dataset more similar to Toronto, we repeated the analysis restricting to samples selected for reading/language deficits (n = 4152). In this GenLang selected subset, we found significant association for a locus intergenic between BTG3-C21orf91 for both hypotheses (sFDR q < 9.00 × 10⁻⁴). This study contributes candidate loci to the genetics of word reading. Data also suggest that, although different variants may be involved, alleles implicated in ASD risk may be found in the same genes as those implicated in word reading. This finding is limited to the Toronto sample suggesting that ascertainment influences genetic associations.
  • Proios, H., Asaridou, S. S., & Brugger, P. (2008). Random number generation in patients with aphasia: A test of executive functions. Acta Neuropsychologica, 6(2), 157-168.

    Abstract

    Randomization performance was studied using the "Mental Dice Task" in 20 patients with aphasia (APH) and 101 elderly normal control subjects (NC). The produced sequences were compared to 100 computer-generated pseudorandom sequences with respect to 7 measures of sequential bias. The performance of APH differed significantly from NC participants, according to all but one measure, i.e. Turning Point Index (points of change between ascending and descending sequences). NC participants differed significantly from the computer generated sequences, according to all measures of randomness. Finally, APH differed significantly from the computer simulator, according to all measures but mean Repetition Gap score (gap between a digit and its reoccurrence). Despite the heterogeneity of our APH group, there were no significant differences in randomization performance between patients with different language impairments. All the APH displayed a distinct performance profile, with more response stereotypy, counting tendencies, and inhibition problems, as hypothesised, while at the same time responding more randomly than NC by showing less of a cycling strategy and more number repetitions.
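    Two of the sequential measures named above have simple operational definitions given in the abstract (turning points as switches between ascending and descending runs; repetition gap as the distance between a digit and its reoccurrence). The sketch below illustrates those definitions on made-up responses; the exact scoring and normalisation conventions used in the study are assumptions, not reproduced from it.

    def turning_points(seq):
        """Count positions where the sequence switches between ascending and descending."""
        return sum(1 for prev, cur, nxt in zip(seq, seq[1:], seq[2:])
                   if (cur - prev) * (nxt - cur) < 0)

    def mean_repetition_gap(seq):
        """Average distance between successive occurrences of the same digit."""
        last_seen, gaps = {}, []
        for i, digit in enumerate(seq):
            if digit in last_seen:
                gaps.append(i - last_seen[digit])
            last_seen[digit] = i
        return sum(gaps) / len(gaps) if gaps else float("nan")

    digits = [3, 1, 4, 1, 5, 2, 6, 5, 3, 5]  # hypothetical 'mental dice' responses (1-6)
    print(turning_points(digits), mean_repetition_gap(digits))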
  • Rapold, C. J., & Widlok, T. (2008). Dimensions of variability in Northern Khoekhoe language and culture. Southern African Humanities, 20, 133-161. Retrieved from http://www.sahumanities.org.za/RapoldWidlok_203.aspx.

    Abstract

    This article takes an interdisciplinary route towards explaining the complex history of Hai//om culture and language. We begin this article with a short review of ideas relating to 'origins' and historical reconstructions as they are currently played out among Khoekhoe groups in Namibia, in particular with regard to the Hai//om. We then take a comparative look at parts of the kinship system and the tonology of ≠Âkhoe Hai//om and other variants of Khoekhoe. With regard to the kinship and naming system, we see patterns that show similarities with Nama and Damara on the one hand but also with 'San' groups on the other hand. With regard to tonology, new data from three northern Khoekhoe varieties shows similarities as well as differences with Standard Namibian Khoekhoe and Ju and Tuu varieties. The historical scenarios that might explain these facts suggest different centres of innovation and opposite directions of diffusion. The anthropological and linguistic data demonstrates that only a fine-grained and multi-layered approach that goes far beyond any simplistic dichotomies can do justice to the Hai//om riddle.
  • Rasenberg, M., Pouw, W., Özyürek, A., & Dingemanse, M. (2022). The multimodal nature of communicative efficiency in social interaction. Scientific Reports, 12: 19111. doi:10.1038/s41598-022-22883-w.

    Abstract

    How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair—a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.

    Additional information

    Data and analysis scripts
  • Rasenberg, M., Özyürek, A., Bögels, S., & Dingemanse, M. (2022). The primacy of multimodal alignment in converging on shared symbols for novel referents. Discourse Processes, 59(3), 209-236. doi:10.1080/0163853X.2021.1992235.

    Abstract

    When people establish shared symbols for novel objects or concepts, they have been shown to rely on the use of multiple communicative modalities as well as on alignment (i.e., cross-participant repetition of communicative behavior). Yet these interactional resources have rarely been studied together, so little is known about if and how people combine multiple modalities in alignment to achieve joint reference. To investigate this, we systematically track the emergence of lexical and gestural alignment in a referential communication task with novel objects. Quantitative analyses reveal that people frequently use a combination of lexical and gestural alignment, and that such multimodal alignment tends to emerge earlier compared to unimodal alignment. Qualitative analyses of the interactional contexts in which alignment emerges reveal how people flexibly deploy lexical and gestural alignment (independently, simultaneously or successively) to adjust to communicative pressures.
  • Ravignani, A., & Garcia, M. (2022). A cross-species framework to identify vocal learning abilities in mammals. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377: 20200394. doi:10.1098/rstb.2020.0394.

    Abstract

    Vocal production learning (VPL) is the experience-driven ability to produce novel vocal signals through imitation or modification of existing vocalizations. A parallel strand of research investigates acoustic allometry, namely how information about body size is conveyed by acoustic signals. Recently, we proposed that deviation from acoustic allometry principles as a result of sexual selection may have been an intermediate step towards the evolution of vocal learning abilities in mammals. Adopting a more hypothesis-neutral stance, here we perform phylogenetic regressions and other analyses further testing a potential link between VPL and being an allometric outlier. We find that multiple species belonging to VPL clades deviate from allometric scaling but in the opposite direction to that expected from size exaggeration mechanisms. In other words, our correlational approach finds an association between VPL and being an allometric outlier. However, the direction of this association, contra our original hypothesis, may indicate that VPL did not necessarily emerge via sexual selection for size exaggeration: VPL clades show higher vocalization frequencies than expected. In addition, our approach allows us to identify species with potential for VPL abilities: we hypothesize that those outliers from acoustic allometry lying above the regression line may be VPL species. Our results may help better understand the cross-species diversity, variability and aetiology of VPL, which among other things is a key underpinning of speech in our species.

    This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.

    Additional information

    Raw data Supplementary material
  • Ravignani, A., Asano, R., Valente, D., Ferretti, F., Hartmann, S., Hayashi, M., Jadoul, Y., Martins, M., Oseki, Y., Rodrigues, E. D., Vasileva, O., & Wacewicz, S. (Eds.). (2022). The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE). Nijmegen: Joint Conference on Language Evolution (JCoLE). doi:10.17617/2.3398549.
  • Ravignani, A. (2022). Language evolution: Sound meets gesture? [Review of the book From signal to symbol: The evolution of language by R. Planer and K. Sterelny]. Evolutionary Anthropology, 31, 317-318. doi:10.1002/evan.21961.
  • Raviv, L., Lupyan, G., & Green, S. C. (2022). How variability shapes learning and generalization. Trends in Cognitive Sciences, 26(6), 462-483. doi:10.1016/j.tics.2022.03.007.

    Abstract

    Learning is using past experiences to inform new behaviors and actions. Because all experiences are unique, learning always requires some generalization. An effective way of improving generalization is to expose learners to more variable (and thus often more representative) input. More variability tends to make initial learning more challenging, but eventually leads to more general and robust performance. This core principle has been repeatedly rediscovered and renamed in different domains (e.g., contextual diversity, desirable difficulties, variability of practice). Reviewing this basic result as it has been formulated in different domains allows us to identify key patterns, distinguish between different kinds of variability, discuss the roles of varying task-relevant versus irrelevant dimensions, and examine the effects of introducing variability at different points in training.
  • Raviv, L., Jacobson, S. L., Plotnik, J. M., Bowman, J., Lynch, V., & Benítez-Burraco, A. (2022). Elephants as a new animal model for studying the evolution of language as a result of self-domestication. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 606-608). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • Raviv, L., Peckre, L. R., & Boeckx, C. (2022). What is simple is actually quite complex: A critical note on terminology in the domain of language and communication. Journal of Comparative Psychology, 136(4), 215-220. doi:10.1037/com0000328.

    Abstract

    On the surface, the fields of animal communication and human linguistics have arrived at conflicting theories and conclusions with respect to the effect of social complexity on communicative complexity. For example, an increase in group size is argued to have opposite consequences on human versus animal communication systems: although an increase in human community size leads to some types of language simplification, an increase in animal group size leads to an increase in signal complexity. But do human and animal communication systems really show such a fundamental discrepancy? Our key message is that the tension between these two adjacent fields is the result of (a) a focus on different levels of analysis (namely, signal variation or grammar-like rules) and (b) an inconsistent use of terminology (namely, the terms “simple” and “complex”). By disentangling and clarifying these terms with respect to different measures of communicative complexity, we show that although animal and human communication systems indeed show some contradictory effects with respect to signal variability, they actually display essentially the same patterns with respect to grammar-like structure. This is despite the fact that the definitions of complexity and simplicity are actually aligned for signal variability, but diverge for grammatical structure. We conclude by advocating for the use of more objective and descriptive terms instead of terms such as “complexity,” which can be applied uniformly for human and animal communication systems—leading to comparable descriptions of findings across species and promoting a more productive dialogue between fields.
  • Razafindrazaka, H., & Brucato, N. (2008). Esclavage et diaspora Africaine. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 326-328). Issy-les-Moulineaux: Elsevier Masson.
  • Razafindrazaka, H., Brucato, N., & Mazières, S. (2008). Les Noirs marrons. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 319-320). Issy-les-Moulineaux: Elsevier Masson.
  • Redl, T., Szuba, A., de Swart, P., Frank, S. L., & de Hoop, H. (2022). Masculine generic pronouns as a gender cue in generic statements. Discourse Processes, 59, 828-845. doi:10.1080/0163853X.2022.2148071.

    Abstract

    An eye-tracking experiment was conducted with speakers of Dutch (N = 84, 36 male), a language that falls between grammatical and natural-gender languages. We tested whether a masculine generic pronoun causes a male bias when used in generic statements—that is, in the absence of a specific referent. We tested two types of generic statements by varying conceptual number, hypothesizing that the pronoun zijn “his” was more likely to cause a male bias with a conceptually singular than a conceptually plural antecedent (e.g., Someone (conceptually singular)/Everyone (conceptually plural) with perfect pitch can tune his instrument quickly). We found male participants to exhibit a male bias but with the conceptually singular antecedent only. Female participants showed no signs of a male bias. The results show that the generically intended masculine pronoun zijn “his” leads to a male bias in conceptually singular generic contexts but that this further depends on participant gender.

    Additional information

    Data availability
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2008). The strength of stress-related lexical competition depends on the presence of first-syllable stress. In Proceedings of Interspeech 2008 (pp. 1954-1954).

    Abstract

    Dutch listeners' looks to printed words were tracked while they listened to instructions to click with their mouse on one of them. When presented with targets from word pairs where the first two syllables were segmentally identical but differed in stress location, listeners used stress information to recognize the target before segmental information disambiguated the words. Furthermore, the amount of lexical competition was influenced by the presence or absence of word-initial stress.
  • Reinisch, E., & Bosker, H. R. (2022). Encoding speech rate in challenging listening conditions: White noise and reverberation. Attention, Perception & Psychophysics, 84, 2303-2318. doi:10.3758/s13414-022-02554-8.

    Abstract

    Temporal contrasts in speech are perceived relative to the speech rate of the surrounding context. That is, following a fast context sentence, listeners interpret a given target sound as longer than following a slow context, and vice versa. This rate effect, often referred to as “rate-dependent speech perception,” has been suggested to be the result of a robust, low-level perceptual process, typically examined in quiet laboratory settings. However, speech perception often occurs in more challenging listening conditions. Therefore, we asked whether rate-dependent perception would be (partially) compromised by signal degradation relative to a clear listening condition. Specifically, we tested effects of white noise and reverberation, with the latter specifically distorting temporal information. We hypothesized that signal degradation would reduce the precision of encoding the speech rate in the context and thereby reduce the rate effect relative to a clear context. This prediction was borne out for both types of degradation in Experiment 1, where the context sentences but not the subsequent target words were degraded. However, in Experiment 2, which compared rate effects when contexts and targets were coherent in terms of signal quality, no reduction of the rate effect was found. This suggests that, when confronted with coherently degraded signals, listeners adapt to challenging listening situations, eliminating the difference between rate-dependent perception in clear and degraded conditions. Overall, the present study contributes towards understanding the consequences of different types of listening environments on the functioning of low-level perceptual processes that listeners use during speech perception.

    Additional information

    Data availability
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2008). Lexical stress information modulates the time-course of spoken-word recognition. In Proceedings of Acoustics' 08 (pp. 3183-3188).

    Abstract

    Segmental as well as suprasegmental information is used by Dutch listeners to recognize words. The time-course of the effect of suprasegmental stress information on spoken-word recognition was investigated in a previous study, in which we tracked Dutch listeners' looks to arrays of four printed words as they listened to spoken sentences. Each target was displayed along with a competitor that did not differ segmentally in its first two syllables but differed in stress placement (e.g., 'CENtimeter' and 'sentiMENT'). The listeners' eye-movements showed that stress information is used to recognize the target before distinct segmental information is available. Here, we examine the role of durational information in this effect. Two experiments showed that initial-syllable duration, as a cue to lexical stress, is not interpreted dependent on the speaking rate of the preceding carrier sentence. This still held when other stress cues like pitch and amplitude were removed. Rather, the speaking rate of the preceding carrier affected the speed of word recognition globally, even though the rate of the target itself was not altered. Stress information modulated lexical competition, but did so independently of the rate of the preceding carrier, even if duration was the only stress cue present.
  • de Reus, K., Carlson, D., Lowry, A., Gross, S., Garcia, M., Rubio-García, A., Salazar-Casals, A., & Ravignani, A. (2022). Body size predicts vocal tract size in a mammalian vocal learner. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 154-156). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • de Reus, K., Carlson, D., Lowry, A., Gross, S., Garcia, M., Rubio-Garcia, A., Salazar-Casals, A., & Ravignani, A. (2022). Vocal tract allometry in a mammalian vocal learner. Journal of Experimental Biology, 225(8): jeb243766. doi:10.1242/jeb.243766.

    Abstract

    Acoustic allometry occurs when features of animal vocalisations can be predicted from body size measurements. Despite this being considered the norm, allometry sometimes breaks, resulting in species sounding smaller or larger than expected. A recent hypothesis suggests that allometry-breaking animals cluster into two groups: those with anatomical adaptations to their vocal tracts and those capable of learning new sounds (vocal learners). Here we test this hypothesis by probing vocal tract allometry in a proven mammalian vocal learner, the harbour seal (Phoca vitulina). We test whether vocal tract structures and body size scale allometrically in 68 individuals. We find that both body length and body weight accurately predict vocal tract length and one tracheal dimension. Independently, body length predicts vocal fold length while body weight predicts a second tracheal dimension. All vocal tract measures are larger in weaners than in pups and some structures are sexually dimorphic within age classes. We conclude that harbour seals do comply with allometric constraints, lending support to our hypothesis. However, allometry between body size and vocal fold length seems to emerge after puppyhood, suggesting that ontogeny may modulate the anatomy-learning distinction previously hypothesised as clear-cut. Species capable of producing non-allometric signals while their vocal tract scales allometrically, like seals, may then use non-morphological allometry-breaking mechanisms. We suggest that seals, and potentially other vocal learning mammals, may achieve allometry-breaking through developed neural control over their vocal organs.
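    As an aside for readers less familiar with allometric scaling: relations of the form y = a · x^b are conventionally assessed by regressing log-transformed measurements, as in the minimal sketch below. The numbers are invented and the study's actual models and covariates (age class, sex) are not reproduced here.

    import numpy as np

    # Hypothetical body lengths (cm) and vocal tract lengths (mm) for a few individuals.
    body_length_cm = np.array([95.0, 110.0, 120.0, 132.0, 140.0, 151.0, 160.0, 172.0])
    vocal_tract_mm = np.array([118.0, 131.0, 140.0, 150.0, 156.0, 165.0, 171.0, 182.0])

    # Allometry y = a * x**b becomes linear on log-log axes: log(y) = log(a) + b * log(x).
    b, log_a = np.polyfit(np.log(body_length_cm), np.log(vocal_tract_mm), 1)
    print(f"allometric exponent b ≈ {b:.2f}, scaling constant a ≈ {np.exp(log_a):.2f}")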
  • Rinker, T., Papadopoulou, D., Ávila-Varela, D., Bosch, J., Castro, S., Olioumtsevits, K., Pereira Soares, S. M., Wodniecka, Z., & Marinis, T. (2022). Does multilingualism bring benefits?: What do teachers think about multilingualism? The Multilingual Mind: Policy Reports 2022, 3. doi:10.48787/kops/352-2-1m7py02eqd0b56.
  • Roberts, L., Gullberg, M., & Indefrey, P. (2008). Online pronoun resolution in L2 discourse: L1 influence and general learner effects. Studies in Second Language Acquisition, 30(3), 333-357. doi:10.1017/S0272263108080480.

    Abstract

    This study investigates whether advanced second language (L2) learners of a nonnull subject language (Dutch) are influenced by their null subject first language (L1) (Turkish) in their offline and online resolution of subject pronouns in L2 discourse. To tease apart potential L1 effects from possible general L2 processing effects, we also tested a group of German L2 learners of Dutch who were predicted to perform like the native Dutch speakers. The two L2 groups differed in their offline interpretations of subject pronouns. The Turkish L2 learners exhibited a L1 influence, because approximately half the time they interpreted Dutch subject pronouns as they would overt pronouns in Turkish, whereas the German L2 learners performed like the Dutch controls, interpreting pronouns as coreferential with the current discourse topic. This L1 effect was not in evidence in eye-tracking data, however. Instead, the L2 learners patterned together, showing an online processing disadvantage when two potential antecedents for the pronoun were grammatically available in the discourse. This processing disadvantage was in evidence irrespective of the properties of the learners' L1 or their final interpretation of the pronoun. Therefore, the results of this study indicate both an effect of the L1 on the L2 in offline resolution and a general L2 processing effect in online subject pronoun resolution.
  • Roberts, L. (2008). Processing temporal constraints and some implications for the investigation of second language sentence processing and acquisition. Commentary on Baggio. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 57-61). Oxford: Blackwell.
  • Roberts, L. (2008). Processing temporal constraints and some implications for the investigation of second language sentence processing and acquisition. Commentary on Baggio. Language Learning, 58(suppl. 1), 57-61. doi:10.1111/j.1467-9922.2008.00461.x.
  • Roberts, L., Myles, F., & David, A. (Eds.). (2008). EUROSLA Yearbook 8. Amsterdam: John Benjamins.
  • Robotham, L., Trinkler, I., & Sauter, D. (2008). The power of positives: Evidence for an overall emotional recognition deficit in Huntington's disease [Abstract]. Journal of Neurology, Neurosurgery & Psychiatry, 79, A12.

    Abstract

    The recognition of the emotions of disgust, anger and fear has been shown to be significantly impaired in Huntington’s disease (e.g., Sprengelmeyer et al., 1997, 2006; Gray et al., 1997; Milders et al., 2003; Montagne et al., 2006; Johnson et al., 2007; De Gelder et al., 2008). The relative impairment of these emotions might have implied a recognition impairment specific to negative emotions. Could the asymmetric recognition deficits reflect not the complexity of the emotion but rather the complexity of the task? In the current study, 15 Huntington’s patients and 16 control subjects were presented with negative and positive non-speech emotional vocalisations that were to be identified as anger, fear, sadness, disgust, achievement, pleasure and amusement in a forced-choice paradigm. This experiment more accurately matched the negative emotions with positive emotions in a homogeneous modality. The resulting dually impaired ability of Huntington’s patients to identify negative and positive non-speech emotional vocalisations correctly provides evidence for an overall emotional recognition deficit in the disease. These results indicate that previous findings of a specificity in emotional recognition deficits might instead be due to the limitations of the visual modality. Previous experiments may have found an effect of emotional specificity due to the presence of a single positive emotion, happiness, in the midst of multiple negative emotions. In contrast with the previous literature, the study presented here points to a global deficit in the recognition of emotional sounds.
  • Roby, A. C., & Kidd, E. (2008). The referential communication skills of children with imaginary companions. Developmental Science, 11(4), 531-40. doi:10.1111/j.1467-7687.2008.00699.x.

    Abstract

    The present study investigated the referential communication skills of children with imaginary companions (ICs). Twenty-two children with ICs aged between 4 and 6 years were compared to 22 children without ICs (NICs). The children were matched for age, gender, birth order, number of siblings, and parental education. All children completed the Test of Referential Communication (Camaioni, Ercolani & Lloyd, 1995). The results showed that the children with ICs performed better than the children without ICs on the speaker component of the task. In particular, the IC children were better able to identify a specific referent to their interlocutor than were the NIC children. Furthermore, the IC children described less redundant features of the target picture than did the NIC children. The children did not differ in the listening comprehension component of the task. Overall, the results suggest that the IC children had a better understanding of their interlocutor’s information requirements in conversation. The role of pretend play in the development of communicative competence is discussed in light of these results.
  • Rohde, H., & Rubio-Fernández, P. (2022). Color interpretation is guided by informativity expectations, not by world knowledge about colors. Journal of Memory and Language, 127: 104371. doi:10.1016/j.jml.2022.104371.

    Abstract

    When people hear words for objects with prototypical colors (e.g., ‘banana’), they look at objects of the same color (e.g., lemon), suggesting a link in comprehension between objects and their prototypical colors. However, that link does not carry over to production: The experimental record also shows that when people speak, they tend to omit prototypical colors, using color adjectives when it is informative (e.g., when referring to clothes, which have no prototypical color). These findings yield an interesting prediction, which we tested here: while prior work shows that people look at yellow objects when hearing ‘banana’, they should look away from bananas when hearing ‘yellow’. The results of an offline sentence-completion task (N = 100) and an online eye-tracking task (N = 41) confirmed that when presented with truncated color descriptions (e.g., ‘Click on the yellow…’), people anticipate clothing items rather than stereotypical fruits. A corpus analysis ruled out the possibility that this association between color and clothing arises from simple context-free co-occurrence statistics. We conclude that comprehenders make linguistic predictions based not only on what they know about the world (e.g., which objects are yellow) but also on what speakers tend to say about the world (i.e., what content would be informative).

    Additional information

    supplementary data 1
  • Rojas-Berscia, L. M., Lehecka, T., Claassen, S. A., Peute, A. A. K., Escobedo, M. P., Escobedo, S. P., Tangoa, A. H., & Pizango, E. Y. (2022). Embedding in Shawi narrations: A quantitative analysis of embedding in a post-colonial Amazonian indigenous society. Language in Society, 51(3), 427-451. doi:10.1017/S0047404521000634.

    Abstract

    In this article, we provide the first quantitative account of the frequent use of embedding in Shawi, a Kawapanan language spoken in Peruvian Northwestern Amazonia. We collected a corpus of ninety-two Frog Stories (Mayer 1969) from three different field sites in 2015 and 2016. Using the glossed corpus as our data, we conducted a generalised mixed model analysis, where we predicted the use of embedding with several macrosocial variables, such as gender, age, and education level. We show that bilingualism (Amazonian Spanish-Shawi) and education, mostly restricted by complex gender differences in Shawi communities, play a significant role in the establishment of linguistic preferences in narration. Moreover, we argue that the use of embedding reflects the impact of the mestizo society from the nineteenth century until today in Santa Maria de Cahuapanas, reshaping not only Shawi demographics but also linguistic practices.
  • Rothman, J., Bayram, F., DeLuca, V., Di Pisa, G., Duñabeitia, J. A., Gharibi, K., Hao, J., Kolb, N., Kubota, M., Kupisch, T., Laméris, T., Luque, A., Van Osch, B., Pereira Soares, S. M., Prystauka, Y., Tat, D., Tomić, A., Voits, T., & Wulff, S. (2022). Monolingual comparative normativity in bilingualism research is out of “control”: Arguments and alternatives. Applied Psycholinguistics, 44(3), 316-329. doi:10.1017/S0142716422000315.

    Abstract

    Herein, we contextualize, problematize, and offer some insights for moving beyond the problem of monolingual comparative normativity in (psycho) linguistic research on bilingualism. We argue that, in the vast majority of cases, juxtaposing (functional) monolinguals to bilinguals fails to offer what the comparison is supposedly intended to do: meet the standards of empirical control in line with the scientific method. Instead, the default nature of monolingual comparative normativity has historically contributed to inequalities in many facets of bilingualism research and continues to impede progress on multiple levels. Beyond framing our views on the matter, we offer some epistemological considerations and methodological alternatives to this standard practice that improve empirical rigor while fostering increased diversity, inclusivity, and equity in our field.
  • De Rover, M., Petersson, K. M., Van der Werf, S. P., Cools, A. R., Berger, H. J., & Fernández, G. (2008). Neural correlates of strategic memory retrieval: Differentiating between spatial-associative and temporal-associative strategies. Human Brain Mapping, 29, 1068-1079. doi:10.1002/hbm.20445.

    Abstract

    Remembering complex, multidimensional information typically requires strategic memory retrieval, during which information is structured, for instance by spatial- or temporal associations. Although brain regions involved in strategic memory retrieval in general have been identified, differences in retrieval operations related to distinct retrieval strategies are not well-understood. Thus, our aim was to identify brain regions whose activity is differentially involved in spatial-associative and temporal-associative retrieval. First, we showed that our behavioral paradigm probing memory for a set of object-location associations promoted the use of a spatial-associative structure following an encoding condition that provided multiple associations to neighboring objects (spatial-associative condition) and the use of a temporal- associative structure following another study condition that provided predominantly temporal associations between sequentially presented items (temporal-associative condition). Next, we used an adapted version of this paradigm for functional MRI, where we contrasted brain activity related to the recall of object-location associations that were either encoded in the spatial- or the temporal-associative condition. In addition to brain regions generally involved in recall, we found that activity in higher-order visual regions, including the fusiform gyrus, the lingual gyrus, and the cuneus, was relatively enhanced when subjects used a spatial-associative structure for retrieval. In contrast, activity in the globus pallidus and the thalamus was relatively enhanced when subjects used a temporal-associative structure for retrieval. In conclusion, we provide evidence for differential involvement of these brain regions related to different types of strategic memory retrieval and the neural structures described play a role in either spatial-associative or temporal-associative memory retrieval.
  • Rubio-Fernandez, P., Long, M., Shukla, V., Bhatia, V., & Sinha, P. (2022). Visual perspective taking is not automatic in a simplified Dot task: Evidence from newly sighted children, primary school children and adults. Neuropsychologia, 172: 108256. doi:10.1016/j.neuropsychologia.2022.108256.

    Abstract

    In the Dot task, children and adults involuntarily compute an avatar’s visual perspective, which has been interpreted by some as automatic Theory of Mind. This interpretation has been challenged by other researchers arguing that the task reveals automatic attentional orienting. Here we tested a new interpretation of previous findings: the seemingly automatic processes revealed by the Dot task result from the high Executive Control demands of this verification paradigm, which taxes short-term memory and imposes perspective-switching costs. We tested this hypothesis in three experiments conducted in India with newly sighted children (Experiment 1; N = 5; all girls), neurotypical children (Experiment 2; ages 5–10; N = 90; 38 girls) and adults (Experiment 3; N = 30; 18 women) in a highly simplified version of the Dot task. No evidence of automatic perspective-taking was observed, although all groups revealed perspective-taking costs. A newly sighted child and the youngest children in our sample also showed an egocentric bias, which disappeared by age 10, confirming that visual perspective taking develops during the school years. We conclude that the standard Dot task imposes such methodological demands on both children and adults that the alleged evidence of automatic processes (either mindreading or domain general) may simply reveal limitations in Executive Control.

    Additional information

    1-s2.0-S0028393222001154-mmc1.docx
  • Rubio-Fernández, P., Shukla, V., Bhatia, V., Ben-Ami, S., & Sinha, P. (2022). Head turning is an effective cue for gaze following: Evidence from newly sighted individuals, school children and adults. Neuropsychologia, 174: 108330. doi:10.1016/j.neuropsychologia.2022.108330.

    Abstract

    In referential communication, gaze is often interpreted as a social cue that facilitates comprehension and enables word learning. Here we investigated the degree to which head turning facilitates gaze following. We presented participants with static pictures of a man looking at a target object in a first and third block of trials (pre- and post-intervention), while they saw short videos of the same man turning towards the target in the second block of trials (intervention). In Experiment 1, newly sighted individuals (treated for congenital cataracts; N = 8) benefited from the motion cues, both when comparing their initial performance with static gaze cues to their performance with dynamic head turning, and when comparing their performance with static cues before and after the videos. In Experiment 2, neurotypical school children (ages 5–10 years; N = 90) and adults (N = 30) also revealed improved performance with motion cues, although most participants had started to follow the static gaze cues before they saw the videos. Our results confirm that head turning is an effective social cue when interpreting new words, offering new insights for a pathways approach to development.
  • Rubio-Fernández, P., Wienholz, A., Ballard, C. M., Kirby, S., & Lieberman, A. M. (2022). Adjective position and referential efficiency in American Sign Language: Effects of adjective semantics, sign type and age of sign exposure. Journal of Memory and Language, 126: 104348. doi:10.1016/j.jml.2022.104348.

    Abstract

    Previous research has pointed at communicative efficiency as a possible constraint on language structure. Here we investigated adjective position in American Sign Language (ASL), a language with relatively flexible word order, to test the incremental efficiency hypothesis, according to which both speakers and signers try to produce efficient referential expressions that are sensitive to the word order of their languages. The results of three experiments using a standard referential communication task confirmed that deaf ASL signers tend to produce absolute adjectives, such as color or material, in prenominal position, while scalar adjectives tend to be produced in prenominal position when expressed as lexical signs, but in postnominal position when expressed as classifiers. Age of ASL exposure also had an effect on referential choice, with early-exposed signers producing more classifiers than late-exposed signers, in some cases. Overall, our results suggest that linguistic, pragmatic and developmental factors affect referential choice in ASL, supporting the hypothesis that communicative efficiency is an important factor in shaping language structure and use.
  • Rubio-Fernández, P. (2008). Concept narrowing: The role of context-independent information. Journal of Semantics, 25(4), 381-409. doi:10.1093/jos/ffn004.

    Abstract

    The present study aims to investigate the extent to which the process of lexical interpretation is context dependent. It has been uncontroversially agreed in psycholinguistics that interpretation is always affected by sentential context. The major debate in lexical processing research has revolved around the question of whether initial semantic activation is context sensitive or rather exhaustive, that is, whether the effect of context occurs before or only after the information associated with a concept has been accessed from the mental lexicon. However, within post-lexical access processes, the question of whether the selection of a word's meaning components is guided exclusively by contextual relevance, or whether certain meaning components might be selected context independently, has not been such an important focus of research. I have investigated this question in the two experiments reported in this paper and, moreover, have analysed the role that context-independent information in concepts might play in word interpretation. This analysis differs from previous studies on lexical processing in that it places experimental work in the context of a theoretical model of lexical pragmatics.
  • Rubio-Fernandez, P. (2022). Demonstrative systems: From linguistic typology to social cognition. Cognitive Psychology, 139: 101519. doi:10.1016/j.cogpsych.2022.101519.

    Abstract

    This study explores the connection between language and social cognition by empirically testing different typological analyses of various demonstrative systems. Linguistic typology classifies demonstrative systems as distance-oriented or person-oriented, depending on whether they indicate the location of a referent relative only to the speaker, or to both the speaker and the listener. From the perspective of social cognition, speakers of languages with person-oriented systems must monitor their listener’s spatial location in order to accurately use their demonstratives, while speakers of languages with distance-oriented systems can use demonstratives from their own, egocentric perspective. Resolving an ongoing controversy around the nature of the Spanish demonstrative system, the results of Experiment 1 confirmed that this demonstrative system is person oriented, while the English system is distance oriented. Experiment 2 revealed that not all three-way demonstrative systems are person oriented, with Japanese speakers showing sensitivity to the listener’s spatial location, while Turkish speakers did not show such an effect in their demonstrative choice. In Experiment 3, Catalan-Spanish bilinguals showed sensitivity to listener position in their choice of the Spanish distal form, but not in their choice of the medial form. These results were interpreted as a transfer effect from Catalan, which revealed analogous results to English. Experiment 4 investigated the use of demonstratives to redirect a listener’s attention to the intended referent, which is a universal function of demonstratives that also hinges on social cognition. Japanese and Spanish speakers chose between their proximal and distal demonstratives flexibly, depending on whether the listener was looking closer or further from the referent, whereas Turkish speakers chose their medial form for attention correction. In conclusion, the results of this study support the view that investigating how speakers of different languages jointly use language and social cognition in communication has the potential to unravel the deep connection between these two fundamentally human capacities.
  • De Rue, N. (2022). Phonological contrast and conflict in Dutch vowels: Neurobiological and psycholinguistic evidence from children and adults. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ruggeri, K., Panin, A., Vdovic, M., Većkalov, B., Abdul-Salaam, N., Achterberg, J., Akil, C., Amatya, J., Amatya, K., Andersen, T. L., Aquino, S. D., Arunasalam, A., Ashcroft-Jones, S., Askelund, A. D., Ayacaxli, N., Bagheri Sheshdeh, A., Bailey, A., Barea Arroyo, P., Basulto Mejía, G., Benvenuti, M., Berge, M. L., Bermaganbet, A., Bibilouri, K., Bjørndal, L. D., Black, S., Blomster Lyshol, J. K., Brik, T., Buabang, E. K., Burghart, M., Bursalıoğlu, A., Buzayu, N. M., Čadek, M., De Carvalho, N. M., Cazan, A.-M., Çetinçelik, M., Chai, V. E., Chen, P., Chen, S., Clay, G., D’Ambrogio, S., Damnjanović, K., Duffy, G., Dugue, T., Dwarkanath, T., Envuladu, E. A., Erceg, N., Esteban-Serna, C., Farahat, E., Farrokhnia, R. A., Fawad, M., Fedryansyah, M., Feng, D., Filippi, S., Fonollá, M. A., Freichel, R., Freira, L., Friedemann, M., Gao, Z., Ge, S., Geiger, S. J., George, L., Grabovski, I., Gracheva, A., Gracheva, A., Hajian, A., Hasan, N., Hecht, M., Hong, X., Hubená, B., Ikonomeas, A. G. F., Ilić, S., Izydorczyk, D., Jakob, L., Janssens, M., Jarke, H., Kácha, O., Kalinova, K. N., Kapingura, F. M., Karakasheva, R., Kasdan, D. O., Kemel, E., Khorrami, P., Krawiec, J. M., Lagidze, N., Lazarević, A., Lazić, A., Lee, H. S., Lep, Ž., Lins, S., Lofthus, I. S., Macchia, L., Mamede, S., Mamo, M. A., Maratkyzy, L., Mareva, S., Marwaha, S., McGill, L., McParland, S., Melnic, A., Meyer, S. A., Mizak, S., Mohammed, A., Mukhyshbayeva, A., Navajas, J., Neshevska, D., Niazi, S. J., Nieves, A. E. N., Nippold, F., Oberschulte, J., Otto, T., Pae, R., Panchelieva, T., Park, S. Y., Pascu, D. S., Pavlović, I., Petrović, M. B., Popović, D., Prinz, G. M., Rachev, N. R., Ranc, P., Razum, J., Rho, C. E., Riitsalu, L., Rocca, F., Rosenbaum, R. S., Rujimora, J., Rusyidi, B., Rutherford, C., Said, R., Sanguino, I., Sarikaya, A. K., Say, N., Schuck, J., Shiels, M., Shir, Y., Sievert, E. D. C., Soboleva, I., Solomonia, T., Soni, S., Soysal, I., Stablum, F., Sundström, F. T. A., Tang, X., Tavera, F., Taylor, J., Tebbe, A.-L., Thommesen, K. K., Tobias-Webb, J., Todsen, A. L., Toscano, F., Tran, T., Trinh, J., Turati, A., Ueda, K., Vacondio, M., Vakhitov, V., Valencia, A. J., Van Reyn, C., Venema, T. A. G., Verra, S. E., Vintr, J., Vranka, M. A., Wagner, L., Wu, X., Xing, K. Y., Xu, K., Xu, S., Yamada, Y., Yosifova, A., Zupan, Z., & García-Garzon, E. (2022). The globalizability of temporal discounting. Nature Human Behaviour, 6, 1386-1397. doi:10.1038/s41562-022-01392-w.

    Abstract

    Economic inequality is associated with preferences for smaller, immediate gains over larger, delayed ones. Such temporal discounting may feed into rising global inequality, yet it is unclear whether it is a function of choice preferences or norms, or rather the absence of sufficient resources for immediate needs. It is also not clear whether these reflect true differences in choice patterns between income groups. We tested temporal discounting and five intertemporal choice anomalies using local currencies and value standards in 61 countries (N = 13,629). Across a diverse sample, we found consistent, robust rates of choice anomalies. Lower-income groups were not significantly different, but economic inequality and broader financial circumstances were clearly correlated with population choice patterns.
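    The hyperbolic discounting model that typically underlies such intertemporal-choice measures is easy to illustrate. The sketch below is a generic illustration with hypothetical values, not the authors' analysis code: the present value V of an amount A delayed by D days is V = A / (1 + kD), where k is the individual discount rate.

    # Illustrative only: hyperbolic temporal discounting with hypothetical values,
    # not the analysis pipeline used in the study.
    def discounted_value(amount, delay_days, k):
        """Present subjective value of a delayed reward under hyperbolic discounting."""
        return amount / (1.0 + k * delay_days)

    def prefers_delayed(immediate, delayed, delay_days, k):
        """True if the delayed option's discounted value exceeds the immediate amount."""
        return discounted_value(delayed, delay_days, k) > immediate

    # A steep discounter (k = 0.05/day) takes 50 now over 100 in 30 days;
    # a shallow discounter (k = 0.005/day) waits for the larger amount.
    for k in (0.05, 0.005):
        print(k, "wait" if prefers_delayed(50, 100, 30, k) else "take now")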
  • De Ruiter, J. P., & Levinson, S. C. (2008). A biological infrastructure for communication underlies the cultural evolution of languages [Commentary on Christiansen & Chater: Language as shaped by the brain]. Behavioral and Brain Sciences, 31(5), 518-518. doi:10.1017/S0140525X08005086.

    Abstract

    Universal Grammar (UG) is indeed evolutionarily implausible. But if languages are just “adapted” to a large primate brain, it is hard to see why other primates do not have complex languages. The answer is that humans have evolved a specialized and uniquely human cognitive architecture, whose main function is to compute mappings between arbitrary signals and communicative intentions. This underlies the development of language in the human species.
  • De Ruiter, L. E. (2008). How useful are polynomials for analyzing intonation? In Proceedings of Interspeech 2008 (pp. 785-789).

    Abstract

    This paper presents the first application to German data of polynomial modeling as a means of validating phonological pitch accent labels. It is compared to traditional phonetic analysis (measuring minima, maxima, and alignment). The traditional method fares better in classification, but results are comparable in statistical accent-pair testing. Robustness tests show that pitch correction is necessary in both cases. The approaches are discussed in terms of their practicability, their applicability to other domains of research, and the interpretability of their results.
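    As a rough sketch of what polynomial modeling of an intonation contour involves (a generic illustration under assumed window and order choices, not the paper's implementation), an accent's F0 contour can be reduced to a handful of polynomial coefficients:

    # Generic sketch: summarizing an F0 contour with polynomial coefficients
    # (window normalization and polynomial order are assumptions, not the paper's settings).
    import numpy as np

    def polynomial_contour_features(f0_hz, order=3):
        """Fit a polynomial to an F0 contour over an accent window; the coefficients
        can then be compared across phonologically labelled pitch accents."""
        t = np.linspace(-1.0, 1.0, len(f0_hz))   # normalized time axis
        valid = ~np.isnan(f0_hz)                 # skip unvoiced frames
        return np.polyfit(t[valid], f0_hz[valid], order)

    # A rising-falling contour yields a clearly negative quadratic coefficient.
    contour = 180 + 40 * np.sin(np.linspace(0, np.pi, 25))
    print(polynomial_contour_features(contour, order=2))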
  • Sainburg, T., Mai, A., & Gentner, T. Q. (2022). Long-range sequential dependencies precede complex syntactic production in language acquisition. Proceedings of the Royal Society B: Biological Sciences, 289: 20212657. doi:10.1098/rspb.2021.2657.

    Abstract

    To convey meaning, human language relies on hierarchically organized, long-range relationships spanning words, phrases, sentences and discourse. As the distances between elements (e.g. phonemes, characters, words) in human language sequences increase, the strength of the long-range relationships between those elements decays following a power law. This power-law relationship has been attributed variously to long-range sequential organization present in human language syntax, semantics and discourse structure. However, non-linguistic behaviours in numerous phylogenetically distant species, ranging from humpback whale song to fruit fly motility, also demonstrate similar long-range statistical dependencies. Therefore, we hypothesized that long-range statistical dependencies in human speech may occur independently of linguistic structure. To test this hypothesis, we measured long-range dependencies in several speech corpora from children (aged 6 months–12 years). We find that adult-like power-law statistical dependencies are present in human vocalizations at the earliest detectable ages, prior to the production of complex linguistic structure. These linguistic structures cannot, therefore, be the sole cause of long-range statistical dependencies in language.
  • Salazar-Casals, A., de Reus, K., Greskewitz, N., Havermans, J., Geut, M., Villanueva, S., & Rubio-Garcia, A. (2022). Increased incidence of entanglements and ingested marine debris in Dutch seals from 2010 to 2020. Oceans, 3(3), 389-400. doi:10.3390/oceans3030026.

    Abstract

    In recent decades, the amount of marine debris has increased in our oceans. As wildlife interactions with debris increase, so does the number of entangled animals, impairing normal behavior and potentially affecting the survival of these individuals. The current study summarizes data on two phocid species, harbor (Phoca vitulina) and gray seals (Halichoerus grypus), affected by marine debris in Dutch waters from 2010 to 2020. The findings indicate that the annual entanglement rate (13.2 entanglements/year) has quadrupled compared with previous studies. Young seals, particularly gray seals, are the most affected individuals, with most animals found or sighted with fishing nets wrapped around their necks. Interestingly, harbor seals showed a higher incidence of ingested debris. Species differences with regard to behavior, foraging strategies, and habitat preferences may explain these findings. The lack of consistency across reports suggests that it is important to standardize data collection from now on. Despite increased public awareness about the adverse environmental effects of marine debris, more initiatives and policies are needed to ensure the protection of the marine environment in the Netherlands.
  • Sauter, D., Eisner, F., Rosen, S., & Scott, S. K. (2008). The role of source and filter cues in emotion recognition in speech [Abstract]. Journal of the Acoustical Society of America, 123, 3739-3740.

    Abstract

    In the context of the source-filter theory of speech, it is well established that intelligibility is heavily reliant on information carried by the filter, that is, spectral cues (e.g., Faulkner et al., 2001; Shannon et al., 1995). However, the extraction of other types of information in the speech signal, such as emotion and identity, is less well understood. In this study we investigated the extent to which emotion recognition in speech depends on filter-dependent cues, using a forced-choice emotion identification task at ten levels of noise-vocoding ranging between one and 32 channels. In addition, participants performed a speech intelligibility task with the same stimuli. Our results indicate that compared to speech intelligibility, emotion recognition relies less on spectral information and more on cues typically signaled by source variations, such as voice pitch, voice quality, and intensity. We suggest that, while the reliance on spectral dynamics is likely a unique aspect of human speech, greater phylogenetic continuity across species may be found in the communication of affect in vocalizations.
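    A simplified noise vocoder of the kind referred to above can be sketched as follows; the band edges, filter order and envelope method are assumptions, not the stimulus-generation settings used in the study:

    # Simplified noise-vocoder sketch: split the signal into log-spaced bands, extract each
    # band's amplitude envelope, and use it to modulate band-limited noise. This preserves
    # "source"-like dynamics while degrading spectral ("filter") detail.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=7000.0):
        rng = np.random.default_rng(0)
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
        out = np.zeros(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)
            envelope = np.abs(hilbert(band))               # amplitude envelope of this band
            carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))  # band-limited noise
            out += envelope * carrier
        return out / (np.max(np.abs(out)) + 1e-12)

    # Usage, assuming `speech` is a mono float array sampled at 16 kHz:
    # vocoded_4ch = noise_vocode(speech, fs=16000, n_channels=4)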
  • Sauter, D. (2008). The time-course of emotional voice processing [Abstract]. Neurocase, 14, 455-455.

    Abstract

    Research using event-related brain potentials (ERPs) has demonstrated an early differential effect in fronto-central regions when processing emotional, as compared to affectively neutral, facial stimuli (e.g., Eimer & Holmes, 2002). In this talk, data demonstrating a similar effect in the auditory domain will be presented. ERPs were recorded in a one-back task in which participants had to identify immediate repetitions of emotion category, such as a fearful sound followed by another fearful sound. The stimulus set consisted of non-verbal emotional vocalisations communicating positive and negative emotions, as well as neutral baseline conditions. As in the facial domain, fear sounds, as compared to acoustically controlled neutral sounds, elicited a frontally distributed positivity with an onset latency of about 150 ms after stimulus onset. These data suggest the existence of a rapid multi-modal fronto-central mechanism discriminating emotional from non-emotional human signals.
  • Scharenborg, O., & Cooke, M. P. (2008). Comparing human and machine recognition performance on a VCV corpus. In ISCA Tutorial and Research Workshop (ITRW) on "Speech Analysis and Processing for Knowledge Discovery".

    Abstract

    Listeners outperform ASR systems in every speech recognition task. However, it is not clear where this human advantage originates. This paper investigates the role of acoustic feature representations. We test four acoustic representations (MFCCs, PLPs, Mel filterbanks, and rate maps), with and without ‘pitch’ information, using the same backend. The results are compared with listener results at the level of articulatory feature classification. While no acoustic feature representation reached the level of human performance, both MFCCs and rate maps achieved good scores, with rate maps nearing human performance on the classification of voicing. Comparing the results on the articulatory features that were most difficult to classify showed similarities between the humans and the SVMs: e.g., ‘dental’ was by far the least well identified by both groups. Overall, adding pitch information seemed to hamper classification performance.
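    A minimal sketch of this kind of "same backend, different front end" comparison, assuming librosa and scikit-learn and using synthetic stand-ins for the audio and the articulatory-feature labels (a real test would require an annotated VCV corpus):

    # Sketch: one acoustic front end (MFCCs) feeding a fixed SVM backend.
    # Audio and labels below are synthetic placeholders, not the study's materials.
    import numpy as np
    import librosa
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    sr = 16000
    audio = rng.standard_normal(sr * 2)                       # 2 s of noise as stand-in audio
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T  # frames x coefficients
    voicing = rng.integers(0, 2, size=len(mfcc))              # placeholder voiced/voiceless labels

    # Swapping `mfcc` for PLPs, filterbanks or rate maps while keeping the SVC fixed
    # isolates the contribution of the feature representation.
    print(cross_val_score(SVC(kernel="rbf"), mfcc, voicing, cv=5).mean())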
  • Scharenborg, O. (2008). Modelling fine-phonetic detail in a computational model of word recognition. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1473-1476). ISCA Archive.

    Abstract

    There is now considerable evidence that fine-grained acoustic-phonetic detail in the speech signal helps listeners to segment a speech signal into syllables and words. In this paper, we compare two computational models of word recognition on their ability to capture and use this fine-phonetic detail during speech recognition. One model, SpeM, is phoneme-based, whereas the other, the newly developed Fine-Tracker, is based on articulatory features. Simulations dealt with modelling the ability of listeners to distinguish short words (e.g., ‘ham’) from the longer words in which they are embedded (e.g., ‘hamster’). The simulations with Fine-Tracker showed that it was, like human listeners, able to distinguish short words from the longer words in which they are embedded. This suggests that it is possible to extract this fine-phonetic detail from the speech signal and use it during word recognition.
  • Scheeringa, R., Bastiaansen, M. C. M., Petersson, K. M., Oostenveld, R., Norris, D. G., & Hagoort, P. (2008). Frontal theta EEG activity correlates negatively with the default mode network in resting state. International Journal of Psychophysiology, 67, 242-251. doi:10.1016/j.ijpsycho.2007.05.017.

    Abstract

    We used simultaneously recorded EEG and fMRI to investigate in which areas the BOLD signal correlates with frontal theta power changes, while subjects were lying quietly at rest in the scanner with their eyes open. To obtain a reliable estimate of frontal theta power, we applied ICA to band-pass filtered (2–9 Hz) EEG data. For each subject we selected the component that best matched the mid-frontal scalp topography associated with the frontal theta rhythm. We applied a time-frequency analysis to this component and used the time course of the frequency bin with the highest overall power to form a regressor that modeled spontaneous fluctuations in frontal theta power. No significant positive BOLD correlations with this regressor were observed. Extensive negative correlations were observed in the areas that together form the default mode network. We conclude that frontal theta activity can be seen as an EEG index of default mode network activity.
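    The regressor-construction step described here can be sketched with generic tools (this is not the authors' pipeline; the chosen component index and the HRF kernel are assumed inputs, and component selection by topography matching is left out):

    # Rough sketch: band-pass the EEG, unmix with ICA, take a frontal component's
    # theta power envelope and convolve it with an HRF to form a BOLD regressor.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert
    from sklearn.decomposition import FastICA

    def theta_power_regressor(eeg, fs, frontal_component, hrf):
        """eeg: channels x samples; returns a z-scored, HRF-convolved theta power time course."""
        sos = butter(4, [2.0, 9.0], btype="bandpass", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, eeg, axis=1)
        ica = FastICA(n_components=filtered.shape[0], random_state=0, max_iter=1000)
        sources = ica.fit_transform(filtered.T).T      # components x samples
        theta = sources[frontal_component]             # assumed: chosen by matching a mid-frontal topography
        power = np.abs(hilbert(theta)) ** 2            # instantaneous theta power
        regressor = np.convolve(power, hrf)[: power.size]
        return (regressor - regressor.mean()) / regressor.std()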
  • Schimke, S., Verhagen, J., & Dimroth, C. (2008). Particules additives et finitude en néerlandais et allemand L2: Étude expérimentale. Acquisition et Interaction en Langue Étrangère, 26, 191-210.

    Abstract

    This study addresses the question of whether there is a relationship between the equivalents of the topic-related additive particles 'also' and 'again' and finiteness in the learner varieties of Turkish-speaking learners of Dutch and German. In data obtained with a controlled task, we observe that finiteness is marked less frequently in utterances containing these particles than in comparable utterances that do not contain them. This holds both for the marking of finiteness on lexical verbs and for the presence of finite verbs without lexical content, such as the copula. Moreover, we show that the particles can precede the finite verb in the learners' language. These results can be explained by the functional similarity between finiteness and topic-related particles.
  • Schlag, F., Allegrini, A. G., Buitelaar, J., Verhoef, E., Van Donkelaar, M. M. J., Plomin, R., Rimfeld, K., Fisher, S. E., & St Pourcain, B. (2022). Polygenic risk for mental disorder reveals distinct association profiles across social behaviour in the general population. Molecular Psychiatry, 27, 1588-1598. doi:10.1038/s41380-021-01419-0.

    Abstract

    Many mental health conditions present a spectrum of social difficulties that overlaps with social behaviour in the general population, including shared but little-characterised genetic links. Here, we systematically investigate heterogeneity in shared genetic liabilities with attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASD), bipolar disorder (BP), major depression (MD) and schizophrenia across a spectrum of different social symptoms. Longitudinally assessed low-prosociality and peer-problem scores in two UK population-based cohorts (4–17 years; parent- and teacher-reports; Avon Longitudinal Study of Parents and Children (ALSPAC): N ≤ 6,174; Twins Early Development Study (TEDS): N ≤ 7,112) were regressed on polygenic risk scores for disorder, as informed by genome-wide summary statistics from large consortia, using negative binomial regression models. Across ALSPAC and TEDS, we replicated univariate polygenic associations between social behaviour and risk for ADHD, MD and schizophrenia. Modelling variation in univariate genetic effects jointly using random-effect meta-regression revealed evidence for polygenic links between social behaviour and ADHD, ASD, MD, and schizophrenia risk, but not BP. Differences in age, reporter and social trait captured 45–88% of univariate effect variation. Cross-disorder adjusted analyses demonstrated that age-related heterogeneity in univariate effects is shared across mental health conditions, while reporter- and social trait-specific heterogeneity captures disorder-specific profiles. In particular, ADHD, MD, and ASD polygenic risk were more strongly linked to peer problems than to low prosociality, while schizophrenia was associated with low prosociality only. The identified association profiles suggest differences in the social genetic architecture across mental disorders when investigating polygenic overlap with population-based social symptoms spanning 13 years of child and adolescent development.
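    The core regression step can be sketched with statsmodels on simulated data (covariates, cohort structure and the actual polygenic scoring are omitted; this is not the authors' model code):

    # Sketch: count-like social-behaviour scores regressed on a polygenic risk score
    # with a negative binomial GLM (simulated data; covariates such as age, sex and
    # ancestry principal components used in the actual analyses are omitted).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000
    prs = rng.standard_normal(n)                                   # standardized polygenic risk score
    rate = np.exp(0.1 + 0.15 * prs)                                # assumed log-linear relationship
    peer_problems = rng.negative_binomial(n=2, p=2 / (2 + rate))   # overdispersed counts

    X = sm.add_constant(prs)
    model = sm.GLM(peer_problems, X, family=sm.families.NegativeBinomial(alpha=0.5))
    print(model.fit().summary().tables[1])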
  • Schmidt, T., Duncan, S., Ehmer, O., Hoyt, J., Kipp, M., Loehr, D., Magnusson, M., Rose, T., & Sloetjes, H. (2008). An exchange format for multimodal annotations. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    This paper presents the results of a joint effort by a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools.
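    For readers unfamiliar with the annotation graph formalism the format builds on, a toy rendering (time-anchored nodes connected by labelled arcs, one tier per modality) might look as follows; this is a generic illustration, not the exchange format's actual schema:

    # Toy annotation graph: labelled arcs between time-anchored nodes
    # (illustrative only; not the proposed exchange format itself).
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Node:
        node_id: str
        time: float  # seconds

    @dataclass
    class Arc:
        start: Node
        end: Node
        tier: str    # e.g. "speech", "gesture"
        label: str

    @dataclass
    class AnnotationGraph:
        arcs: list = field(default_factory=list)

        def add(self, start, end, tier, label):
            self.arcs.append(Arc(start, end, tier, label))

        def tier_labels(self, tier):
            return [(a.start.time, a.end.time, a.label) for a in self.arcs if a.tier == tier]

    g = AnnotationGraph()
    n1, n2, n3 = Node("n1", 0.00), Node("n2", 0.42), Node("n3", 0.80)
    g.add(n1, n2, "speech", "hello")
    g.add(n1, n3, "gesture", "wave")
    print(g.tier_labels("gesture"))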
