Akamine, S., Ghaleb, E., Rasenberg, M., Fernandez, R., Meyer, A. S., & Özyürek, A. (2024). Speakers align both their gestures and words not only to establish but also to maintain reference to create shared labels for novel objects in interaction. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2435-2442).
Abstract
When we communicate with others, we often repeat aspects of each other's communicative behavior such as sentence structures and words. Such behavioral alignment has been mostly studied for speech or text. Yet, language use is mostly multimodal, flexibly using speech and gestures to convey messages. Here, we explore the use of alignment in speech (words) and co-speech gestures (iconic gestures) in a referential communication task aimed at finding labels for novel objects in interaction. In particular, we investigate how people flexibly use lexical and gestural alignment to create shared labels for novel objects and whether alignment in speech and gesture are related over time. The present study shows that interlocutors establish shared labels multimodally, and alignment in words and iconic gestures are used throughout the interaction. We also show that the amount of lexical alignment positively associates with the amount of gestural alignment over time, suggesting a close relationship between alignment in the vocal and manual modalities.
Additional information
link to eScholarship -
Baths, V., Jartarkar, M., Sood, S., Lewis, A. G., Ostarek, M., & Huettig, F. (2024). Testing the involvement of low-level visual representations during spoken word processing with non-Western students and meditators practicing Sudarshan Kriya Yoga. Brain Research, 1838: 148993. doi:10.1016/j.brainres.2024.148993.
Abstract
Previous studies, using the Continuous Flash Suppression (CFS) paradigm, observed that (Western) university students are better able to detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Here we attempted to replicate this effect with non-Western university students in Goa (India). A second aim was to explore the performance of (non-Western) meditators practicing Sudarshan Kriya Yoga in Goa in the same task. Some previous literature suggests that meditators may excel in some tasks that tap visual attention, for example by exercising better endogenous and exogenous control of visual awareness than non-meditators. The present study replicated the finding that congruent spoken cue words lead to significantly higher detection sensitivity than incongruent cue words in non-Western university students. Our exploratory meditator group also showed this detection effect but both frequentist and Bayesian analyses suggest that the practice of meditation did not modulate it. Overall, our results provide further support for the notion that spoken words can activate low-level category-specific visual features that boost the basic capacity to detect the presence of a visual stimulus that has those features. Further research is required to conclusively test whether meditation can modulate visual detection abilities in CFS and similar tasks. -
Corps, R. E., & Pickering, M. (2024). Response planning during question-answering: Does deciding what to say involve deciding how to say it? Psychonomic Bulletin & Review, 31, 839-848. doi:10.3758/s13423-023-02382-3.
Abstract
To answer a question, speakers must determine their response and formulate it in words. But do they decide on a response before formulation, or do they formulate different potential answers before selecting one? We addressed this issue in a verbal question-answering experiment. Participants answered questions more quickly when they had one potential answer (e.g., Which tourist attraction in Paris is very tall?) than when they had multiple potential answers (e.g., What is the name of a Shakespeare play?). Participants also answered more quickly when the set of potential answers was on average short rather than long, regardless of whether there was only one or multiple potential answers. Thus, participants were not affected by the linguistic complexity of unselected but plausible answers. These findings suggest that participants select a single answer before formulation.
Additional information
Raw data, analysis code, and study materials are available here -
Corps, R. E., & Pickering, M. (2024). The role of answer content and length when preparing answers to questions. Scientific Reports, 14: 17110. doi:10.1038/s41598-024-68253-6.
Abstract
Research suggests that interlocutors manage the timing demands of conversation by preparing what they want to say early. In three experiments, we used a verbal question-answering task to investigate what aspects of their response speakers prepare early. In all three experiments, participants answered more quickly when the critical content (here, barks) necessary for answer preparation occurred early (e.g., Which animal barks and is also a common household pet?) rather than late (e.g., Which animal is a common household pet and also barks?). In the individual experiments, we found no convincing evidence that participants were slower to produce longer answers, consisting of multiple words, than shorter answers, consisting of a single word. There was also no interaction between these two factors. A combined analysis of the first two experiments confirmed this lack of interaction, and demonstrated that participants were faster to answer questions when the critical content was available early rather than late and when the answer was short rather than long. These findings provide tentative evidence for an account in which interlocutors prepare the content of their answer as soon as they can, but sometimes do not prepare its length (and thus form) until they are ready to speak.
Additional information
supplementary tables -
Corps, R. E., & Meyer, A. S. (2024). The influence of familiarisation and item repetition on the name agreement effect in picture naming. Quarterly Journal of Experimental Psychology. Advance online publication. doi:10.1177/17470218241274661.
Abstract
Name agreement (NA) refers to the degree to which speakers agree on a picture’s name. A robust finding is that speakers are faster to name pictures with high agreement (HA) than those with low agreement (LA). This NA effect is thought to occur because LA pictures strongly activate several names, and so speakers need time to select one. HA pictures, in contrast, strongly activate a single name and so there is no need to select one name out of several alternatives. Recent models of lexical access suggest that the structure of the mental lexicon changes with experience. Thus, speakers should consider a range of names when naming LA pictures, but the extent to which they consider each of these names should change with experience. We tested these hypotheses in two picture-naming experiments. In Experiment 1, participants were slower to name LA than HA pictures when they named each picture once. Importantly, they were faster to produce modal names (provided by most participants) than alternative names for LA pictures, consistent with the view that speakers activate multiple names for LA pictures. In Experiment 2, participants were familiarised with the modal name before the experiment and named each picture three times. Although there was still an NA effect when participants named the pictures the first time, it was reduced in comparison to Experiment 1 and was further reduced with each picture repetition. Thus, familiarisation and repetition reduced the NA effect, but did not eliminate it, suggesting that speakers activate a range of plausible names. -
Cos, F., Bujok, R., & Bosker, H. R. (2024). Test-retest reliability of audiovisual lexical stress perception after >1.5 years. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 871-875). doi:10.21437/SpeechProsody.2024-176.
Abstract
In natural communication, we typically both see and hear our conversation partner. Speech comprehension thus requires the integration of auditory and visual information from the speech signal. This is for instance evidenced by the manual McGurk effect, where the perception of lexical stress is biased towards the syllable that has a beat gesture aligned to it. However, there is considerable individual variation in how heavily gestural timing is weighed as a cue to stress. To assess within-individual consistency, this study investigated the test-retest reliability of the manual McGurk effect. We reran an earlier manual McGurk experiment with the same participants, over 1.5 years later. At the group level, we successfully replicated the manual McGurk effect with a similar effect size. However, the by-participant effect sizes in the two identical experiments were only weakly correlated, suggesting that the weighing of gestural information in the perception of lexical stress is stable at the group level, but less so in individuals. Findings are discussed in comparison to other measures of audiovisual integration in speech perception. Index Terms: Audiovisual integration, beat gestures, lexical stress, test-retest reliability -
Ekerdt, C., Menks, W. M., Fernández, G., McQueen, J. M., Takashima, A., & Janzen, G. (2024). White matter connectivity linked to novel word learning in children. Brain Structure & Function, 229, 2461-2477. doi:10.1007/s00429-024-02857-6.
Abstract
Children and adults are excellent word learners. Increasing evidence suggests that the neural mechanisms that allow us to learn words change with age. In a recent fMRI study from our group, several brain regions exhibited age-related differences when accessing newly learned words in a second language (L2; Takashima et al. Dev Cogn Neurosci 37, 2019). Namely, while the Teen group (aged 14–16 years) activated more left frontal and parietal regions, the Young group (aged 8–10 years) activated right frontal and parietal regions. In the current study we analyzed the structural connectivity data from the aforementioned study, examining the white matter connectivity of the regions that showed age-related functional activation differences. Age group differences in streamline density as well as correlations with L2 word learning success and their interaction were examined. The Teen group showed stronger connectivity than the Young group in the right arcuate fasciculus (AF). Furthermore, white matter connectivity and memory for L2 words across the two age groups correlated in the left AF and the right anterior thalamic radiation (ATR) such that higher connectivity in the left AF and lower connectivity in the right ATR was related to better memory for L2 words. Additionally, connectivity in the area of the right AF that exhibited age-related differences predicted word learning success. The finding that across the two age groups, stronger connectivity is related to better memory for words lends further support to the hypothesis that the prolonged maturation of the prefrontal cortex, here in the form of structural connectivity, plays an important role in the development of memory.
Additional information
supplementary information -
Frances, C. (2024). Good enough processing: What have we learned in the 20 years since Ferreira et al. (2002)? Frontiers in Psychology, 15: 1323700. doi:10.3389/fpsyg.2024.1323700.
Abstract
Traditionally, language processing has been thought of in terms of complete processing of the input. In contrast to this, Ferreira and colleagues put forth the idea of good enough processing. The proposal was that during everyday processing, ambiguities remain unresolved, we rely on heuristics instead of full analyses, and we carry out deep processing only if we need to for the task at hand. This idea has gathered substantial traction since its conception. In the current work, I review the papers that have tested the three key claims of good enough processing: ambiguities remain unresolved and underspecified, we use heuristics to parse sentences, and deep processing is only carried out if required by the task. I find mixed evidence for these claims and conclude with an appeal to further refinement of the claims and predictions of the theory. -
He, J., Frances, C., Creemers, A., & Brehm, L. (2024). Effects of irrelevant unintelligible and intelligible background speech on spoken language production. Quarterly Journal of Experimental Psychology, 77(8), 1745-1769. doi:10.1177/17470218231219971.
Abstract
Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top–down attentional engagement. -
Giglio, L., Hagoort, P., & Ostarek, M. (2024). Neural encoding of semantic structures during sentence production. Cerebral Cortex, 34(12): bhae482. doi:10.1093/cercor/bhae482.
Abstract
The neural representations for compositional processing have so far been mostly studied during sentence comprehension. In an fMRI study of sentence production, we investigated the brain representations for compositional processing during speaking. We used a rapid serial visual presentation sentence recall paradigm to elicit sentence production from the conceptual memory of an event. With voxel-wise encoding models, we probed the specificity of the compositional structure built during the production of each sentence, comparing an unstructured model of word meaning without relational information with a model that encodes abstract thematic relations and a model encoding event-specific relational structure. Whole-brain analyses revealed that sentence meaning at different levels of specificity was encoded in a large left frontal-parietal-temporal network. A comparison with semantic structures composed during the comprehension of the same sentences showed similarly distributed brain activity patterns. An ROI analysis over left fronto-temporal language parcels showed that event-specific relational structure above word-specific information was encoded in the left inferior frontal gyrus. Overall, we found evidence for the encoding of sentence meaning during sentence production in a distributed brain network and for the encoding of event-specific semantic structures in the left inferior frontal gyrus.
Additional information
supplementary information -
Hintz, F., McQueen, J. M., & Meyer, A. S. (2024). Using psychometric network analysis to examine the components of spoken word recognition. Journal of Cognition, 7(1): 10. doi:10.5334/joc.340.
Abstract
Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.
Additional information
network analysis of dataset A and B -
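The unique-variance logic behind the psychometric network analysis above can be sketched compactly: edge weights are partial correlations between pairs of task scores, obtained from the inverse of the covariance (precision) matrix of z-scored scores. This is a minimal, unregularized Python illustration of the idea, not the authors' pipeline (published psychometric networks typically use regularized estimators); all data below are synthetic.

```python
import numpy as np

def partial_correlation_network(scores):
    """Edges of a simple psychometric network: partial correlations
    between pairs of task scores, controlling for all other tasks,
    derived from the inverse covariance (precision) matrix."""
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # z-score tasks
    precision = np.linalg.inv(np.cov(z, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)   # precision -> partial correlation
    np.fill_diagonal(pcorr, 0.0)          # no self-edges
    return pcorr

# toy usage: 281 "participants" x 8 "task scores" of random data
rng = np.random.default_rng(0)
network = partial_correlation_network(rng.normal(size=(281, 8)))
print(network.round(2))
```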
Hintz, F., & Meyer, A. S. (Eds.). (2024). Individual differences in language skills [Special Issue]. Journal of Cognition, 7(1). -
Hintz, F., Voeten, C. C., Dobó, D., Lukics, K. S., & Lukács, Á. (2024). The role of general cognitive skills in integrating visual and linguistic information during sentence comprehension: Individual differences across the lifespan. Scientific Reports, 14: 17797. doi:10.1038/s41598-024-68674-3.
Abstract
Individuals exhibit massive variability in general cognitive skills that affect language processing. This variability is partly developmental. Here, we recruited a large sample of participants (N = 487), ranging from 9 to 90 years of age, and examined the involvement of nonverbal processing speed (assessed using visual and auditory reaction time tasks) and working memory (assessed using forward and backward Digit Span tasks) in a visual world task. Participants saw two objects on the screen and heard a sentence that referred to one of them. In half of the sentences, the target object could be predicted based on verb-selectional restrictions. We observed evidence for anticipatory processing on predictable compared to non-predictable trials. Visual and auditory processing speed had main effects on sentence comprehension and facilitated predictive processing, as evidenced by an interaction. We observed only weak evidence for the involvement of working memory in predictive sentence comprehension. Age had a nonlinear main effect (younger adults responded faster than children and older adults), but it did not differentially modulate predictive and non-predictive processing, nor did it modulate the involvement of processing speed and working memory. Our results contribute to delineating the cognitive skills that are involved in language-vision interactions.
Additional information
supplementary information -
Hintz, F., Shkaravska, O., Dijkhuis, M., Van 't Hoff, V., Huijsmans, M., Van Dongen, R. C., Voeteé, L. A., Trilsbeek, P., McQueen, J. M., & Meyer, A. S. (2024). IDLaS-NL – A platform for running customized studies on individual differences in Dutch language skills via the internet. Behavior Research Methods, 56(3), 2422-2436. doi:10.3758/s13428-023-02156-8.
Abstract
We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions and to determine their order. Moreover, for standardized administration the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV-file output via email. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl. -
Huettig, F., & Hulstijn, J. (2024). The Enhanced Literate Mind Hypothesis. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12731.
Abstract
In the present paper we describe the Enhanced Literate Mind (ELM) hypothesis. As individuals learn to read and write, they are, from then on, exposed to extensive written-language input and become literate. We propose that acquisition and proficient processing of written language (‘literacy’) leads to both increased language knowledge and enhanced language and non-language (perceptual and cognitive) skills. We also suggest that all neurotypical native language users, including illiterate, low literate, and high literate individuals, share a Basic Language Cognition (BLC) in the domain of oral informal language. Finally, we discuss the possibility that the acquisition of ELM leads to some degree of ‘knowledge parallelism’ between BLC and ELM in literate language users, which has implications for empirical research on individual and situational differences in spoken language processing. -
Huettig, F., & Christiansen, M. H. (2024). Can large language models counter the recent decline in literacy levels? An important role for cognitive science. Cognitive Science, 48(8): e13487. doi:10.1111/cogs.13487.
Abstract
Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by challenging and informing current educational practices and policy. Cognitive scientists have the right interdisciplinary skills to study, analyze, evaluate, and change LLMs to facilitate their critical use, to encourage turn-taking that promotes rather than hinders literacy, to support literacy acquisition in diverse and equitable ways, and to scaffold potential future changes in what it means to be literate. We urge cognitive scientists to take up this mantle—the future impact of LLMs on human literacy skills is too important to be left to the large, predominantly US-based tech companies. -
Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2024). Morphosyntactic predictive processing in adult heritage speakers: Effects of cue availability and spoken and written language experience. Language, Cognition and Neuroscience, 39(1), 118-135. doi:10.1080/23273798.2023.2254424.
Abstract
We investigated prediction skills of adult heritage speakers and the role of written and spoken language experience on predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available) while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing. -
Koning, M. E. E., Wyman, N. K., Menks, W. M., Ekerdt, C., Fernández, G., Kidd, E., Lemhöfer, K., McQueen, J. M., & Janzen, G. (2024). The relationship between brain structure and function during novel grammar learning across development. Cerebral Cortex, 34(12): bhae488. doi:10.1093/cercor/bhae488.
Abstract
In this study, we explored the relationship between developmental differences in gray matter structure and grammar learning ability in 159 Dutch-speaking individuals (8 to 25 yr). The data were collected as part of a recent large-scale functional MRI study (Menks WM, Ekerdt C, Lemhöfer K, Kidd E, Fernández G, McQueen JM, Janzen G. Developmental changes in brain activation during novel grammar learning in 8–25-year-olds. Dev Cogn Neurosci. 2024;66:101347. https://doi.org/10.1016/j.dcn.2024.101347) in which participants implicitly learned Icelandic morphosyntactic rules and performed a grammaticality judgment task in the scanner. Behaviorally, Menks et al. (2024) showed that grammaticality judgment task performance increased steadily from 8 to 15.4 yr, after which age had no further effect. We show in the current study that this age-related grammaticality judgment task performance was negatively related to cortical gray matter volume and cortical thickness in many clusters throughout the brain. Hippocampal volume was positively related to age-related grammaticality judgment task performance and L2 (English) vocabulary knowledge. Furthermore, we found that grammaticality judgment task performance, L2 grammar proficiency, and L2 vocabulary knowledge were positively related to gray matter maturation within parietal regions, overlapping with the functional MRI clusters that were reported previously in Menks et al. (2024) and which showed increased brain activation in relation to grammar learning. We propose that this overlap in functional and structural results indicates that brain maturation in parietal regions plays an important role in second language learning.
Additional information
supplements -
Menks, W. M., Ekerdt, C., Lemhöfer, K., Kidd, E., Fernández, G., McQueen, J. M., & Janzen, G. (2024). Developmental changes in brain activation during novel grammar learning in 8-25-year-olds. Developmental Cognitive Neuroscience, 66: 101347. doi:10.1016/j.dcn.2024.101347.
Abstract
While it is well established that grammar learning success varies with age, the cause of this developmental change is largely unknown. This study examined functional MRI activation across a broad developmental sample of 165 Dutch-speaking individuals (8-25 years) as they were implicitly learning a new grammatical system. This approach allowed us to assess the direct effects of age on grammar learning ability while exploring its neural correlates. In contrast to the alleged advantage of child language learners over adults, we found that adults outperformed children. Moreover, our behavioral data showed a sharp discontinuity in the relationship between age and grammar learning performance: there was a strong positive linear correlation between 8 and 15.4 years of age, after which age had no further effect. Neurally, our data indicate two important findings: (i) during grammar learning, adults and children activate similar brain regions, suggesting continuity in the neural networks that support initial grammar learning; and (ii) activation level is age-dependent, with children showing less activation than older participants. We suggest that these age-dependent processes may constrain developmental effects in grammar learning. The present study provides new insights into the neural basis of age-related differences in grammar learning in second language acquisition.
Additional information
supplement -
Motiekaitytė, K., Grosseck, O., Wolf, L., Bosker, H. R., Peeters, D., Perlman, M., Ortega, G., & Raviv, L. (2024). Iconicity and compositionality in emerging vocal communication systems: A Virtual Reality approach. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 387-389). Nijmegen: The Evolution of Language Conferences. -
Papoutsi*, C., Zimianiti*, E., Bosker, H. R., & Frost, R. L. A. (2024). Statistical learning at a virtual cocktail party. Psychonomic Bulletin & Review, 31, 849-861. doi:10.3758/s13423-023-02384-1.
Abstract
* These two authors contributed equally to this study
Statistical learning – the ability to extract distributional regularities from input – is suggested to be key to language acquisition. Yet, evidence for the human capacity for statistical learning comes mainly from studies conducted in carefully controlled settings without auditory distraction. While such conditions permit careful examination of learning, they do not reflect the naturalistic language learning experience, which is replete with auditory distraction – including competing talkers. Here, we examine how statistical language learning proceeds in a virtual cocktail party environment, where the to-be-learned input is presented alongside a competing speech stream with its own distributional regularities. During exposure, participants in the Dual Talker group concurrently heard two novel languages, one produced by a female talker and one by a male talker, with each talker virtually positioned at opposite sides of the listener (left/right) using binaural acoustic manipulations. Selective attention was manipulated by instructing participants to attend to only one of the two talkers. At test, participants were asked to distinguish words from part-words for both the attended and the unattended languages. Results indicated that participants’ accuracy was significantly higher for trials from the attended vs. unattended language. Further, the performance of this Dual Talker group was no different compared to a control group who heard only one language from a single talker (Single Talker group). We thus conclude that statistical learning is modulated by selective attention, being relatively robust against the additional cognitive load provided by competing speech, emphasizing its efficiency in naturalistic language learning situations.
Additional information
supplementary file -
Peirolo, M., Meyer, A. S., & Frances, C. (2024). Investigating the causes of prosodic marking in self-repairs: An automatic process? In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 1080-1084). doi:10.21437/SpeechProsody.2024-218.
Abstract
Natural speech involves repair. These repairs are often highlighted through prosodic marking (Levelt & Cutler, 1983). Prosodic marking usually entails an increase in pitch, loudness, and/or duration that draws attention to the corrected word. While it is established that natural self-repairs typically elicit prosodic marking, the exact cause of this is unclear. This study investigates whether prosodic marking emerges from an automatic correction process or serves a communicative purpose. In the current study, we elicited corrections to test whether all self-corrections elicit prosodic marking. Participants carried out a picture-naming task in which they described two images presented on-screen. To prompt self-correction, the second image was altered in some cases, requiring participants to abandon their initial utterance and correct their description to match the new image. This manipulation was compared to a control condition in which only the orientation of the object changed, eliciting no self-correction while still presenting a visual change. We found that the replacement of the item did not elicit prosodic marking, regardless of the type of change. Theoretical implications and research directions are discussed, in particular theories of prosodic planning. -
Rohrer, P. L., Bujok, R., Van Maastricht, L., & Bosker, H. R. (2024). The timing of beat gestures affects lexical stress perception in Spanish. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 702-706). doi:10.21437/SpeechProsody.2024-142.
Abstract
It has been shown that when speakers produce hand gestures, addressees are attentive towards these gestures, using them to facilitate speech processing. Even relatively simple “beat” gestures are taken into account to help process aspects of speech such as prosodic prominence. In fact, recent evidence suggests that the timing of a beat gesture can influence spoken word recognition: in what has been termed the manual McGurk effect, Dutch participants presented with lexical stress minimal pair continua in Dutch were biased to hear lexical stress on the syllable that coincided with a beat gesture. However, little is known about how this manual McGurk effect would surface in languages other than Dutch, with different acoustic cues to prominence, and variable gestures. Therefore, this study tests the effect in Spanish, where lexical stress is arguably even more important, being a contrastive cue in the regular verb conjugation system. Results from 24 participants corroborate the effect in Spanish: when given the same auditory stimulus, participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture. These findings extend the manual McGurk effect to a different language, emphasizing the impact of gestures' timing on prosody perception and spoken word recognition. -
Rohrer, P. L., Hong, Y., & Bosker, H. R. (2024). Gestures time to vowel onset and change the acoustics of the word in Mandarin. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 866-870). doi:10.21437/SpeechProsody.2024-175.
Abstract
Recent research on multimodal language production has revealed that prominence in speech and gesture go hand-in-hand. Specifically, peaks in gesture (i.e., the apex) seem to closely coordinate with peaks in fundamental frequency (F0). The nature of this relationship may also be bi-directional, as it has also been shown that the production of gesture directly affects speech acoustics. However, most studies on the topic have largely focused on stress-based languages, where fundamental frequency has a prominence-lending function. Less work has been carried out on lexical tone languages such as Mandarin, where F0 is lexically distinctive. In this study, four native Mandarin speakers were asked to produce single monosyllabic CV words, taken from minimal lexical tone triplets (e.g., /pi1/, /pi2/, /pi3/), either with or without a beat gesture. Our analyses of the timing of the gestures showed that the gesture apex most stably occurred near vowel onset, with consonantal duration being the strongest predictor of apex placement. Acoustic analyses revealed that words produced with gesture showed raised F0 contours, greater intensity, and shorter durations. These findings further our understanding of gesture-speech alignment in typologically diverse languages, and add to the discussion about multimodal prominence. -
Roos, N. M., Chauvet, J., & Piai, V. (2024). The Concise Language Paradigm (CLaP), a framework for studying the intersection of comprehension and production: Electrophysiological properties. Brain Structure and Function, 229, 2097-2113. doi:10.1007/s00429-024-02801-8.
Abstract
Studies investigating language commonly isolate one modality or process, focusing on comprehension or production. Here, we present a framework for a paradigm that combines both: the Concise Language Paradigm (CLaP), tapping into comprehension and production within one trial. The trial structure is identical across conditions, presenting a sentence followed by a picture to be named. We tested 21 healthy speakers with EEG to examine three time periods during a trial (sentence, pre-picture interval, picture onset), yielding contrasts of sentence comprehension, contextually and visually guided word retrieval, object recognition, and naming. In the CLaP, sentences are presented auditorily (constrained, unconstrained, reversed), and pictures appear as normal (constrained, unconstrained, bare) or scrambled objects. Imaging results revealed different evoked responses after sentence onset for normal and time-reversed speech. Further, we replicated the context effect of alpha-beta power decreases before picture onset for constrained relative to unconstrained sentences, and could clarify that this effect arises from power decreases following constrained sentences. Brain responses locked to picture-onset differed as a function of sentence context and picture type (normal vs. scrambled), and naming times were fastest for pictures in constrained sentences, followed by scrambled picture naming, and equally fast for bare and unconstrained picture naming. Finally, we also discuss the potential of the CLaP to be adapted to different focuses, using different versions of the linguistic content and tasks, in combination with electrophysiology or other imaging methods. These first results of the CLaP indicate that this paradigm offers a promising framework to investigate the language system. -
Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.
Abstract
Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers. -
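The per-talker cue-weighting analysis above lends itself to a short sketch: for one talker, fit a linear discriminant separating initially from finally stressed tokens on their acoustic cue measurements and read the coefficients as that talker's cue weights. A minimal sketch on synthetic data; the paper's cue set and preprocessing differ in detail.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

def cue_weights(cues, stress_position):
    """Fit an LDA separating initial- from final-stress tokens for a
    single talker; coefficients index how strongly each acoustic cue
    separates the two stress patterns in this talker's productions."""
    X = StandardScaler().fit_transform(cues)      # put cues on one scale
    lda = LinearDiscriminantAnalysis().fit(X, stress_position)
    return lda.coef_.ravel()                      # one weight per cue

# toy usage: 60 tokens x 3 cues (mean F0, intensity, duration)
rng = np.random.default_rng(1)
cues = rng.normal(size=(60, 3))
stress = rng.integers(0, 2, size=60)              # 0 = initial, 1 = final
print(cue_weights(cues, stress))
```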
Slaats, S. (2024). On the interplay between lexical probability and syntactic structure in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Slaats, S., Meyer, A. S., & Martin, A. E. (2024). Lexical surprisal shapes the time course of syntactic structure building. Neurobiology of Language, 5(4), 942-980. doi:10.1162/nol_a_00155.
Abstract
When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data of 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high- and low-surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high-surprisal values were associated with a delayed response to the syntactic feature by 150–190 ms. The delay was not affected by word duration, and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and highlight the important role of time in this process.
Additional information
supplementary data -
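The temporal-response-function analysis at the heart of the study above is, in essence, a regularized regression from time-lagged stimulus features to the neural signal. Below is a minimal single-channel ridge sketch of that idea on synthetic data; the actual study used multivariate MEG recordings and a cumulative model-comparison procedure, and all sizes and names here are illustrative.

```python
import numpy as np

def estimate_trf(feature, neural, n_lags, alpha=1.0):
    """Ridge estimate of a temporal response function: regress the
    neural signal on time-lagged copies of a stimulus feature
    (e.g., an impulse train carrying word surprisal values)."""
    X = np.column_stack([np.roll(feature, lag) for lag in range(n_lags)])
    X[:n_lags, :] = 0.0                        # discard wrap-around samples
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ neural)
    return w                                   # one weight per lag

# toy usage at 100 Hz: surprisal impulses at regular "word onsets"
fs, seconds = 100, 60
rng = np.random.default_rng(2)
feature = np.zeros(fs * seconds)
feature[::40] = rng.gamma(2.0, size=feature[::40].size)  # surprisal values
neural = np.convolve(feature, np.hanning(30), mode="same")
neural += rng.normal(size=feature.size)                  # measurement noise
trf = estimate_trf(feature, neural, n_lags=60)           # lags 0-590 ms
```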
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).
Abstract
The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception. -
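The dependent measure in the study above, spectral center of gravity, is simply the power-weighted mean frequency of the fricative's spectrum. A minimal sketch of the computation; the analysis parameters are illustrative.

```python
import numpy as np
from scipy.signal import welch

def spectral_cog(fricative, fs):
    """Spectral center of gravity: the power-weighted mean frequency
    of the fricative's power spectrum (higher for /s/ than /ʃ/)."""
    freqs, psd = welch(fricative, fs=fs, nperseg=512)
    return np.sum(freqs * psd) / np.sum(psd)

# toy usage: white noise stands in for a fricative sampled at 44.1 kHz
rng = np.random.default_rng(3)
print(spectral_cog(rng.normal(size=44100), fs=44100))
```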
van der Burght, C. L., & Meyer, A. S. (2024). Interindividual variation in weighting prosodic and semantic cues during sentence comprehension – a partial replication of Van der Burght et al. (2021). In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 792-796). doi:10.21437/SpeechProsody.2024-160.
Abstract
Contrastive pitch accents can mark sentence elements occupying parallel roles. In “Mary kissed John, not Peter”, a pitch accent on Mary or John cues the implied syntactic role of Peter. Van der Burght, Friederici, Goucha, and Hartwigsen (2021) showed that listeners can build expectations concerning syntactic and semantic properties of upcoming words, derived from pitch accent information they heard previously. To further explore these expectations, we attempted a partial replication of the original German study in Dutch. In the experimental sentences “Yesterday, the police officer arrested the thief, not the inspector/murderer”, a pitch accent on subject or object cued the subject/object role of the ellipsis clause. Contrasting elements were additionally cued by the thematic role typicality of the nouns. Participants listened to sentences in which the ellipsis clause was omitted and selected the most plausible sentence-final noun (presented visually) via button press. Replicating the original study results, listeners based their sentence-final preference on the pitch accent information available in the sentence. However, as in the original study, individual differences between listeners were found, with some following prosodic information and others relying on a structural bias. The results complement the literature on ellipsis resolution and on interindividual variability in cue weighting. -
van der Burght, C. L., & Meyer, A. S. (2024). Semantic interference across word classes during lexical selection in Dutch. Cognition, 254: 105999. doi:10.1016/j.cognition.2024.105999.
Abstract
Using a novel version of the picture-word interference paradigm, Momma, Buffinton, Slevc, and Phillips (2020, Cognition) showed that word class constrained which words competed with each other for lexical selection. Specifically, in speakers of American English, action verbs (as in she’s singing) competed with semantically related action verbs (as in she’s whistling), but not with semantically related action nouns (as in her whistling). Similarly, action nouns only competed with semantically related action nouns, but not with action verbs. As this pattern has important implications for models of lexical access and sentence generation, we conducted a conceptual replication in Dutch. We found a semantic interference effect but, contrary to the original study, no evidence for a word class constraint. Together, the results of the two studies argue for graded rather than categorical word class constraints on lexical selection. -
He, J., & Zhang, Q. (2024). Direct retrieval of orthographic representations in Chinese handwritten production: Evidence from a dynamic causal modeling study. Journal of Cognitive Neuroscience, 36(9), 1937-1962. doi:10.1162/jocn_a_02176.
Abstract
The present study identified an optimal model representing the relationship between orthography and phonology in Chinese handwritten production using dynamic causal modeling, and further explored how this model was modulated by word frequency and syllable frequency. Each model contained five volumes of interest in the left hemisphere (angular gyrus [AG], inferior frontal gyrus [IFG], middle frontal gyrus [MFG], superior frontal gyrus [SFG], and supramarginal gyrus [SMG]), with the IFG as the driven input area. Results showed the superiority of a model in which both the MFG and the AG connected with the IFG, supporting the orthography autonomy hypothesis. Word frequency modulated the AG → SFG connection (information flow from the orthographic lexicon to the orthographic buffer), and syllable frequency affected the IFG → MFG connection (information transmission from the semantic system to the phonological lexicon). This study thus provides new insights into the connectivity architecture of neural substrates involved in writing. -
Zhou, Y., van der Burght, C. L., & Meyer, A. S. (2024). Investigating the role of semantics and perceptual salience in the memory benefit of prosodic prominence. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 1250-1254). doi:10.21437/SpeechProsody.2024-252.
Abstract
Prosodic prominence can enhance memory for the prominent words. This mnemonic benefit has been linked to listeners’ allocation of attention and deeper processing, which leads to more robust semantic representations. We investigated whether, in addition to the well-established effect at the semantic level, there was a memory benefit for prominent words at the phonological level. To do so, participants (48 native speakers of Dutch) first performed an accent judgement task, where they had to discriminate accented from unaccented words, and accented from unaccented pseudowords. All stimuli were presented in lists. They then performed an old/new recognition task for the stimuli. Accuracy in the accent judgement task was equally high for words and pseudowords. In the recognition task, performance was, as expected, better for words than pseudowords. More importantly, there was an interaction of accent with word type, with a significant advantage for accented compared to unaccented words, but not for pseudowords. The results confirm the memory benefit for accented compared to unaccented words seen in earlier studies, and they are consistent with the view that prominence primarily affects the semantic encoding of words. There was no evidence for an additional memory benefit arising at the phonological level. -
Bosker, H. R., & Ghitza, O. (2018). Entrained theta oscillations guide perception of subsequent speech: Behavioral evidence from rate normalization. Language, Cognition and Neuroscience, 33(8), 955-967. doi:10.1080/23273798.2018.1439179.
Abstract
This psychoacoustic study provides behavioral evidence that neural entrainment in the theta range (3-9 Hz) causally shapes speech perception. Adopting the ‘rate normalization’ paradigm (presenting compressed carrier sentences followed by uncompressed target words), we show that uniform compression of a speech carrier to syllable rates inside the theta range influences perception of subsequent uncompressed targets, but compression outside theta range does not. However, the influence of carriers – compressed outside theta range – on target perception is salvaged when carriers are ‘repackaged’ to have a packet rate inside theta. This suggests that the brain can only successfully entrain to syllable/packet rates within theta range, with a causal influence on the perception of subsequent speech, in line with recent neuroimaging data. Thus, this study points to a central role for sustained theta entrainment in rate normalization and contributes to our understanding of the functional role of brain oscillations in speech perception. -
Bosker, H. R. (2018). Putting Laurel and Yanny in context. The Journal of the Acoustical Society of America, 144(6), EL503-EL508. doi:10.1121/1.5070144.
Abstract
Recently, the world’s attention was caught by an audio clip that was perceived as “Laurel” or “Yanny”. Opinions were sharply split: many could not believe others heard something different from their perception. However, a crowd-sourced experiment with >500 participants shows that it is possible to make people hear Laurel, where they previously heard Yanny, by manipulating preceding acoustic context. This study is not only the first to reveal within-listener variation in Laurel/Yanny percepts, but also the first to demonstrate contrast effects for global spectral information in larger frequency regions. Thus, it highlights the intricacies of human perception underlying these social media phenomena. -
Bosker, H. R., & Cooke, M. (2018). Talkers produce more pronounced amplitude modulations when speaking in noise. The Journal of the Acoustical Society of America, 143(2), EL121-EL126. doi:10.1121/1.5024404.
Abstract
Speakers adjust their voice when talking in noise (known as Lombard speech), facilitating speech comprehension. Recent neurobiological models of speech perception emphasize the role of amplitude modulations in speech-in-noise comprehension, helping neural oscillators to ‘track’ the attended speech. This study tested whether talkers produce more pronounced amplitude modulations in noise. Across four different corpora, modulation spectra showed greater power in amplitude modulations below 4 Hz in Lombard speech compared to matching plain speech. This suggests that noise-induced speech contains more pronounced amplitude modulations, potentially helping the listening brain to entrain to the attended talker, aiding comprehension. -
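The corpus measure above can be sketched in a few lines: extract the amplitude envelope with a Hilbert transform, take its power spectrum, and sum the power below 4 Hz. The toy below uses synthetic amplitude-modulated tones rather than the paper's corpus pipeline; a deeper low-rate modulation (the stand-in for Lombard speech) yields more low-rate modulation power.

```python
import numpy as np
from scipy.signal import hilbert

def low_rate_modulation_power(audio, fs, cutoff=4.0):
    """Summed amplitude-modulation power below `cutoff` Hz, from the
    power spectrum of the Hilbert amplitude envelope."""
    env = np.abs(hilbert(audio))               # amplitude envelope
    env = env - env.mean()                     # remove the DC component
    spec = np.abs(np.fft.rfft(env)) ** 2 / env.size
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    return spec[(freqs > 0) & (freqs < cutoff)].sum()

# toy usage: shallow vs. deep 3-Hz modulation of a 200-Hz carrier
fs = 16000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 200 * t)
plain = carrier * (1 + 0.2 * np.sin(2 * np.pi * 3 * t))
lombard = carrier * (1 + 0.8 * np.sin(2 * np.pi * 3 * t))
print(low_rate_modulation_power(plain, fs), low_rate_modulation_power(lombard, fs))
```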
Brand, J., Monaghan, P., & Walker, P. (2018). Changing signs: Testing how sound-symbolism supports early word learning. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1398-1403). Austin, TX: Cognitive Science Society.
Abstract
Learning a language involves learning how to map specific forms onto their associated meanings. Such mappings can utilise arbitrariness and non-arbitrariness, yet how these two systems operate at different stages of vocabulary development is still not fully understood. The Sound-Symbolism Bootstrapping Hypothesis (SSBH) proposes that sound-symbolism is essential for word learning to commence, but empirical evidence of exactly how sound-symbolism influences language learning is still sparse. It may be the case that sound-symbolism supports acquisition of categories of meaning, or that it enables acquisition of individualized word meanings. In two experiments where participants learned form-meaning mappings from either sound-symbolic or arbitrary languages, we demonstrate the changing roles of sound-symbolism and arbitrariness for different vocabulary sizes, showing that sound-symbolism provides an advantage for learning of broad categories, which may then transfer to support learning individual words, whereas an arbitrary language impedes acquisition of categories of sound to meaning. -
Brehm, L., & Goldrick, M. (2018). Connectionist principles in theories of speech production. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 372-397). Oxford: Oxford University Press.
Abstract
This chapter focuses on connectionist modeling in language production, highlighting how core principles of connectionism provide coverage for empirical observations about representation and selection at the phonological, lexical, and sentence levels. The first section focuses on the connectionist principles of localist representations and spreading activation. It discusses how these two principles have motivated classic models of speech production and shows how they cover results of the picture-word interference paradigm, the mixed error effect, and aphasic naming errors. The second section focuses on how newer connectionist models incorporate the principles of learning and distributed representations through discussion of syntactic priming, cumulative semantic interference, sequencing errors, phonological blends, and code-switching. -
Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.
Abstract
Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.
Additional information
psyp13064-sup-0001-s01.docx -
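The core of the routine described above is easy to sketch with standard tooling: smooth a resting-state power spectrum with a Savitzky-Golay filter, then take the alpha-band peak (PAF) and the power-weighted mean alpha frequency (CoG). The window length, polynomial order, and alpha band below are illustrative placeholders, not the calibrated settings of the published toolbox.

```python
import numpy as np
from scipy.signal import welch, savgol_filter

def estimate_iaf(eeg, fs, band=(7.0, 13.0)):
    """Peak alpha frequency (PAF) and alpha center of gravity (CoG)
    from a Savitzky-Golay-smoothed resting-state power spectrum."""
    freqs, psd = welch(eeg, fs=fs, nperseg=8 * int(fs))  # ~0.125 Hz bins
    smooth = savgol_filter(psd, window_length=11, polyorder=5)
    alpha = (freqs >= band[0]) & (freqs <= band[1])
    paf = freqs[alpha][np.argmax(smooth[alpha])]         # spectral peak
    cog = np.sum(freqs[alpha] * smooth[alpha]) / np.sum(smooth[alpha])
    return paf, cog

# toy usage: two minutes of noise plus a 10-Hz alpha rhythm at 250 Hz
fs = 250
rng = np.random.default_rng(4)
t = np.arange(120 * fs) / fs
eeg = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)
print(estimate_iaf(eeg, fs))
```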
Doumas, L. A. A., & Martin, A. E. (2018). Learning structured representations from experience. Psychology of Learning and Motivation, 69, 165-203. doi:10.1016/bs.plm.2018.10.002.
Abstract
How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and the relations between them. An account of how we might acquire such representations has central importance for theories of human cognition. We describe how a system can learn structured relational representations from initially unstructured inputs using comparison, sensitivity to time, and a modified Hebbian learning algorithm. We summarize how the model DORA (Discovery of Relations by Analogy) instantiates this approach, which we call predicate learning, as well as how the model captures several phenomena from cognitive development, relational reasoning, and language processing in the human brain. Predicate learning offers a link between models based on formal languages and models which learn from experience and provides an existence proof for how structured representations might be learned in the first place. -
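For reference, the plain Hebbian rule underlying the "modified Hebbian learning algorithm" mentioned above strengthens connections between co-active units; the sketch below shows only this unmodified baseline, not DORA's actual variant.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """Plain Hebbian learning: weight change proportional to the
    coactivation of pre- and postsynaptic units (baseline rule;
    DORA itself uses a modified variant)."""
    return weights + lr * np.outer(post, pre)

# toy usage: 4 presynaptic units feeding 3 postsynaptic units
rng = np.random.default_rng(5)
W = hebbian_update(np.zeros((3, 4)), pre=rng.random(4), post=rng.random(3))
```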
Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.
Abstract
Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic
Additional information
http://www.bcbl.eu/databases/multipic -
Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.
Abstract
In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.
Additional information
1-s2.0-S0001691817305589-mmc1.pdf -
Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.
Abstract
Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing will be discussed. -
Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.
Abstract
Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were as good in both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts. -
Huettig, F., Kolinsky, R., & Lachmann, T. (2018). The culturally co-opted brain: How literacy affects the human mind. Language, Cognition and Neuroscience, 33(3), 275-277. doi:10.1080/23273798.2018.1425803.
Abstract
Introduction to the special issue 'The Effects of Literacy on Cognition and Brain Functioning' -
Huettig, F., Kolinsky, R., & Lachmann, T. (Eds.). (2018). The effects of literacy on cognition and brain functioning [Special Issue]. Language, Cognition and Neuroscience, 33(3). -
Huettig, F., Lachmann, T., Reis, A., & Petersson, K. M. (2018). Distinguishing cause from effect - Many deficits associated with developmental dyslexia may be a consequence of reduced and suboptimal reading experience. Language, Cognition and Neuroscience, 33(3), 333-350. doi:10.1080/23273798.2017.1348528.
Abstract
The cause of developmental dyslexia is still unknown despite decades of intense research. Many causal explanations have been proposed, based on the range of impairments displayed by affected individuals. Here we draw attention to the fact that many of these impairments are also shown by illiterate individuals who have not received any or very little reading instruction. We suggest that this fact may not be coincidental and that the performance differences of both illiterates and individuals with dyslexia compared to literate controls are, to a substantial extent, secondary consequences of either reduced or suboptimal reading experience or a combination of both. The search for the primary causes of reading impairments will make progress if the consequences of quantitative and qualitative differences in reading experience are better taken into account and not mistaken for the causes of reading disorders. We close by providing four recommendations for future research. -
Jackson, C. N., Mormer, E., & Brehm, L. (2018). The production of subject-verb agreement among Swedish and Chinese second language speakers of English. Studies in Second Language Acquisition, 40(4), 907-921. doi:10.1017/S0272263118000025.
Abstract
This study uses a sentence completion task with Swedish and Chinese L2 English speakers to investigate how L1 morphosyntax and L2 proficiency influence L2 English subject-verb agreement production. Chinese has limited nominal and verbal number morphology, while Swedish has robust noun phrase (NP) morphology but does not number-mark verbs. Results showed that like L1 English speakers, both L2 groups used grammatical and conceptual number to produce subject-verb agreement. However, only L1 Chinese speakers—and less-proficient speakers in both L2 groups—were similarly influenced by grammatical and conceptual number when producing the subject NP. These findings demonstrate how L2 proficiency, perhaps combined with cross-linguistic differences, influence L2 production and underscore that encoding of noun and verb number are not independent. -
Kochari, A. R., & Ostarek, M. (2018). Introducing a replication-first rule for PhD projects (commentary on Zwaan et al., ‘Making replication mainstream’). Behavioral and Brain Sciences, 41: e138. doi:10.1017/S0140525X18000730.
Abstract
Zwaan et al. mention that young researchers should conduct replications as a small part of their portfolio. We extend this proposal and suggest that conducting and reporting replications should become an integral part of PhD projects and be taken into account in their assessment. We discuss how this would help not only scientific advancement, but also PhD candidates’ careers. -
Konopka, A., Meyer, A. S., & Forest, T. A. (2018). Planning to speak in L1 and L2. Cognitive Psychology, 102, 72-104. doi:10.1016/j.cogpsych.2017.12.003.
Abstract
The leading theories of sentence planning – Hierarchical Incrementality and Linear Incrementality – differ in their assumptions about the coordination of processes that map preverbal information onto language. Previous studies showed that, in native (L1) speakers, this coordination can vary with the ease of executing the message-level and sentence-level processes necessary to plan and produce an utterance. We report the first series of experiments to systematically examine how linguistic experience influences sentence planning in native (L1) speakers (i.e., speakers with life-long experience using the target language) and non-native (L2) speakers (i.e., speakers with less experience using the target language). In all experiments, speakers spontaneously generated one-sentence descriptions of simple events in Dutch (L1) and English (L2). Analyses of eye-movements across early and late time windows (pre- and post-400 ms) compared the extent of early message-level encoding and the onset of linguistic encoding. In Experiment 1, speakers were more likely to engage in extensive message-level encoding and to delay sentence-level encoding when using their L2. Experiments 2–4 selectively facilitated encoding of the preverbal message, encoding of the agent character (i.e., the first content word in active sentences), and encoding of the sentence verb (i.e., the second content word in active sentences) respectively. Experiment 2 showed that there is no delay in the onset of L2 linguistic encoding when speakers are familiar with the events. Experiments 3 and 4 showed that the delay in the onset of L2 linguistic encoding is not due to speakers delaying encoding of the agent, but due to a preference to encode information needed to select a suitable verb early in the formulation process. Overall, speakers prefer to temporally separate message-level from sentence-level encoding and to prioritize encoding of relational information when planning L2 sentences, consistent with Hierarchical Incrementality. -
Kösem, A., Bosker, H. R., Takashima, A., Meyer, A. S., Jensen, O., & Hagoort, P. (2018). Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875. doi:10.1016/j.cub.2018.07.023.
Abstract
Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word’s vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception. -
Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., Baguley, T., Becker, R. B., Benning, S. D., Bradford, D. E., Buchanan, E. M., Caldwell, A. R., Van Calster, B., Carlsson, R., Chen, S.-C., Chung, B., Colling, L. J., Collins, G. S., Crook, Z., Cross, E. S., Daniels, S., Danielsson, H., DeBruine, L., Dunleavy, D. J., Earp, B. D., Feist, M. I., Ferrelle, J. D., Field, J. G., Fox, N. W., Friesen, A., Gomes, C., Gonzalez-Marquez, M., Grange, J. A., Grieve, A. P., Guggenberger, R., Grist, J., Van Harmelen, A.-L., Hasselman, F., Hochard, K. D., Hoffarth, M. R., Holmes, N. P., Ingre, M., Isager, P. M., Isotalus, H. K., Johansson, C., Juszczyk, K., Kenny, D. A., Khalil, A. A., Konat, B., Lao, J., Larsen, E. G., Lodder, G. M. A., Lukavský, J., Madan, C. R., Manheim, D., Martin, S. R., Martin, A. E., Mayo, D. G., McCarthy, R. J., McConway, K., McFarland, C., Nio, A. Q. X., Nilsonne, G., De Oliveira, C. L., De Xivry, J.-J.-O., Parsons, S., Pfuhl, G., Quinn, K. A., Sakon, J. J., Saribay, S. A., Schneider, I. K., Selvaraju, M., Sjoerds, Z., Smith, S. G., Smits, T., Spies, J. R., Sreekumar, V., Steltenpohl, C. N., Stenhouse, N., Świątkowski, W., Vadillo, M. A., Van Assen, M. A. L. M., Williams, M. N., Williams, S. E., Williams, D. R., Yarkoni, T., Ziano, I., & Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2, 168-171. doi:10.1038/s41562-018-0311-x.
Abstract
In response to recommendations to redefine statistical significance to P ≤ 0.005, we propose that researchers should transparently report and justify all choices they make when designing a study, including the alpha level. -
Lev-Ari, S. (2018). Social network size can influence linguistic malleability and the propagation of linguistic change. Cognition, 176, 31-39. doi:10.1016/j.cognition.2018.03.003.
Abstract
We learn language from our social environment, but the more sources we have, the less informative each source is, and therefore, the less weight we ascribe its input. According to this principle, people with larger social networks should give less weight to new incoming information, and should therefore be less susceptible to the influence of new speakers. This paper tests this prediction, and shows that speakers with smaller social networks indeed have more malleable linguistic representations. In particular, they are more likely to adjust their lexical boundary following exposure to a new speaker. Experiment 2 uses computational simulations to test whether this greater malleability could lead people with smaller social networks to be important for the propagation of linguistic change despite the fact that they interact with fewer people. The results indicate that when innovators were connected with people with smaller rather than larger social networks, the population exhibited greater and faster diffusion. Together these experiments show that the properties of people’s social networks can influence individuals’ learning and use as well as linguistic phenomena at the community level. -
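The weighting principle stated at the start of the Lev-Ari (2018) abstract above (the more sources we have, the less weight each source receives) can be made concrete with a toy calculation. The sketch below is a hypothetical illustration, not the simulation code used in the paper; the update rule w = 1/(network size + 1) is an assumption chosen purely for exposition.

def updated_boundary(own_boundary, new_speaker_boundary, network_size):
    # Toy update rule: input from one new speaker is weighted by 1/(N + 1),
    # so members of larger networks adjust their lexical boundary less.
    w = 1.0 / (network_size + 1)
    return (1 - w) * own_boundary + w * new_speaker_boundary

print(updated_boundary(0.50, 0.80, network_size=5))   # small network: 0.55
print(updated_boundary(0.50, 0.80, network_size=50))  # large network: ~0.506

On this rule, the same exposure moves a small-network listener's boundary an order of magnitude more than a large-network listener's, which is the malleability asymmetry the paper's simulations exploit.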
Lev-Ari, S. (2018). The influence of social network size on speech perception. Quarterly Journal of Experimental Psychology, 71(10), 2249-2260. doi:10.1177/1747021817739865.
Abstract
Infants and adults learn new phonological varieties better when exposed to multiple rather than a single speaker. This article tests whether having a larger social network similarly facilitates phonological performance. Experiment 1 shows that people with larger social networks are better at vowel perception in noise, indicating that the benefit of laboratory exposure to multiple speakers extends to real life experience and to adults tested in their native language. Furthermore, the experiment shows that this association is not due to differences in amount of input or to cognitive differences between people with different social network sizes. Follow-up computational simulations reveal that the benefit of larger social networks is mostly due to increased input variability. Additionally, the simulations show that the boost that larger social networks provide is independent of the amount of input received but is larger if the population is more heterogeneous. Finally, a comparison of “adult” and “child” simulations reconciles previous conflicting findings by suggesting that input variability along the relevant dimension might be less useful at the earliest stages of learning. Together, this article shows when and how the size of our social network influences our speech perception. It thus shows how aspects of our lifestyle can influence our linguistic performance.
Additional information
QJE-STD_17-073.R4-Table_A1.docx -
Mainz, N. (2018). Vocabulary knowledge and learning: Individual differences in adult native speakers. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Mani, N., Mishra, R. K., & Huettig, F. (Eds.). (2018). The interactive mind: Language, vision and attention. Chennai: Macmillan Publishers India. -
Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India. -
Martin, A. E. (2018). Cue integration during sentence comprehension: Electrophysiological evidence from ellipsis. PLoS One, 13(11): e0206616. doi:10.1371/journal.pone.0206616.
Abstract
Language processing requires us to integrate incoming linguistic representations with representations of past input, often across intervening words and phrases. This computational situation has been argued to require retrieval of the appropriate representations from memory via a set of features or representations serving as retrieval cues. However, even within a cue-based retrieval account of language comprehension, both the structure of retrieval cues and the particular computation that underlies direct-access retrieval are still underspecified. Evidence from two event-related brain potential (ERP) experiments that show cue-based interference from different types of linguistic representations during ellipsis comprehension is consistent with an architecture wherein different cue types are integrated, and where the interaction of cue with the recent contents of memory determines processing outcome, including expression of the interference effect in ERP componentry. I conclude that retrieval likely includes a computation where cues are integrated with the contents of memory via a linear weighting scheme, and I propose vector addition as a candidate formalization of this computation. I attempt to account for these effects and other related phenomena within a broader cue-based framework of language processing. -
Martin, A. E., & McElree, B. (2018). Retrieval cues and syntactic ambiguity resolution: Speed-accuracy tradeoff evidence. Language, Cognition and Neuroscience, 33(6), 769-783. doi:10.1080/23273798.2018.1427877.
Abstract
Language comprehension involves coping with ambiguity and recovering from misanalysis. Syntactic ambiguity resolution is associated with increased reading times, a classic finding that has shaped theories of sentence processing. However, reaction times conflate the time it takes a process to complete with the quality of the behavior-related information available to the system. We therefore used the speed-accuracy tradeoff procedure (SAT) to derive orthogonal estimates of processing time and interpretation accuracy, and tested whether stronger retrieval cues (via semantic relatedness: neighed->horse vs. fell->horse) aid interpretation during recovery. On average, ambiguous sentences took 250 ms longer (SAT rate) to interpret than unambiguous controls, demonstrating veridical differences in processing time. Retrieval cues more strongly related to the true subject always increased accuracy, regardless of ambiguity. These findings are consistent with a language processing architecture where cue-driven operations give rise to interpretation, and wherein diagnostic cues aid retrieval, regardless of parsing difficulty or structural uncertainty. -
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Listening to yourself is special: Evidence from global speech rate tracking. PLoS One, 13(9): e0203571. doi:10.1371/journal.pone.0203571.
Abstract
Listeners are known to use adjacent contextual speech rate in processing temporally ambiguous speech sounds. For instance, an ambiguous vowel between short /A/ and long /a:/ in Dutch sounds relatively long (i.e., as /a:/) embedded in a fast precursor sentence, but short in a slow sentence. Besides the local speech rate, listeners also track talker-specific global speech rates. However, it is yet unclear whether other talkers' global rates are encoded with reference to a listener's self-produced rate. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly. The groups were compared on their perception of ambiguous /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech and again evaluated target vowels in neutral rate speech. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 but with a new participant sample that was unfamiliar with the participants from Experiment 2. This experiment revealed fewer /a:/ responses in neutral speech in the group also listening to a fast rate, suggesting that neutral speech sounds slow in the presence of a fast talker and vice versa. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception in dialogue settings. -
Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.
Abstract
As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.
Additional information
Data_Sheet_1.pdf -
Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India. -
Monster, I., & Lev-Ari, S. (2018). The effect of social network size on hashtag adoption on Twitter. Cognitive Science, 42(8), 3149-3158. doi:10.1111/cogs.12675.
Abstract
Propagation of novel linguistic terms is an important aspect of language use and language change. Here, we test how social network size influences people’s likelihood of adopting novel labels by examining hashtag use on Twitter. Specifically, we test whether following fewer Twitter users leads to more varied and malleable hashtag use on Twitter, because each followed user is ascribed greater weight and thus exerts greater influence on the following user. Focusing on Dutch users tweeting about the terrorist attack in Brussels in 2016, we show that people who follow fewer other users use a larger number of unique hashtags to refer to the event, reflecting greater malleability and variability in use. These results have implications for theories of language learning, language use, and language change. -
Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.
Abstract
Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.
Additional information
Data sets -
Ostarek, M. (2018). Envisioning language: An exploration of perceptual processes in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Ostarek, M., Ishag, I., Joosen, D., & Huettig, F. (2018). Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1658-1670. doi:10.1037/xlm0000536.
Abstract
Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible vs. incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent vs. incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time results for related targets replicated the location-specific priming effect and showed a trend towards interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match vs. mismatch with immediately following targets in terms of semantics and vertical location influences concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth with up words biasing changes in y-coordinates over time upwards relative to down words (and vice versa). Similar to the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, at around halfway into the saccade the effect kept increasing in the semantically related condition, and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets. -
Popov, V., Ostarek, M., & Tenison, C. (2018). Practices and pitfalls in inferring neural representations. NeuroImage, 174, 340-351. doi:10.1016/j.neuroimage.2018.03.041.
Abstract
A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain. Stimulus-feature-based encoding models are becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid because successful prediction can occur even if the two representational spaces use different, but correlated, representational schemes. We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations. Detailed analysis of the encoding models' predictions showed systematic deviations from ground-truth, indicating that high prediction accuracy is insufficient for making representational inferences. This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations, including model comparison, absolute model performance, visualization techniques and attentional modulation. -
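The central caution in the Popov, Ostarek, and Tenison (2018) abstract above, that good out-of-sample prediction does not license representational inferences, is easy to reproduce in a few lines. The sketch below is a minimal simulation of our own under assumed parameters, not the authors' code: the "model" features are a rotated, noisy mixture of the "true" features, i.e., a different but correlated representational scheme, yet a linear encoding model still predicts the synthetic neural data well.

import numpy as np

rng = np.random.default_rng(0)
n_stim, n_feat, n_vox = 200, 10, 50

# "True" stimulus features that the synthetic brain encodes linearly
true_feats = rng.normal(size=(n_stim, n_feat))
W = rng.normal(size=(n_feat, n_vox))
neural = true_feats @ W + 0.1 * rng.normal(size=(n_stim, n_vox))

# Researcher's model features: a different scheme that merely correlates
# with the true one (a random linear mixture plus noise)
mixing = rng.normal(size=(n_feat, n_feat))
model_feats = true_feats @ mixing + 0.3 * rng.normal(size=(n_stim, n_feat))

# Fit a stimulus-feature encoding model on half the stimuli...
train, test = slice(0, 100), slice(100, 200)
B, *_ = np.linalg.lstsq(model_feats[train], neural[train], rcond=None)
pred = model_feats[test] @ B

# ...and evaluate out-of-sample prediction accuracy per voxel
r = [np.corrcoef(pred[:, v], neural[test, v])[0, 1] for v in range(n_vox)]
print(f"mean out-of-sample prediction r = {np.mean(r):.2f}")  # typically high

High prediction accuracy here says nothing about whether the brain uses the model's feature dimensions, which is exactly the fallacy the paper warns against.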
Raviv, L., & Arnon, I. (2018). Systematicity, but not compositionality: Examining the emergence of linguistic structure in children and adults using iterated learning. Cognition, 181, 160-173. doi:10.1016/j.cognition.2018.08.011.
Abstract
Recent work suggests that cultural transmission can lead to the emergence of linguistic structure as speakers’ weak individual biases become amplified through iterated learning. However, to date no published study has demonstrated a similar emergence of linguistic structure in children. The lack of evidence from child learners constitutes a problematic gap in the literature: if such learning biases impact the emergence of linguistic structure, they should also be found in children, who are the primary learners in real-life language transmission. However, children may differ from adults in their biases given age-related differences in general cognitive skills. Moreover, adults’ performance on iterated learning tasks may reflect existing (and explicit) linguistic biases, partially undermining the generality of the results. Examining children’s performance can also help evaluate contrasting predictions about their role in emerging languages: do children play a larger or smaller role than adults in the creation of structure? Here, we report a series of four iterated artificial language learning studies (based on Kirby, Cornish & Smith, 2008) with both children and adults, using a novel child-friendly paradigm. Our results show that linguistic structure does not emerge more readily in children compared to adults, and that adults are overall better in both language learning and in creating linguistic structure. When languages could become underspecified (by allowing homonyms), children and adults were similar in developing consistent mappings between meanings and signals in the form of structured ambiguities. However, when homonymy was not allowed, only adults created compositional structure. This study is a first step in using iterated language learning paradigms to explore child-adult differences. It provides the first demonstration that cultural transmission has a different effect on the languages produced by children and adults: While children were able to develop systematicity, their languages did not show compositionality. We focus on the relation between learning and structure creation as a possible explanation for our findings and discuss implications for children’s role in the emergence of linguistic structure. -
Raviv, L., & Arnon, I. (2018). The developmental trajectory of children’s auditory and visual statistical learning abilities: Modality-based differences in the effect of age. Developmental Science, 21(4): e12593. doi:10.1111/desc.12593.
Abstract
Infants, children and adults are capable of extracting recurring patterns from their environment through statistical learning (SL), an implicit learning mechanism that is considered to have an important role in language acquisition. Research over the past 20 years has shown that SL is present from very early infancy and found in a variety of tasks and across modalities (e.g., auditory, visual), raising questions on the domain generality of SL. However, while SL is well established for infants and adults, only little is known about its developmental trajectory during childhood, leaving two important questions unanswered: (1) Is SL an early-maturing capacity that is fully developed in infancy, or does it improve with age like other cognitive capacities (e.g., memory)? and (2) Will SL have similar developmental trajectories across modalities? Only few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, no study to date has examined auditory SL across childhood, nor compared it to visual SL to see if there are modality-based differences in the developmental trajectory of SL abilities. We addressed these issues by conducting a large-scale study of children's performance on matching auditory and visual SL tasks across a wide age range (5–12y). Results show modality-based differences in the development of SL abilities: while children's learning in the visual domain improved with age, learning in the auditory domain did not change in the tested age range. We examine these findings in light of previous studies and discuss their implications for modality-based differences in SL and for the role of auditory SL in language acquisition. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=3kg35hoF0pw.
Additional information
Video abstract of the article -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 402-404). Toruń, Poland: NCU Press. doi:10.12775/3991-1.096. -
Schillingmann, L., Ernst, J., Keite, V., Wrede, B., Meyer, A. S., & Belke, E. (2018). AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes. Behavior Research Methods, 50(2), 466-489. doi:10.3758/s13428-017-1002-7.
Abstract
In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that establishes preliminarily the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool’s performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool still is highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research. -
Shao, Z., & Meyer, A. S. (2018). Word priming and interference paradigms. In A. M. B. De Groot & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 111-129). Hoboken: Wiley. -
Tromp, J. (2018). Indirect request comprehension in different contexts. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of Virtual Reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862-869. doi:10.3758/s13428-017-0911-9.
Abstract
When we comprehend language, we often do this in rich settings in which we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and non-linguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and Virtual Reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant, while wearing EEG equipment. In the restaurant participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g. a plate with salmon). The restaurant guest would then produce a sentence (e.g. “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) with the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings. -
Van Bergen, G., & Bosker, H. R. (2018). Linguistic expectation management in online discourse processing: An investigation of Dutch inderdaad 'indeed' and eigenlijk 'actually'. Journal of Memory and Language, 103, 191-209. doi:10.1016/j.jml.2018.08.004.
Abstract
Interpersonal discourse particles (DPs), such as Dutch inderdaad (≈‘indeed’) and eigenlijk (≈‘actually’) are highly frequent in everyday conversational interaction. Despite extensive theoretical descriptions of their polyfunctionality, little is known about how they are used by language comprehenders. In two visual world eye-tracking experiments involving an online dialogue completion task, we asked to what extent inderdaad, confirming an inferred expectation, and eigenlijk, contrasting with an inferred expectation, influence real-time understanding of dialogues. Answers in the dialogues contained a DP or a control adverb, and a critical discourse referent was replaced by a beep; participants chose the most likely dialogue completion by clicking on one of four referents in a display. Results show that listeners make rapid and fine-grained situation-specific inferences about the use of DPs, modulating their expectations about how the dialogue will unfold. Findings further specify and constrain theories about the conversation-managing function and polyfunctionality of DPs. -
Vromans, R. D., & Jongman, S. R. (2018). The interplay between selective and nonselective inhibition during single word production. PLoS One, 13(5): e0197313. doi:10.1371/journal.pone.0197313.
Abstract
The present study investigated the interplay between selective inhibition (the ability to suppress specific competing responses) and nonselective inhibition (the ability to suppress any inappropriate response) during single word production. To this end, we combined two well-established research paradigms: the picture-word interference task and the stop-signal task. Selective inhibition was assessed by instructing participants to name target pictures (e.g., dog) in the presence of semantically related (e.g., cat) or unrelated (e.g., window) distractor words. Nonselective inhibition was tested by occasionally presenting a visual stop-signal, indicating that participants should withhold their verbal response. The stop-signal was presented early (250 ms) aimed at interrupting the lexical selection stage, and late (325 ms) to influence the word-encoding stage of the speech production process. We found longer naming latencies for pictures with semantically related distractors than with unrelated distractors (semantic interference effect). The results further showed that, at both delays, stopping latencies (i.e., stop-signal RTs) were prolonged for naming pictures with semantically related distractors compared to pictures with unrelated distractors. Taken together, our findings suggest that selective and nonselective inhibition, at least partly, share a common inhibitory mechanism during different stages of the speech production process.
Additional information
Data available (link to Figshare) -
Wang, M., Shao, Z., Chen, Y., & Schiller, N. O. (2018). Neural correlates of spoken word production in semantic and phonological blocked cyclic naming. Language, Cognition and Neuroscience, 33(5), 575-586. doi:10.1080/23273798.2017.1395467.
Abstract
The blocked cyclic naming paradigm has been increasingly employed to investigate the mechanisms underlying spoken word production. Semantic homogeneity typically elicits longer naming latencies than heterogeneity; however, it is debated whether competitive lexical selection or incremental learning underlies this effect. The current study manipulated both semantic and phonological homogeneity and used behavioural and electrophysiological measurements to provide evidence that can distinguish between the two accounts. Results show that naming latencies are longer in semantically homogeneous blocks, but shorter in phonologically homogeneous blocks, relative to heterogeneity. The semantic factor significantly modulates electrophysiological waveforms from 200 ms and the phonological factor from 350 ms after picture presentation. A positive component was demonstrated in both manipulations, possibly reflecting a task-related top-down bias in performing blocked cyclic naming. These results provide novel insights into the neural correlates of blocked cyclic naming and further contribute to the understanding of spoken word production.