  • Barthel, M. (2020). Speech planning in dialogue: Psycholinguistic studies of the timing of turn taking. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bosker, H. R., & Cooke, M. (2020). Enhanced amplitude modulations contribute to the Lombard intelligibility benefit: Evidence from the Nijmegen Corpus of Lombard Speech. The Journal of the Acoustical Society of America, 147: 721. doi:10.1121/10.0000646.

    Abstract

    Speakers adjust their voice when talking in noise, which is known as Lombard speech. These acoustic adjustments facilitate speech comprehension in noise relative to plain speech (i.e., speech produced in quiet). However, exactly which characteristics of Lombard speech drive this intelligibility benefit in noise remains unclear. This study assessed the contribution of enhanced amplitude modulations to the Lombard speech intelligibility benefit by demonstrating that (1) native speakers of Dutch in the Nijmegen Corpus of Lombard Speech (NiCLS) produce more pronounced amplitude modulations in noise vs. in quiet; (2) more enhanced amplitude modulations correlate positively with intelligibility in a speech-in-noise perception experiment; (3) transplanting the amplitude modulations from Lombard speech onto plain speech leads to an intelligibility improvement, suggesting that enhanced amplitude modulations in Lombard speech contribute towards intelligibility in noise. Results are discussed in light of recent neurobiological models of speech perception with reference to neural oscillators phase-locking to the amplitude modulations in speech, guiding the processing of speech.
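
    A minimal sketch of the envelope-transplantation idea in the abstract above, on simulated signals rather than NiCLS recordings: extract the slow amplitude envelopes of a 'plain' and a 'Lombard' signal (Hilbert transform plus low-pass filter, with settings chosen here purely for illustration), then divide out the plain envelope and impose the Lombard one.

        import numpy as np
        from scipy.signal import butter, hilbert, sosfiltfilt

        fs = 16000                                   # toy sampling rate (Hz)
        t = np.arange(0, 1, 1 / fs)
        rng = np.random.default_rng(0)

        # Noise carriers with weak ('plain') vs. pronounced ('Lombard')
        # 4 Hz amplitude modulation, standing in for real recordings.
        plain = rng.normal(size=t.size) * (1 + 0.3 * np.sin(2 * np.pi * 4 * t))
        lombard = rng.normal(size=t.size) * (1 + 0.9 * np.sin(2 * np.pi * 4 * t))

        def envelope(x, fs, cutoff=32):
            """Slow amplitude envelope: rectified analytic signal, low-passed."""
            sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
            return sosfiltfilt(sos, np.abs(hilbert(x)))

        # Transplant: strip the plain signal's own envelope, impose the Lombard one.
        hybrid = plain / np.maximum(envelope(plain, fs), 1e-6) * envelope(lombard, fs)
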
  • Bosker, H. R., Peeters, D., & Holler, J. (2020). How visual cues to speech rate influence speech perception. Quarterly Journal of Experimental Psychology, 73(10), 1523-1536. doi:10.1177/1747021820914564.

    Abstract

    Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two ‘Go Fish’-like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of a talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants’ target categorization responses. These findings contribute to a better understanding of how what we see influences what we hear.
  • Bosker, H. R., Sjerps, M. J., & Reinisch, E. (2020). Temporal contrast effects in human speech perception are immune to selective attention. Scientific Reports, 10: 5607. doi:10.1038/s41598-020-62613-8.

    Abstract

    Two fundamental properties of perception are selective attention and perceptual contrast, but how these two processes interact remains unknown. Does an attended stimulus history exert a larger contrastive influence on the perception of a following target than unattended stimuli? Dutch listeners categorized target sounds with a reduced prefix “ge-” marking tense (e.g., ambiguous between gegaan-gaan “gone-go”). In ‘single talker’ Experiments 1–2, participants perceived the reduced syllable (reporting gegaan) when the target was heard after a fast sentence, but not after a slow sentence (reporting gaan). In ‘selective attention’ Experiments 3–5, participants listened to two simultaneous sentences from two different talkers, followed by the same target sounds, with instructions to attend only one of the two talkers. Critically, the speech rates of attended and unattended talkers were found to equally influence target perception – even when participants could watch the attended talker speak. In fact, participants’ target perception in ‘selective attention’ Experiments 3–5 did not differ from participants who were explicitly instructed to divide their attention equally across the two talkers (Experiment 6). This suggests that contrast effects of speech rate are immune to selective attention, largely operating prior to attentional stream segregation in the auditory processing hierarchy.

    Additional information

    Supplementary information
  • Bosker, H. R., Sjerps, M. J., & Reinisch, E. (2020). Spectral contrast effects are modulated by selective attention in ‘cocktail party’ settings. Attention, Perception & Psychophysics, 82, 1318-1332. doi:10.3758/s13414-019-01824-2.

    Abstract

    Speech sounds are perceived relative to spectral properties of surrounding speech. For instance, target words ambiguous between /bɪt/ (with low F1) and /bɛt/ (with high F1) are more likely to be perceived as “bet” after a ‘low F1’ sentence, but as “bit” after a ‘high F1’ sentence. However, it is unclear how these spectral contrast effects (SCEs) operate in multi-talker listening conditions. Recently, Feng and Oxenham [(2018b). J.Exp.Psychol.-Hum.Percept.Perform. 44(9), 1447–1457] reported that selective attention affected SCEs to a small degree, using two simultaneously presented sentences produced by a single talker. The present study assessed the role of selective attention in more naturalistic ‘cocktail party’ settings, with 200 lexically unique sentences, 20 target words, and different talkers. Results indicate that selective attention to one talker in one ear (while ignoring another talker in the other ear) modulates SCEs in such a way that only the spectral properties of the attended talker influence target perception. However, SCEs were much smaller in multi-talker settings (Experiment 2) than in single-talker settings (Experiment 1). Therefore, the influence of SCEs on speech comprehension in more naturalistic settings (i.e., with competing talkers) may be smaller than estimated based on studies without competing talkers.

    Additional information

    13414_2019_1824_MOESM1_ESM.docx
  • Brehm, L., Hussey, E., & Christianson, K. (2020). The role of word frequency and morpho-orthography in agreement processing. Language, Cognition and Neuroscience, 35(1), 58-77. doi:10.1080/23273798.2019.1631456.

    Abstract

    Agreement attraction in comprehension (when an ungrammatical verb is read quickly if preceded by a feature-matching local noun) is well described by a cue-based retrieval framework. This suggests a role for lexical retrieval in attraction. To examine this, we manipulated two probabilistic factors known to affect lexical retrieval: local noun word frequency and morpho-orthography (agreement morphology realised with or without –s endings) in a self-paced reading study. Noun number and word frequency affected noun and verb region reading times, with higher-frequency words not eliciting attraction. Morpho-orthography impacted verb processing but not attraction: atypical plurals led to slower verb reading times regardless of verb number. Exploratory individual difference analyses further underscore the importance of lexical retrieval dynamics in sentence processing. This provides evidence that agreement operates via a cue-based retrieval mechanism over lexical representations that vary in their strength and association to number features.

    Additional information

    Supplemental material
  • Brysbaert, M., Sui, L., Dirix, N., & Hintz, F. (2020). Dutch Author Recognition Test. Journal of Cognition, 3(1): 6. doi:10.5334/joc.95.

    Abstract

    Book reading shows large individual variability and correlates with better language ability and more empathy. This makes reading exposure an interesting variable to study. Research in English suggests that an author recognition test is the most reliable objective assessment of reading frequency. In this article, we describe the efforts we made to build and test a Dutch author recognition test (DART for older participants and DART_R for younger participants). Our data show that the test is reliable and valid, both in the Netherlands and in Belgium (split-half reliability over .9 with university students, significant correlations with language abilities) and can be used with a young, non-university population. The test is free to use for research purposes.

    Additional information

    Additional Files
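
    The split-half figure reported for the DART above can be made concrete. Here is a toy sketch, on simulated data rather than the published test scores, of one common way to estimate it: correlate total scores on two random halves of the items and apply the Spearman-Brown correction.

        import numpy as np

        def split_half_reliability(scores, n_splits=1000, seed=0):
            """Spearman-Brown-corrected split-half reliability, averaged over
            random splits of the item set. `scores` is a participants x items
            matrix of 0/1 recognition accuracies (hypothetical, not DART data)."""
            rng = np.random.default_rng(seed)
            n_items = scores.shape[1]
            rs = []
            for _ in range(n_splits):
                order = rng.permutation(n_items)
                a = scores[:, order[:n_items // 2]].sum(axis=1)
                b = scores[:, order[n_items // 2:]].sum(axis=1)
                r = np.corrcoef(a, b)[0, 1]
                rs.append(2 * r / (1 + r))  # Spearman-Brown step-up
            return float(np.mean(rs))

        # Toy demo: 100 simulated participants, 90 items driven by one latent trait.
        rng = np.random.default_rng(1)
        trait = rng.normal(size=(100, 1))
        scores = (trait + rng.normal(size=(100, 90)) > 0).astype(int)
        print(round(split_half_reliability(scores), 2))
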
  • Chan, R. W., Alday, P. M., Zou-Williams, L., Lushington, K., Schlesewsky, M., Bornkessel-Schlesewsky, I., & Immink, M. A. (2020). Focused-attention meditation increases cognitive control during motor sequence performance: Evidence from the N2 cortical evoked potential. Behavioural Brain Research, 384: 112536. doi:10.1016/j.bbr.2020.112536.

    Abstract

    Previous work found that single-session focused attention meditation (FAM) enhanced motor sequence learning through increased cognitive control as a mechanistic action, although electrophysiological correlates of sequence learning performance following FAM were not investigated. We measured the persistent frontal N2 event-related potential (ERP) that is closely related to cognitive control processes and its ability to predict behavioural measures. Twenty-nine participants were randomised to one of three conditions reflecting the level of FAM experienced prior to a serial reaction time task (SRTT): 21 sessions of FAM (FAM21, N = 12), a single FAM session (FAM1, N = 9) or no preceding FAM control (Control, N = 8). Continuous 64-channel EEG was recorded during the SRTT and N2 amplitudes for correct trials were extracted. Component amplitude, regions of interest, and behavioural outcomes were compared between groups using mixed-effects regression models. FAM21 exhibited faster reaction time performance in the majority of learning blocks compared to FAM1 and Control. FAM21 also demonstrated a significantly more pronounced N2 over the majority of anterior and central regions of interest during the SRTT compared to the other groups. When N2 amplitudes were modelled against general learning performance, FAM21 showed the greatest rate of amplitude decline over anterior and central regions. The combined results suggest that FAM training provided greater cognitive control enhancement for improved general performance, and less pronounced effects for sequence-specific learning performance compared to the other groups. Importantly, FAM training facilitates dynamic modulation of cognitive control: lower levels of general learning performance were supported by greater levels of activation, whilst higher levels of general learning exhibited less activation.
  • Cross, Z. R., Santamaria, A., Corcoran, A. W., Chatburn, A., Alday, P. M., Coussens, S., & Kohler, M. J. (2020). Individual alpha frequency modulates sleep-related emotional memory consolidation. Neuropsychologia, 148: 107660. doi:10.1016/j.neuropsychologia.2020.107660.

    Abstract

    Alpha-band oscillatory activity is involved in modulating memory and attention. However, few studies have investigated individual differences in oscillatory activity during the encoding of emotional memory, particularly in sleep paradigms where sleep is thought to play an active role in memory consolidation. The current study aimed to address the question of whether individual alpha frequency (IAF) modulates the consolidation of declarative memory across periods of sleep and wake. Twenty-two participants aged 18–41 years (mean age = 25.77) viewed 120 emotionally valenced images (positive, negative, neutral) and completed a baseline memory task before a 2-hr afternoon sleep opportunity and an equivalent period of wake. Following the sleep and wake conditions, participants were required to distinguish between 120 learned (target) images and 120 new (distractor) images. This method allowed us to delineate the role of different oscillatory components of sleep and wake states in the emotional modulation of memory. Linear mixed-effects models revealed interactions between IAF, rapid eye movement sleep theta power, and slow-wave sleep slow oscillatory density on memory outcomes. These results highlight the importance of individual factors in the EEG in modulating oscillatory-related memory consolidation and subsequent behavioural outcomes and test predictions proposed by models of sleep-based memory consolidation.

    Additional information

    supplementary data
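
    As a rough illustration of the kind of analysis reported above (linear mixed-effects models testing interactions of IAF with sleep-EEG measures on memory), the sketch below fits such a model with statsmodels on simulated data. All variable names (iaf, rem_theta, so_density, memory) are hypothetical placeholders, not the study's actual variables or results.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated long-format data: one row per participant x condition.
        rng = np.random.default_rng(0)
        n = 22 * 6
        df = pd.DataFrame({
            "subject": np.repeat([f"s{i}" for i in range(22)], 6),
            "iaf": np.repeat(rng.normal(10, 1, 22), 6),   # individual alpha frequency (Hz)
            "rem_theta": rng.normal(0, 1, n),             # REM theta power (z-scored)
            "so_density": rng.normal(0, 1, n),            # slow-oscillation density (z-scored)
        })
        df["memory"] = 0.5 * df["iaf"] * df["rem_theta"] + rng.normal(0, 1, n)

        # Random intercept per subject; fixed-effect interactions as in the abstract.
        model = smf.mixedlm("memory ~ iaf * rem_theta * so_density", df, groups=df["subject"])
        print(model.fit().summary())
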
  • Dempsey, J., & Brehm, L. (2020). Can propositional biases modulate syntactic repair processes? Insights from preceding comprehension questions. Journal of Cognitive Psychology, 32(5-6), 543-552. doi:10.1080/20445911.2020.1803884.

    Abstract

    There is an ongoing debate about whether discourse biases can constrain sentence processing. Previous work has shown comprehension question accuracy to decrease for temporarily ambiguous sentences preceded by a context biasing towards an initial misinterpretation, suggesting a role of context for modulating comprehension. However, this creates limited modulation of reading times at the disambiguating word, suggesting initial syntactic processing may be unaffected by context [Christianson & Luke, 2011. Context strengthens initial misinterpretations of text. Scientific Studies of Reading, 15(2), 136–166]. The current experiments examine whether propositional and structural content from preceding comprehension questions can cue readers to expect certain structures in temporarily ambiguous garden-path sentences. The central finding is that syntactic repair processes remain unaffected while reading times in other regions are modulated by preceding questions. This suggests that reading strategies can be superficially influenced by preceding comprehension questions without impacting the fidelity of ultimate (mis)representations.

    Additional information

    pecp_a_1803884_sm1217.zip
  • Ergin, R., Raviv, L., Senghas, A., Padden, C., & Sandler, W. (2020). Community structure affects convergence on uniform word orders: Evidence from emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 84-86). Nijmegen: The Evolution of Language Conferences.
  • Favier, S. (2020). Individual differences in syntactic knowledge and processing: Exploring the role of literacy experience. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • González Alonso, J., Alemán Bañón, J., DeLuca, V., Miller, D., Pereira Soares, S. M., Puig-Mayenco, E., Slaats, S., & Rothman, J. (2020). Event related potentials at initial exposure in third language acquisition: Implications from an artificial mini-grammar study. Journal of Neurolinguistics, 56: 100939. doi:10.1016/j.jneuroling.2020.100939.

    Abstract

    The present article examines the proposal that typology is a major factor guiding transfer selectivity in L3/Ln acquisition. We tested first exposure in L3/Ln using two artificial languages (ALs) lexically based in English and Spanish, focusing on gender agreement between determiners and nouns, and between nouns and adjectives. 50 L1 Spanish-L2 English speakers took part in the experiment. After receiving implicit training in one of the ALs (Mini-Spanish, N = 26; Mini-English, N = 24), gender violations elicited a fronto-lateral negativity in Mini-English in the earliest time window (200–500 ms), although this was not followed by any other differences in subsequent periods. This effect was highly localized, surfacing only in electrodes of the right-anterior region. In contrast, gender violations in Mini-Spanish elicited a broadly distributed positivity in the 300–600 ms time window. While we do not find typical indices of grammatical processing such as the P600 component, we believe that the between-groups differential appearance of the positivity for gender violations in the 300–600 ms time window reflects differential allocation of attentional resources as a function of the ALs’ lexical similarity to English or Spanish. We take these differences in attention to be precursors of the processes involved in transfer source selection in L3/Ln.
  • Hashemzadeh, M., Kaufeld, G., White, M., Martin, A. E., & Fyshe, A. (2020). From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli? In T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 645-655). Association for Computational Linguistics.

    Abstract

    The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate to brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
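
    A minimal sketch of the decoding logic described above, under heavy simplifying assumptions: take per-word hidden states from a toy (untrained) LSTM and test, with cross-validated ridge regression, how well they predict a simulated per-word brain response. The vocabulary, dimensions, and 'brain' data are all stand-ins; only the correlational logic mirrors the paper.

        import numpy as np
        import torch
        import torch.nn as nn
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_predict

        torch.manual_seed(0)
        rng = np.random.default_rng(0)

        # Toy LSTM "language model" over a 50-word vocabulary (untrained stand-in).
        vocab_size, emb_dim, hidden_dim, n_words = 50, 16, 32, 200
        embed = nn.Embedding(vocab_size, emb_dim)
        lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

        tokens = torch.randint(0, vocab_size, (1, n_words))
        with torch.no_grad():
            states, _ = lstm(embed(tokens))          # one hidden state per word
        X = states.squeeze(0).numpy()                # (n_words, hidden_dim)

        # Simulated per-word brain response that partly depends on the LSTM states.
        w = rng.normal(size=hidden_dim)
        y = X @ w + rng.normal(scale=2.0, size=n_words)

        # Cross-validated ridge mapping from LSTM states to brain activity.
        pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
        print("decoding correlation r =", round(np.corrcoef(pred, y)[0, 1], 2))
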
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Visual context constrains language-mediated anticipatory eye movements. Quarterly Journal of Experimental Psychology, 73(3), 458-467. doi:10.1177/1747021819881615.

    Abstract

    Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: The target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 seconds before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 second after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.

    Additional information

    Supplemental Material
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Activating words beyond the unfolding sentence: Contributions of event simulation and word associations to discourse reading. Neuropsychologia, 141: 107409. doi:10.1016/j.neuropsychologia.2020.107409.

    Abstract

    Previous studies have shown that during comprehension readers activate words beyond the unfolding sentence. An open question concerns the mechanisms underlying this behavior. One proposal is that readers mentally simulate the described event and activate related words that might be referred to as the discourse further unfolds. Another proposal is that activation between words spreads in an automatic, associative fashion. The empirical support for these proposals is mixed. Therefore, theoretical accounts differ with regard to how much weight they place on the contributions of these sources to sentence comprehension. In the present study, we attempted to assess the contributions of event simulation and lexical associations to discourse reading, using event-related brain potentials (ERPs). Participants read target words, which were preceded by associatively related words either appearing in a coherent discourse event (Experiment 1) or in sentences that did not form a coherent discourse event (Experiment 2). Contextually unexpected target words that were associatively related to the described events elicited a reduced N400 amplitude compared to contextually unexpected target words that were unrelated to the events (Experiment 1). In Experiment 2, a similar but reduced effect was observed. These findings support the notion that during discourse reading event simulation and simple word associations jointly contribute to language comprehension by activating words that are beyond contextually congruent sentence continuations.
  • Hintz*, F., Jongman*, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). Shared lexical access processes in speaking and listening? An individual differences study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1048-1063. doi:10.1037/xlm0000768.

    Abstract

    (* indicates joint first authorship.) Lexical access is a core component of word processing. In order to produce or comprehend a word, language users must access word forms in their mental lexicon. However, despite its involvement in both tasks, previous research has often studied lexical access in either production or comprehension alone. Therefore, it is unknown to what extent lexical access processes are shared across both tasks. Picture naming and auditory lexical decision are considered good tools for studying lexical access. Both of them are speeded tasks. Given these commonalities, another open question concerns the involvement of general cognitive abilities (e.g., processing speed) in both linguistic tasks. In the present study, we addressed these questions. We tested a large group of young adults enrolled in academic and vocational courses. Participants completed picture naming and auditory lexical decision tasks as well as a battery of tests assessing non-verbal processing speed, vocabulary, and non-verbal intelligence. Our results suggest that the lexical access processes involved in picture naming and lexical decision are related but less closely than one might have thought. Moreover, reaction times in picture naming and lexical decision depended at least as much on general processing speed as on domain-specific linguistic processes (i.e., lexical access processes).
  • Hintz, F., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). A behavioural dataset for studying individual differences in language skills. Scientific Data, 7: 429. doi:10.1038/s41597-020-00758-x.

    Abstract

    This resource contains data from 112 Dutch adults (18–29 years of age) who completed the Individual Differences in Language Skills test battery that included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting resulting in high quality data due to tight monitoring of the experimental protocol and to the use of software and hardware that were optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills.
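
    As a toy illustration of one reliability measure mentioned above, the snippet below computes test-retest reliability as the correlation between simulated day-1 and day-2 scores; the numbers are invented, not taken from the dataset.

        import numpy as np

        rng = np.random.default_rng(0)
        true_skill = rng.normal(size=112)                    # latent skill per participant
        day1 = true_skill + rng.normal(scale=0.5, size=112)  # hypothetical day-1 scores
        day2 = true_skill + rng.normal(scale=0.5, size=112)  # hypothetical day-2 scores
        test_retest_r = np.corrcoef(day1, day2)[0, 1]        # test-retest reliability
        print(round(test_retest_r, 2))
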
  • Huettig, F., Guerra, E., & Helo, A. (2020). Towards understanding the task dependency of embodied language processing: The influence of colour during language-vision interactions. Journal of Cognition, 3(1): 41. doi:10.5334/joc.135.

    Abstract

    A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘...spinach...’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g., green) as the spoken target word and three distractors. When hearing spinach, participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.

    Additional information

    Data files and script
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2020). How in-group bias influences the level of detail of speaker-specific information encoded in novel lexical representations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(5), 894-906. doi:10.1037/xlm0000765.

    Abstract

    An important issue in theories of word learning is how abstract or context-specific representations of novel words are. One aspect of this broad issue is how well learners maintain information about the source of novel words. We investigated whether listeners’ source memory was better for words learned from members of their in-group (students of their own university) than it is for words learned from members of an out-group (students from another institution). In the first session, participants saw 6 faces and learned which of the depicted students attended either their own or a different university. In the second session, they learned competing labels (e.g., citrus-peller and citrus-schiller; in English, lemon peeler and lemon stripper) for novel gadgets, produced by the in-group and out-group speakers. Participants were then tested for source memory of these labels and for the strength of their in-group bias, that is, for how much they preferentially process in-group over out-group information. Analyses of source memory accuracy demonstrated an interaction between speaker group membership status and participants’ in-group bias: Stronger in-group bias was associated with less accurate source memory for out-group labels than in-group labels. These results add to the growing body of evidence on the importance of social variables for adult word learning.
  • Iacozza, S. (2020). Exploring social biases in language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Jongman, S. R., Roelofs, A., & Lewis, A. G. (2020). Attention for speaking: Prestimulus motor-cortical alpha power predicts picture naming latencies. Journal of Cognitive Neuroscience, 32(5), 747-761. doi:10.1162/jocn_a_01513.

    Abstract

    There is a range of variability in the speed with which a single speaker will produce the same word from one instance to another. Individual differences studies have shown that the speed of production and the ability to maintain attention are related. This study investigated whether fluctuations in production latencies can be explained by spontaneous fluctuations in speakers' attention just prior to initiating speech planning. A relationship between individuals' incidental attentional state and response performance is well attested in visual perception, with lower prestimulus alpha power associated with faster manual responses. Alpha is thought to have an inhibitory function: Low alpha power suggests less inhibition of a specific brain region, whereas high alpha power suggests more inhibition. Does the same relationship hold for cognitively demanding tasks such as word production? In this study, participants named pictures while EEG was recorded, with alpha power taken to index an individual's momentary attentional state. Participants' level of alpha power just prior to picture presentation and just prior to speech onset predicted subsequent naming latencies. Specifically, higher alpha power in the motor system resulted in faster speech initiation. Our results suggest that one index of a lapse of attention during speaking is reduced inhibition of motor-cortical regions: Decreased motor-cortical alpha power indicates reduced inhibition of this area while early stages of production planning unfold, which leads to increased interference from motor-cortical signals and longer naming latencies. This study shows that the language production system is not impermeable to the influence of attention.
  • Jongman, S. R., Piai, V., & Meyer, A. S. (2020). Planning for language production: The electrophysiological signature of attention to the cue to speak. Language, Cognition and Neuroscience, 35(7), 915-932. doi:10.1080/23273798.2019.1690153.

    Abstract

    In conversation, speech planning can overlap with listening to the interlocutor. It has been postulated that once there is enough information to formulate a response, planning is initiated and the response is maintained in working memory. Concurrently, the auditory input is monitored for the turn end such that responses can be launched promptly. In three EEG experiments, we aimed to identify the neural signature of phonological planning and monitoring by comparing delayed responding to not responding (reading aloud, repetition and lexical decision). These comparisons consistently resulted in a sustained positivity and beta power reduction over posterior regions. We argue that these effects reflect attention to the sequence end. Phonological planning and maintenance were not detected in the neural signature even though it is highly likely these were taking place. This suggests that EEG must be used cautiously to identify response planning when the neural signal is overridden by attention effects.
  • Kaufeld, G., Naumann, W., Meyer, A. S., Bosker, H. R., & Martin, A. E. (2020). Contextual speech rate influences morphosyntactic prediction and integration. Language, Cognition and Neuroscience, 35(7), 933-948. doi:10.1080/23273798.2019.1701691.

    Abstract

    Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these demonstrations for theories of language processing.
  • Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.

    Abstract

    During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.
  • Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. (2020). Linguistic structure and meaning organize neural oscillations into a content-specific hierarchy. The Journal of Neuroscience, 40(49), 9467-9475. doi:10.1523/JNEUROSCI.0302-20.2020.

    Abstract

    Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Doelling et al., 2014; Zoefel & VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally-spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical timescale (1.9-2.8 Hz), suggesting that the delta-band is modulated by lexically-driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.
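
    To make the mutual-information (MI) logic above concrete, here is a schematic sketch: band-pass a simulated EEG channel at the phrasal timescale (0.8-1.1 Hz, the band named in the abstract) and estimate its MI with a simulated speech envelope by simple quantile binning. The signals, filter order, and histogram MI estimator are illustrative choices, not the published pipeline.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt
        from sklearn.metrics import mutual_info_score

        fs = 100                                   # Hz, toy sampling rate
        t = np.arange(0, 60, 1 / fs)               # 60 s of signal
        rng = np.random.default_rng(0)

        # Simulated speech envelope with a ~1 Hz phrasal modulation, and an EEG
        # channel that weakly tracks it in noise.
        envelope = 1 + np.sin(2 * np.pi * 1.0 * t)
        eeg = 0.3 * envelope + rng.normal(size=t.size)

        # Band-pass the EEG at the phrasal timescale (0.8-1.1 Hz, as in the abstract).
        sos = butter(4, [0.8, 1.1], btype="bandpass", fs=fs, output="sos")
        eeg_delta = sosfiltfilt(sos, eeg)

        def histogram_mi(x, y, bins=8):
            """Crude MI estimate (in nats) from quantile-binned signals."""
            qx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
            qy = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
            return mutual_info_score(qx, qy)

        print("MI(envelope, delta-band EEG) =", round(histogram_mi(envelope, eeg_delta), 3))
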
  • Kim, N., Brehm, L., Sturt, P., & Yoshida, M. (2020). How long can you hold the filler: Maintenance and retrieval. Language, Cognition and Neuroscience, 35(1), 17-42. doi:10.1080/23273798.2019.1626456.

    Abstract

    This study attempts to reveal the mechanisms behind the online formation of Wh-Filler-Gap Dependencies (WhFGD). Specifically, we aim to uncover the way in which maintenance and retrieval work in WhFGD processing, by paying special attention to the information that is retrieved when the gap is recognized. We use the agreement attraction phenomenon (Wagers, M. W., Lau, E. F., & Phillips, C. (2009). Agreement attraction in comprehension: Representations and processes. Journal of Memory and Language, 61(2), 206-237) as a probe. The first and second experiments examined the type of information that is maintained and how maintenance is motivated, investigating the retrieved information at the gap for reactivated fillers and definite NPs. The third experiment examined the role of the retrieval, comparing reactivated and active fillers. We contend that the information being accessed reflects the extent to which the filler is maintained, where the reader is able to access fine-grained information including category information as well as a representation of both the head and the modifier at the verb.

    Additional information

    Supplemental material
  • Knudsen, B., Creemers, A., & Meyer, A. S. (2020). Forgotten little words: How backchannels and particles may facilitate speech planning in conversation? Frontiers in Psychology, 11: 593671. doi:10.3389/fpsyg.2020.593671.

    Abstract

    In everyday conversation, turns often follow each other immediately or overlap in time. It has been proposed that speakers achieve this tight temporal coordination between their turns by engaging in linguistic dual-tasking, i.e., by beginning to plan their utterance during the preceding turn. This raises the question of how speakers manage to co-ordinate speech planning and listening with each other. Experimental work addressing this issue has mostly concerned the capacity demands and interference arising when speakers retrieve some content words while listening to others. However, many contributions to conversations are not content words, but backchannels, such as “hm”. Backchannels do not provide much conceptual content and are therefore easy to plan and respond to. To estimate how much they might facilitate speech planning in conversation, we determined their frequency in a Dutch and a German corpus of conversational speech. We found that 19% of the contributions in the Dutch corpus, and 16% of contributions in the German corpus were backchannels. In addition, many turns began with fillers or particles, most often translation equivalents of “yes” or “no,” which are likewise easy to plan. We proposed that to generate comprehensive models of using language in conversation psycholinguists should study not only the generation and processing of content words, as is commonly done, but also consider backchannels, fillers, and particles.
  • Kösem, A., Bosker, H. R., Jensen, O., Hagoort, P., & Riecke, L. (2020). Biasing the perception of spoken words with transcranial alternating current stimulation. Journal of Cognitive Neuroscience, 32(8), 1428-1437. doi:10.1162/jocn_a_01579.

    Abstract

    Recent neuroimaging evidence suggests that the frequency of entrained oscillations in auditory cortices influences the perceived duration of speech segments, impacting word perception (Kösem et al. 2018). We further tested the causal influence of neural entrainment frequency during speech processing, by manipulating entrainment with continuous transcranial alternating current stimulation (tACS) at distinct oscillatory frequencies (3 Hz and 5.5 Hz) above the auditory cortices. Dutch participants listened to speech and were asked to report their percept of a target Dutch word, which contained a vowel with an ambiguous duration. Target words were presented either in isolation (first experiment) or at the end of spoken sentences (second experiment). We predicted that the tACS frequency would influence neural entrainment and therewith how speech is perceptually sampled, leading to a perceptual over- or underestimation of the vowel’s duration. Whereas results from Experiment 1 did not confirm this prediction, results from Experiment 2 suggested a small effect of tACS frequency on target word perception: faster tACS led to more long-vowel word percepts, in line with the previous neuroimaging findings. Importantly, the difference in word perception induced by the different tACS frequencies was significantly larger in Experiment 2 than in Experiment 1, suggesting that the impact of tACS is dependent on the sensory context. tACS may have a stronger effect on spoken word perception when the words are presented in continuous speech as compared to when they are isolated, potentially because prior (stimulus-induced) entrainment of brain oscillations might be a prerequisite for tACS to be effective.

    Additional information

    Data availability
  • Lei, L., Raviv, L., & Alday, P. M. (2020). Using spatial visualizations and real-world social networks to understand language evolution and change. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 252-254). Nijmegen: The Evolution of Language Conferences.
  • Lev-Ari, S., & Sebanz, N. (2020). Interacting with multiple partners improves communication skills. Cognitive Science, 44(4): e12836. doi:10.1111/cogs.12836.

    Abstract

    Successful communication is important for both society and people’s personal life. Here we show that people can improve their communication skills by interacting with multiple others, and that this improvement seems to come about by a greater tendency to take the addressee’s perspective when there are multiple partners. In Experiment 1, during a training phase, participants described figures to a new partner in each round or to the same partner in all rounds. Then all participants interacted with a new partner and their recordings from that round were presented to naïve listeners. Participants who had interacted with multiple partners during training were better understood. This occurred despite the fact that the partners had not provided the participants with any input other than feedback on comprehension during the interaction. In Experiment 2, participants were asked to provide descriptions to a different future participant in each round or to the same future participant in all rounds. Next they performed a surprise memory test designed to tap memory for global details, in line with the addressee’s perspective. Those who had provided descriptions for multiple future participants performed better. These results indicate that people can improve their communication skills by interacting with multiple people, and that this advantage might be due to a greater tendency to take the addressee’s perspective in such cases. Our findings thus show how the social environment can influence our communication skills by shaping our own behavior during interaction in a manner that promotes the development of our communication skills.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2020). Eye-tracking the time course of distal and global speech rate effects. Journal of Experimental Psychology: Human Perception and Performance, 46(10), 1148-1163. doi:10.1037/xhp0000838.

    Abstract

    To comprehend speech sounds, listeners tune in to speech rate information in the proximal (immediately adjacent), distal (non-adjacent), and global context (further removed preceding and following sentences). Effects of global contextual speech rate cues on speech perception have been shown to follow constraints not found for proximal and distal speech rate. Therefore, listeners may process such global cues at distinct time points during word recognition. We conducted a printed-word eye-tracking experiment to compare the time courses of distal and global rate effects. Results indicated that the distal rate effect emerged immediately after target sound presentation, in line with a general-auditory account. The global rate effect, however, arose more than 200 ms later than the distal rate effect, indicating that distal and global context effects involve distinct processing mechanisms. Results are interpreted in a two-stage model of acoustic context effects. This model posits that distal context effects involve very early perceptual processes, while global context effects arise at a later stage, involving cognitive adjustments conditioned by higher-level information.
  • Montero-Melis, G., Isaksson, P., Van Paridon, J., & Ostarek, M. (2020). Does using a foreign language reduce mental imagery? Cognition, 196: 104134. doi:10.1016/j.cognition.2019.104134.

    Abstract

    In a recent article, Hayakawa and Keysar (2018) propose that mental imagery is less vivid when evoked in a foreign than in a native language. The authors argue that reduced mental imagery could even account for moral foreign language effects, whereby moral choices become more utilitarian when made in a foreign language. Here we demonstrate that Hayakawa and Keysar's (2018) key results are better explained by reduced language comprehension in a foreign language than by less vivid imagery. We argue that the paradigm used in Hayakawa and Keysar (2018) does not provide a satisfactory test of reduced imagery and we discuss an alternative paradigm based on recent experimental developments.

    Additional information

    Supplementary data and scripts
  • Nieuwland, M. S., Barr, D. J., Bartolozzi, F., Busch-Moreno, S., Darley, E., Donaldson, D. I., Ferguson, H. J., Fu, X., Heyselaar, E., Huettig, F., Husband, E. M., Ito, A., Kazanina, N., Kogan, V., Kohút, Z., Kulakova, E., Mézière, D., Politzer-Ahles, S., Rousselet, G., Rueschemeyer, S.-A., Segaert, K., Tuomainen, J., & Von Grebmer Zu Wolfsthurn, S. (2020). Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20180522. doi:10.1098/rstb.2018.0522.

    Abstract

    Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (N = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain’s electrophysiological index of semantic processing. A spatiotemporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate and integrate word meaning with context into a sentence-level meaning.
  • Raviv, L. (2020). Language and society: How social pressures shape grammatical structure. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). Network structure and the cultural evolution of linguistic structure: A group communication experiment. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 359-361). Nijmegen: The Evolution of Language Conferences.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). The role of social network structure in the emergence of linguistic structure. Cognitive Science, 44(8): e12876. doi:10.1111/cogs.12876.

    Abstract

    Social network structure has been argued to shape the structure of languages, as well as affect the spread of innovations and the formation of conventions in the community. Specifically, theoretical and computational models of language change predict that sparsely connected communities develop more systematic languages, while tightly knit communities can maintain high levels of linguistic complexity and variability. However, the role of social network structure in the cultural evolution of languages has never been tested experimentally. Here, we present results from a behavioral group communication study, in which we examined the formation of new languages created in the lab by micro‐societies that varied in their network structure. We contrasted three types of social networks: fully connected, small‐world, and scale‐free. We examined the artificial languages created by these different networks with respect to their linguistic structure, communicative success, stability, and convergence. Results did not reveal any effect of network structure for any measure, with all languages becoming similarly more systematic, more accurate, more stable, and more shared over time. At the same time, small‐world networks showed the greatest variation in their convergence, stabilization, and emerging structure patterns, indicating that network structure can influence the community's susceptibility to random linguistic changes (i.e., drift).
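
    The three community structures contrasted above are standard graph families; the sketch below shows how such networks might be generated with networkx (sizes and parameters are placeholders, not the experiment's group sizes).

        import networkx as nx

        # The three network types named in the abstract, at a toy size.
        n = 8
        networks = {
            "fully connected": nx.complete_graph(n),
            "small-world": nx.watts_strogatz_graph(n, k=4, p=0.1, seed=0),
            "scale-free": nx.barabasi_albert_graph(n, m=2, seed=0),
        }
        for name, g in networks.items():
            print(f"{name:15s} edges={g.number_of_edges():2d} "
                  f"clustering={nx.average_clustering(g):.2f}")
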
  • Rodd, J., Bosker, H. R., Ernestus, M., Alday, P. M., Meyer, A. S., & Ten Bosch, L. (2020). Control of speaking rate is achieved by switching between qualitatively distinct cognitive ‘gaits’: Evidence from simulation. Psychological Review, 127(2), 281-304. doi:10.1037/rev0000172.

    Abstract

    That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. Consider this analogy: When walking, speed can be continuously increased, within limits, but to speed up further, humans must run. Are there multiple qualitatively distinct speech “gaits” that resemble walking and running? Or is control achieved by continuous modulation of a single gait? This study investigates these possibilities through simulations of a new connectionist computational model of the cognitive process of speech production, EPONA, that borrows from Dell, Burger, and Svec’s (1997) model. The model has parameters that can be adjusted to fit the temporal characteristics of speech at different speaking rates. We trained the model on a corpus of disyllabic Dutch words produced at different speaking rates. During training, different clusters of parameter values (regimes) were identified for different speaking rates. In a 1-gait system, the regimes used to achieve fast and slow speech are qualitatively similar, but quantitatively different. In a multiple-gait system, there is no linear relationship between the parameter settings associated with each gait, resulting in an abrupt shift in parameter values to move from speaking slowly to speaking fast. After training, the model achieved good fits at all three speaking rates. The parameter settings associated with each speaking rate were not linearly related, suggesting the presence of cognitive gaits. Thus, we provide the first computationally explicit account of the ability to modulate the speech production system to achieve different speaking styles.

    Additional information

    Supplemental material
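
    The 'no linear relationship between parameter settings' diagnostic in the EPONA abstract above can be illustrated with a toy collinearity check: under a single-gait account, the fast-rate parameter vector should lie (approximately) on the line through the slow- and medium-rate vectors. The vectors below are invented placeholders, not fitted EPONA parameters.

        import numpy as np

        # Hypothetical fitted parameter vectors, one per speaking rate.
        slow = np.array([0.9, 0.2, 1.4])
        medium = np.array([1.1, 0.5, 1.0])
        fast = np.array([2.3, 0.4, 0.1])

        # Single-gait account: fast is roughly slow + s * (medium - slow) for some scalar s.
        direction = medium - slow
        s = (fast - slow) @ direction / (direction @ direction)
        residual = (fast - slow) - s * direction

        # A large residual (relative to the step size) means the fast regime is not a
        # linear extrapolation of the slow-to-medium change, i.e. a distinct gait.
        print("off-line residual norm:", round(float(np.linalg.norm(residual)), 3))
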
  • Rodd, J. (2020). How speaking fast is like running: Modelling control of speaking rate. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Shao, Z., & Rommers, J. (2020). How a question context aids word production: Evidence from the picture–word interference paradigm. Quarterly Journal of Experimental Psychology, 73(2), 165-173. doi:10.1177/1747021819882911.

    Abstract

    Difficulties in saying the right word at the right time arise at least in part because multiple response candidates are simultaneously activated in the speaker’s mind. The word selection process has been simulated using the picture–word interference task, in which participants name pictures while ignoring a superimposed written distractor word. However, words are usually produced in context, in the service of achieving a communicative goal. Two experiments addressed the questions whether context influences word production, and if so, how. We embedded the picture–word interference task in a dialogue-like setting, in which participants heard a question and named a picture as an answer to the question while ignoring a superimposed distractor word. The conversational context was either constraining or nonconstraining towards the answer. Manipulating the relationship between the picture name and the distractor, we focused on two core processes of word production: retrieval of semantic representations (Experiment 1) and phonological encoding (Experiment 2). The results of both experiments showed that naming reaction times (RTs) were shorter when preceded by constraining contexts as compared with nonconstraining contexts. Critically, constraining contexts decreased the effect of semantically related distractors but not the effect of phonologically related distractors. This suggests that conversational contexts can help speakers with aspects of the meaning of to-be-produced words, but phonological encoding processes still need to be performed as usual.
  • Sjerps, M. J., Decuyper, C., & Meyer, A. S. (2020). Initiation of utterance planning in response to pre-recorded and “live” utterances. Quarterly Journal of Experimental Psychology, 73(3), 357-374. doi:10.1177/1747021819881265.

    Abstract

    In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants’ timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants’ speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings.
  • Takashima, A., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2020). Speaking in the brain: The interaction between words and syntax in sentence production. Journal of Cognitive Neuroscience, 32(8), 1466-1483. doi:10.1162/jocn_a_01563.

    Abstract

    This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences that differed in verb argument structures (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudoverbs). The experiment consisted of 30 mini-blocks of six sentences each. Each mini-block started with an example for the type of sentence to be produced in that block. On each trial in the mini-blocks, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to sentences with pseudoverbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus, analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left posterior middle temporal gyrus/angular gyrus area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs leads to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical–syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been mainly based on single-word production, comprehension, and language processing in aphasia.
  • Terband, H., Rodd, J., & Maas, E. (2020). Testing hypotheses about the underlying deficit of Apraxia of Speech (AOS) through computational neural modelling with the DIVA model. International Journal of Speech-Language Pathology, 22(4), 475-486. doi:10.1080/17549507.2019.1669711.

    Abstract

    Purpose: A recent behavioural experiment featuring a noise masking paradigm suggests that Apraxia of Speech (AOS) reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts. The present study set out to validate the interpretation of AOS as a possible feedforward impairment using computational neural modelling with the DIVA (Directions Into Velocities of Articulators) model.

    Method: In a series of computational simulations with the DIVA model featuring a noise-masking paradigm mimicking the behavioural experiment, we investigated the effect of a feedforward, feedback, feedforward + feedback, and an upper motor neuron dysarthria impairment on average vowel spacing and dispersion in the production of six /bVt/ speech targets.

    Result: The simulation results indicate that the output of the model with the simulated feedforward deficit best resembled the group findings for the human speakers with AOS.

    Conclusion: These results support the interpretation of the human observations, corroborating the notion that AOS can be conceptualised as a deficit in feedforward control.
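
    To make the outcome measures named in the Method concrete, the sketch below computes one common operationalisation of vowel dispersion (mean Euclidean distance of vowel tokens from the F1-F2 centroid) and vowel spacing (mean pairwise distance between vowels). The formant values and variable names are invented for illustration; this is not the authors' analysis code.

        import numpy as np
        from itertools import combinations

        # Hypothetical (F1, F2) values in Hz for six /bVt/ vowel targets
        formants = np.array([
            [300, 2300], [400, 2000], [600, 1700],
            [700, 1100], [500, 900], [350, 800],
        ], dtype=float)

        centroid = formants.mean(axis=0)
        # Dispersion: mean distance of each vowel from the centre of the space
        dispersion = np.linalg.norm(formants - centroid, axis=1).mean()
        # Spacing: mean distance between all pairs of vowels
        spacing = np.mean([np.linalg.norm(a - b)
                           for a, b in combinations(formants, 2)])
        print(f"dispersion = {dispersion:.0f} Hz, spacing = {spacing:.0f} Hz")
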
  • Thompson, B., Raviv, L., & Kirby, S. (2020). Complexity can be maintained in small populations: A model of lexical variability in emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 440-442). Nijmegen: The Evolution of Language Conferences.
  • Van Os, M., De Jong, N. H., & Bosker, H. R. (2020). Fluency in dialogue: Turn‐taking behavior shapes perceived fluency in native and nonnative speech. Language Learning, 70(4), 1183-1217. doi:10.1111/lang.12416.

    Abstract

    Fluency is an important part of research on second language learning, but most research on language proficiency typically has not included oral fluency as part of interaction, even though natural communication usually occurs in conversations. The present study considered aspects of turn-taking behavior as part of the construct of fluency and investigated whether these aspects differentially influence perceived fluency ratings of native and non-native speech. Results from two experiments using acoustically manipulated speech showed that, in native speech, too ‘eager’ answers (interrupting a question with a fast answer) and too ‘reluctant’ answers (answering slowly after a long turn gap) negatively affected fluency ratings. However, in non-native speech, only too ‘reluctant’ answers led to lower fluency ratings. Thus, we demonstrate that acoustic properties of dialogue are perceived as part of fluency. By adding to our current understanding of dialogue fluency, these lab-based findings carry implications for language teaching and assessment.

    Additional information

    data + R analysis script via osf
  • Van Lipzig, E., Creemers, A., & Don, J. (2020). Morphological processing in nominalizations. Linguistics in the Netherlands, 37, 165-179. doi:10.1075/avt.00044.lip.

    Abstract

    A major debate in psycholinguistics concerns the representation of morphological structure in the mental lexicon. We report the results of an auditory primed lexical decision experiment in which we tested whether verbs prime their nominalizations in Dutch. We find morphological priming effects with regular nominalizations (schorsen ‘suspend’ → schorsing ‘suspension’) as well as with irregular nominalizations (schieten ‘shoot’ → schot ‘shot’). On this basis, we claim that, despite the lack of phonological identity between stem and derivation in the case of irregular nominalizations, the morphological relation between the two forms suffices to evoke a priming effect. However, an alternative explanation, according to which the semantic relation in combination with the phonological overlap accounts for the priming effect, cannot be excluded.
  • Zormpa, E. (2020). Memory for speaking and listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bosker, H. R., & Ghitza, O. (2018). Entrained theta oscillations guide perception of subsequent speech: Behavioral evidence from rate normalization. Language, Cognition and Neuroscience, 33(8), 955-967. doi:10.1080/23273798.2018.1439179.

    Abstract

    This psychoacoustic study provides behavioral evidence that neural entrainment in the theta range (3-9 Hz) causally shapes speech perception. Adopting the ‘rate normalization’ paradigm (presenting compressed carrier sentences followed by uncompressed target words), we show that uniform compression of a speech carrier to syllable rates inside the theta range influences perception of subsequent uncompressed targets, but compression outside theta range does not. However, the influence of carriers – compressed outside theta range – on target perception is salvaged when carriers are ‘repackaged’ to have a packet rate inside theta. This suggests that the brain can only successfully entrain to syllable/packet rates within theta range, with a causal influence on the perception of subsequent speech, in line with recent neuroimaging data. Thus, this study points to a central role for sustained theta entrainment in rate normalization and contributes to our understanding of the functional role of brain oscillations in speech perception.
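
    The ‘repackaging’ manipulation described above can be made concrete with a small sketch: time-compressed speech is cut into packets, and silent gaps are inserted so that the packet rate falls back inside the theta range. This is a schematic illustration under assumed parameters (packet duration, target rate), not the authors' stimulus-generation code.

        import numpy as np

        def repackage(signal, fs, packet_dur, target_rate):
            """Cut `signal` into packets of `packet_dur` seconds and pad each
            packet with silence so that packets recur at `target_rate` Hz."""
            hop = int(fs * packet_dur)            # samples per packet
            period = int(fs / target_rate)        # samples per packet cycle
            gap = np.zeros(max(period - hop, 0))  # silence appended per packet
            packets = [signal[i:i + hop] for i in range(0, len(signal), hop)]
            return np.concatenate([np.concatenate([p, gap]) for p in packets])

        fs = 16000
        rng = np.random.default_rng(0)
        compressed = rng.standard_normal(fs)      # stand-in for 1 s of compressed speech
        # Repackage 50-ms packets at a 6 Hz packet rate, inside the theta range
        repackaged = repackage(compressed, fs, packet_dur=0.05, target_rate=6.0)
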
  • Bosker, H. R. (2018). Putting Laurel and Yanny in context. The Journal of the Acoustical Society of America, 144(6), EL503-EL508. doi:10.1121/1.5070144.

    Abstract

    Recently, the world’s attention was caught by an audio clip that was perceived as “Laurel” or “Yanny”. Opinions were sharply split: many could not believe others heard something different from their perception. However, a crowd-source experiment with >500 participants shows that it is possible to make people hear Laurel, where they previously heard Yanny, by manipulating preceding acoustic context. This study is not only the first to reveal within-listener variation in Laurel/Yanny percepts, but also to demonstrate contrast effects for global spectral information in larger frequency regions. Thus, it highlights the intricacies of human perception underlying these social media phenomena.
  • Bosker, H. R., & Cooke, M. (2018). Talkers produce more pronounced amplitude modulations when speaking in noise. The Journal of the Acoustical Society of America, 143(2), EL121-EL126. doi:10.1121/1.5024404.

    Abstract

    Speakers adjust their voice when talking in noise (known as Lombard speech), facilitating speech comprehension. Recent neurobiological models of speech perception emphasize the role of amplitude modulations in speech-in-noise comprehension, helping neural oscillators to ‘track’ the attended speech. This study tested whether talkers produce more pronounced amplitude modulations in noise. Across four different corpora, modulation spectra showed greater power in amplitude modulations below 4 Hz in Lombard speech compared to matching plain speech. This suggests that noise-induced speech contains more pronounced amplitude modulations, potentially helping the listening brain to entrain to the attended talker, aiding comprehension.
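
    The key measure here, power in slow amplitude modulations, can be approximated in a few lines: extract the amplitude envelope, take the spectrum of that envelope, and sum power below 4 Hz. The sketch below shows one common way to do this (Hilbert envelope plus low-pass smoothing); the function name and filter settings are assumptions, and the paper's corpus pipeline may differ in detail.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import hilbert, butter, filtfilt

        def slow_modulation_power(wav_path):
            fs, x = wavfile.read(wav_path)
            x = x.astype(float)
            if x.ndim > 1:
                x = x.mean(axis=1)                 # mix down to mono
            env = np.abs(hilbert(x))               # amplitude envelope
            b, a = butter(4, 32 / (fs / 2), btype="low")
            env = filtfilt(b, a, env)              # smooth the envelope
            env = env - env.mean()                 # remove the DC component
            power = np.abs(np.fft.rfft(env)) ** 2  # modulation spectrum
            freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
            return power[(freqs > 0) & (freqs < 4)].sum()

        # e.g., compare plain vs. Lombard recordings of the same sentence:
        # slow_modulation_power("plain.wav") vs. slow_modulation_power("lombard.wav")
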
  • Brand, J., Monaghan, P., & Walker, P. (2018). Changing Signs: Testing How Sound-Symbolism Supports Early Word Learning. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1398-1403). Austin, TX: Cognitive Science Society.

    Abstract

    Learning a language involves learning how to map specific forms onto their associated meanings. Such mappings can utilise arbitrariness and non-arbitrariness, yet, our understanding of how these two systems operate at different stages of vocabulary development is still not fully understood. The Sound-Symbolism Bootstrapping Hypothesis (SSBH) proposes that sound-symbolism is essential for word learning to commence, but empirical evidence of exactly how sound-symbolism influences language learning is still sparse. It may be the case that sound-symbolism supports acquisition of categories of meaning, or that it enables acquisition of individualized word meanings. In two Experiments where participants learned form-meaning mappings from either sound-symbolic or arbitrary languages, we demonstrate the changing roles of sound-symbolism and arbitrariness for different vocabulary sizes, showing that sound-symbolism provides an advantage for learning of broad categories, which may then transfer to support learning individual words, whereas an arbitrary language impedes acquisition of categories of sound to meaning.
  • Brehm, L., & Goldrick, M. (2018). Connectionist principles in theories of speech production. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 372-397). Oxford: Oxford University Press.

    Abstract

    This chapter focuses on connectionist modeling in language production, highlighting how core principles of connectionism provide coverage for empirical observations about representation and selection at the phonological, lexical, and sentence levels. The first section focuses on the connectionist principles of localist representations and spreading activation. It discusses how these two principles have motivated classic models of speech production and shows how they cover results of the picture-word interference paradigm, the mixed error effect, and aphasic naming errors. The second section focuses on how newer connectionist models incorporate the principles of learning and distributed representations through discussion of syntactic priming, cumulative semantic interference, sequencing errors, phonological blends, and code-switching.
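
    To make the first of those principles concrete, here is a minimal localist spreading-activation sketch: activation injected at a concept node flows along weighted links to a semantically related concept and to word nodes over discrete time steps. The network, weights, and update rule are invented for illustration, not taken from any specific model in the chapter.

        import numpy as np

        nodes = ["CAT", "DOG", "word:cat", "word:dog"]
        # W[i, j]: weight of the link from node i to node j
        W = np.array([
            [0.0, 0.3, 0.8, 0.0],   # CAT -> DOG (semantic), CAT -> "cat" (lexical)
            [0.3, 0.0, 0.0, 0.8],   # DOG -> CAT,            DOG -> "dog"
            [0.0, 0.0, 0.0, 0.0],   # word nodes send no activation back here
            [0.0, 0.0, 0.0, 0.0],
        ])

        act = np.array([1.0, 0.0, 0.0, 0.0])  # seeing a cat activates CAT
        for step in range(3):
            act = act + 0.5 * (act @ W)       # each step spreads some activation
            print(step, np.round(act, 3))     # "word:dog" also becomes active,
                                              # the seed of competition effects
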
  • Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

    Abstract

    Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.

    Additional information

    psyp13064-sup-0001-s01.docx
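
    The core of the procedure described above is easy to sketch: smooth a resting-state power spectrum with a Savitzky-Golay filter, then read off peak alpha frequency (PAF) and the alpha-band centre of gravity (CoG). The toy function below illustrates the idea with assumed window and search-band parameters; the authors' released MATLAB/Python routines implement considerably more (e.g., channel handling and quality checks).

        import numpy as np
        from scipy.signal import savgol_filter

        def iaf_estimates(freqs, psd, band=(7.0, 13.0), window=11, polyorder=5):
            """freqs, psd: 1-D arrays from any PSD routine (e.g., Welch's method)."""
            smoothed = savgol_filter(psd, window_length=window, polyorder=polyorder)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            f_alpha, p_alpha = freqs[mask], smoothed[mask]
            paf = f_alpha[np.argmax(p_alpha)]                  # alpha peak frequency
            cog = np.sum(f_alpha * p_alpha) / np.sum(p_alpha)  # power-weighted mean
            return paf, cog
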
  • Doumas, L. A. A., & Martin, A. E. (2018). Learning structured representations from experience. Psychology of Learning and Motivation, 69, 165-203. doi:10.1016/bs.plm.2018.10.002.

    Abstract

    How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and the relations between them. An account of how we might acquire such representations has central importance for theories of human cognition. We describe how a system can learn structured relational representations from initially unstructured inputs using comparison, sensitivity to time, and a modified Hebbian learning algorithm. We summarize how the model DORA (Discovery of Relations by Analogy) instantiates this approach, which we call predicate learning, as well as how the model captures several phenomena from cognitive development, relational reasoning, and language processing in the human brain. Predicate learning offers a link between models based on formal languages and models which learn from experience and provides an existence proof for how structured representations might be learned in the first place.
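
    One ingredient of the account, Hebbian learning driven by comparison, can be illustrated in a few lines: when two compared items are co-active, only their shared features are jointly active, so a simple Hebbian update concentrates a new unit's weights on the shared property. This is a drastic simplification of DORA with invented features and numbers, intended only to convey the intuition.

        import numpy as np

        features = ["red", "round", "large", "furry"]
        fire_truck = np.array([1.0, 0.0, 1.0, 0.0])  # red, large
        apple      = np.array([1.0, 1.0, 0.0, 0.0])  # red, round

        w = np.zeros(len(features))   # incoming weights of a new candidate unit
        eta = 0.5                     # learning rate
        for _ in range(10):           # repeated co-activation during comparison
            pre = fire_truck * apple  # only shared features are jointly active
            post = 1.0                # the new unit fires while comparing
            w += eta * pre * post     # Hebbian update: dw = eta * pre * post

        print(dict(zip(features, w))) # weight accumulates on "red" alone
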
  • Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.

    Abstract

    Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic

    Additional information

    http://www.bcbl.eu/databases/multipic
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.

    Abstract

    Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing will be discussed.
  • Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.

    Abstract

    Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were as good in both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
  • Huettig, F., Kolinsky, R., & Lachmann, T. (2018). The culturally co-opted brain: How literacy affects the human mind. Language, Cognition and Neuroscience, 33(3), 275-277. doi:10.1080/23273798.2018.1425803.

    Abstract

    Introduction to the special issue 'The Effects of Literacy on Cognition and Brain Functioning'.
  • Huettig, F., Kolinsky, R., & Lachmann, T. (Eds.). (2018). The effects of literacy on cognition and brain functioning [Special Issue]. Language, Cognition and Neuroscience, 33(3).
  • Huettig, F., Lachmann, T., Reis, A., & Petersson, K. M. (2018). Distinguishing cause from effect - Many deficits associated with developmental dyslexia may be a consequence of reduced and suboptimal reading experience. Language, Cognition and Neuroscience, 33(3), 333-350. doi:10.1080/23273798.2017.1348528.

    Abstract

    The cause of developmental dyslexia is still unknown despite decades of intense research. Many causal explanations have been proposed, based on the range of impairments displayed by affected individuals. Here we draw attention to the fact that many of these impairments are also shown by illiterate individuals who have not received any or very little reading instruction. We suggest that this fact may not be coincidental and that the performance differences of both illiterates and individuals with dyslexia compared to literate controls are, to a substantial extent, secondary consequences of either reduced or suboptimal reading experience or a combination of both. The search for the primary causes of reading impairments will make progress if the consequences of quantitative and qualitative differences in reading experience are better taken into account and not mistaken for the causes of reading disorders. We close by providing four recommendations for future research.
  • Jackson, C. N., Mormer, E., & Brehm, L. (2018). The production of subject-verb agreement among Swedish and Chinese second language speakers of English. Studies in Second Language Acquisition, 40(4), 907-921. doi:10.1017/S0272263118000025.

    Abstract

    This study uses a sentence completion task with Swedish and Chinese L2 English speakers to investigate how L1 morphosyntax and L2 proficiency influence L2 English subject-verb agreement production. Chinese has limited nominal and verbal number morphology, while Swedish has robust noun phrase (NP) morphology but does not number-mark verbs. Results showed that like L1 English speakers, both L2 groups used grammatical and conceptual number to produce subject-verb agreement. However, only L1 Chinese speakers—and less-proficient speakers in both L2 groups—were similarly influenced by grammatical and conceptual number when producing the subject NP. These findings demonstrate how L2 proficiency, perhaps combined with cross-linguistic differences, influence L2 production and underscore that encoding of noun and verb number are not independent.
  • Kochari, A. R., & Ostarek, M. (2018). Introducing a replication-first rule for PhD projects (commentary on Zwaan et al., ‘Making replication mainstream’). Behavioral and Brain Sciences, 41: e138. doi:10.1017/S0140525X18000730.

    Abstract

    Zwaan et al. mention that young researchers should conduct replications as a small part of their portfolio. We extend this proposal and suggest that conducting and reporting replications should become an integral part of PhD projects and be taken into account in their assessment. We discuss how this would help not only scientific advancement, but also PhD candidates’ careers.
  • Konopka, A., Meyer, A. S., & Forest, T. A. (2018). Planning to speak in L1 and L2. Cognitive Psychology, 102, 72-104. doi:10.1016/j.cogpsych.2017.12.003.

    Abstract

    The leading theories of sentence planning – Hierarchical Incrementality and Linear Incrementality – differ in their assumptions about the coordination of processes that map preverbal information onto language. Previous studies showed that, in native (L1) speakers, this coordination can vary with the ease of executing the message-level and sentence-level processes necessary to plan and produce an utterance. We report the first series of experiments to systematically examine how linguistic experience influences sentence planning in native (L1) speakers (i.e., speakers with life-long experience using the target language) and non-native (L2) speakers (i.e., speakers with less experience using the target language). In all experiments, speakers spontaneously generated one-sentence descriptions of simple events in Dutch (L1) and English (L2). Analyses of eye-movements across early and late time windows (pre- and post-400 ms) compared the extent of early message-level encoding and the onset of linguistic encoding. In Experiment 1, speakers were more likely to engage in extensive message-level encoding and to delay sentence-level encoding when using their L2. Experiments 2–4 selectively facilitated encoding of the preverbal message, encoding of the agent character (i.e., the first content word in active sentences), and encoding of the sentence verb (i.e., the second content word in active sentences) respectively. Experiment 2 showed that there is no delay in the onset of L2 linguistic encoding when speakers are familiar with the events. Experiments 3 and 4 showed that the delay in the onset of L2 linguistic encoding is not due to speakers delaying encoding of the agent, but due to a preference to encode information needed to select a suitable verb early in the formulation process. Overall, speakers prefer to temporally separate message-level from sentence-level encoding and to prioritize encoding of relational information when planning L2 sentences, consistent with Hierarchical Incrementality.
  • Kösem, A., Bosker, H. R., Takashima, A., Meyer, A. S., Jensen, O., & Hagoort, P. (2018). Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875. doi:10.1016/j.cub.2018.07.023.

    Abstract

    Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word’s vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception.
  • Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., Baguley, T., Becker, R. B., Benning, S. D., Bradford, D. E., Buchanan, E. M., Caldwell, A. R., Van Calster, B., Carlsson, R., Chen, S.-C., Chung, B., Colling, L. J., Collins, G. S., Crook, Z., Cross, E. S., Daniels, S., Danielsson, H., DeBruine, L., Dunleavy, D. J., Earp, B. D., Feist, M. I., Ferrelle, J. D., Field, J. G., Fox, N. W., Friesen, A., Gomes, C., Gonzalez-Marquez, M., Grange, J. A., Grieve, A. P., Guggenberger, R., Grist, J., Van Harmelen, A.-L., Hasselman, F., Hochard, K. D., Hoffarth, M. R., Holmes, N. P., Ingre, M., Isager, P. M., Isotalus, H. K., Johansson, C., Juszczyk, K., Kenny, D. A., Khalil, A. A., Konat, B., Lao, J., Larsen, E. G., Lodder, G. M. A., Lukavský, J., Madan, C. R., Manheim, D., Martin, S. R., Martin, A. E., Mayo, D. G., McCarthy, R. J., McConway, K., McFarland, C., Nio, A. Q. X., Nilsonne, G., De Oliveira, C. L., De Xivry, J.-J.-O., Parsons, S., Pfuhl, G., Quinn, K. A., Sakon, J. J., Saribay, S. A., Schneider, I. K., Selvaraju, M., Sjoerds, Z., Smith, S. G., Smits, T., Spies, J. R., Sreekumar, V., Steltenpohl, C. N., Stenhouse, N., Świątkowski, W., Vadillo, M. A., Van Assen, M. A. L. M., Williams, M. N., Williams, S. E., Williams, D. R., Yarkoni, T., Ziano, I., & Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2, 168-171. doi:10.1038/s41562-018-0311-x.

    Abstract

    In response to recommendations to redefine statistical significance to P ≤ 0.005, we propose that researchers should transparently report and justify all choices they make when designing a study, including the alpha level.
  • Lev-Ari, S. (2018). Social network size can influence linguistic malleability and the propagation of linguistic change. Cognition, 176, 31-39. doi:10.1016/j.cognition.2018.03.003.

    Abstract

    We learn language from our social environment, but the more sources we have, the less informative each source is, and therefore, the less weight we ascribe its input. According to this principle, people with larger social networks should give less weight to new incoming information, and should therefore be less susceptible to the influence of new speakers. This paper tests this prediction, and shows that speakers with smaller social networks indeed have more malleable linguistic representations. In particular, they are more likely to adjust their lexical boundary following exposure to a new speaker. Experiment 2 uses computational simulations to test whether this greater malleability could lead people with smaller social networks to be important for the propagation of linguistic change despite the fact that they interact with fewer people. The results indicate that when innovators were connected with people with smaller rather than larger social networks, the population exhibited greater and faster diffusion. Together these experiments show that the properties of people’s social networks can influence individuals’ learning and use as well as linguistic phenomena at the community level.
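
    The weighting principle the abstract starts from (more sources, less weight per source) lends itself to a one-line formalisation. The toy function below updates a listener's lexical boundary as a weighted average in which a single new speaker receives weight 1/(N+1) for a network of N sources; the numbers are invented, and the linear-averaging rule is an assumption for illustration only.

        def updated_boundary(current, new_speaker_value, network_size):
            """Weighted-average update: one new source among `network_size` old ones."""
            w = 1.0 / (network_size + 1)
            return (1 - w) * current + w * new_speaker_value

        # A new speaker whose category boundary sits at 0.80 shifts a listener
        # with 5 sources far more than one with 50 sources:
        for n in (5, 50):
            print(n, round(updated_boundary(0.50, 0.80, n), 3))
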
  • Lev-Ari, S. (2018). The influence of social network size on speech perception. Quarterly Journal of Experimental Psychology, 71(10), 2249-2260. doi:10.1177/1747021817739865.

    Abstract

    Infants and adults learn new phonological varieties better when exposed to multiple rather than a single speaker. This article tests whether having a larger social network similarly facilitates phonological performance. Experiment 1 shows that people with larger social networks are better at vowel perception in noise, indicating that the benefit of laboratory exposure to multiple speakers extends to real life experience and to adults tested in their native language. Furthermore, the experiment shows that this association is not due to differences in amount of input or to cognitive differences between people with different social network sizes. Follow-up computational simulations reveal that the benefit of larger social networks is mostly due to increased input variability. Additionally, the simulations show that the boost that larger social networks provide is independent of the amount of input received but is larger if the population is more heterogeneous. Finally, a comparison of “adult” and “child” simulations reconciles previous conflicting findings by suggesting that input variability along the relevant dimension might be less useful at the earliest stages of learning. Together, this article shows when and how the size of our social network influences our speech perception. It thus shows how aspects of our lifestyle can influence our linguistic performance.

    Additional information

    QJE-STD_17-073.R4-Table_A1.docx
  • Mainz, N. (2018). Vocabulary knowledge and learning: Individual differences in adult native speakers. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Mani, N., Mishra, R. K., & Huettig, F. (Eds.). (2018). The interactive mind: Language, vision and attention. Chennai: Macmillan Publishers India.
  • Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India.
  • Martin, A. E. (2018). Cue integration during sentence comprehension: Electrophysiological evidence from ellipsis. PLoS One, 13(11): e0206616. doi:10.1371/journal.pone.0206616.

    Abstract

    Language processing requires us to integrate incoming linguistic representations with representations of past input, often across intervening words and phrases. This computational situation has been argued to require retrieval of the appropriate representations from memory via a set of features or representations serving as retrieval cues. However, even within a cue-based retrieval account of language comprehension, both the structure of retrieval cues and the particular computation that underlies direct-access retrieval are still underspecified. Evidence from two event-related brain potential (ERP) experiments that show cue-based interference from different types of linguistic representations during ellipsis comprehension is consistent with an architecture wherein different cue types are integrated, and where the interaction of cue with the recent contents of memory determines processing outcome, including expression of the interference effect in ERP componentry. I conclude that retrieval likely includes a computation where cues are integrated with the contents of memory via a linear weighting scheme, and I propose vector addition as a candidate formalization of this computation. I attempt to account for these effects and other related phenomena within a broader cue-based framework of language processing.
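
    The abstract's closing proposal (cues combine via a linear weighting scheme, formalized as vector addition) can be sketched directly: weighted cue vectors are summed into a single probe, and the probe is matched against each memory item by a dot product. All vectors, weights, and item names below are invented for illustration.

        import numpy as np

        dims = ["singular", "plural", "subject", "animate"]   # feature dimensions
        memory = {
            "the editor":  np.array([1.0, 0.0, 1.0, 1.0]),
            "the reports": np.array([0.0, 1.0, 0.0, 0.0]),
        }

        # Cues at the ellipsis site: a syntactic cue (subject) and a
        # semantic cue (animate), combined by weighted vector addition.
        cue_syntactic = np.array([0.0, 0.0, 1.0, 0.0])
        cue_semantic  = np.array([0.0, 0.0, 0.0, 1.0])
        probe = 0.6 * cue_syntactic + 0.4 * cue_semantic

        # Direct-access retrieval: the best dot-product match wins.
        scores = {item: float(vec @ probe) for item, vec in memory.items()}
        print(max(scores, key=scores.get))   # -> "the editor"
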
  • Martin, A. E., & McElree, B. (2018). Retrieval cues and syntactic ambiguity resolution: Speed-accuracy tradeoff evidence. Language, Cognition and Neuroscience, 33(6), 769-783. doi:10.1080/23273798.2018.1427877.

    Abstract

    Language comprehension involves coping with ambiguity and recovering from misanalysis. Syntactic ambiguity resolution is associated with increased reading times, a classic finding that has shaped theories of sentence processing. However, reaction times conflate the time it takes a process to complete with the quality of the behavior-related information available to the system. We therefore used the speed-accuracy tradeoff procedure (SAT) to derive orthogonal estimates of processing time and interpretation accuracy, and tested whether stronger retrieval cues (via semantic relatedness: neighed->horse vs. fell->horse) aid interpretation during recovery. On average, ambiguous sentences took 250ms longer (SAT rate) to interpret than unambiguous controls, demonstrating veridical differences in processing time. Retrieval cues more strongly related to the true subject always increased accuracy, regardless of ambiguity. These findings are consistent with a language processing architecture where cue-driven operations give rise to interpretation, and wherein diagnostic cues aid retrieval, regardless of parsing difficulty or structural uncertainty.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Listening to yourself is special: Evidence from global speech rate tracking. PLoS One, 13(9): e0203571. doi:10.1371/journal.pone.0203571.

    Abstract

    Listeners are known to use adjacent contextual speech rate in processing temporally ambiguous speech sounds. For instance, an ambiguous vowel between short /A/ and long /a:/ in Dutch sounds relatively long (i.e., as /a:/) embedded in a fast precursor sentence, but short in a slow sentence. Besides the local speech rate, listeners also track talker-specific global speech rates. However, it is yet unclear whether other talkers' global rates are encoded with reference to a listener's self-produced rate. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly. The groups were compared on their perception of ambiguous /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech and again evaluated target vowels in neutral rate speech. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 but with a new participant sample that was unfamiliar with the participants from Experiment 2. This experiment revealed fewer /a:/ responses in neutral speech in the group also listening to a fast rate, suggesting that neutral speech sounds slow in the presence of a fast talker and vice versa. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception in dialogue settings.
  • Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.

    Abstract

    As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.

    Additional information

    Data_Sheet_1.pdf
  • Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India.
  • Monster, I., & Lev-Ari, S. (2018). The effect of social network size on hashtag adoption on Twitter. Cognitive Science, 42(8), 3149-3158. doi:10.1111/cogs.12675.

    Abstract

    Propagation of novel linguistic terms is an important aspect of language use and language change. Here, we test how social network size influences people’s likelihood of adopting novel labels by examining hashtag use on Twitter. Specifically, we test whether following fewer Twitter users leads to more varied and malleable hashtag use on Twitter, because each followed user is ascribed greater weight and thus exerts greater influence on the following user. Focusing on Dutch users tweeting about the terrorist attack in Brussels in 2016, we show that people who follow fewer other users use a larger number of unique hashtags to refer to the event, reflecting greater malleability and variability in use. These results have implications for theories of language learning, language use, and language change.
  • Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.

    Abstract

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.

    Additional information

    Data sets
  • Ostarek, M. (2018). Envisioning language: An exploration of perceptual processes in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ostarek, M., Ishag, I., Joosen, D., & Huettig, F. (2018). Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1658-1670. doi:10.1037/xlm0000536.

    Abstract

    Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible vs. incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent vs. incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time results for related targets replicated the location-specific priming effect and showed a trend towards interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match vs. mismatch with immediately following targets in terms of semantics and vertical location influences concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth with up words biasing changes in y-coordinates over time upwards relative to down words (and vice versa). Similar to the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, at around halfway into the saccade the effect kept increasing in the semantically related condition, and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets.
  • Popov, V., Ostarek, M., & Tenison, C. (2018). Practices and pitfalls in inferring neural representations. NeuroImage, 174, 340-351. doi:10.1016/j.neuroimage.2018.03.041.

    Abstract

    A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain. Stimulus-feature-based encoding models are becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid because successful prediction can occur even if the two representational spaces use different, but correlated, representational schemes. We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations. Detailed analysis of the encoding models' predictions showed systematic deviations from ground-truth, indicating that high prediction accuracy is insufficient for making representational inferences. This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations, including model comparison, absolute model performance, visualization techniques and attentional modulation.
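
    The paper's central caution can be reproduced in a toy simulation: generate "neural" data and stimulus features as different linear views of the same latent variables, fit a linear encoding model, and observe high prediction accuracy even though the two spaces use different representational schemes. The sketch below illustrates that logic; it is not the authors' simulation code, and all sizes and noise levels are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n_stim, d = 200, 5
        latent = rng.normal(size=(n_stim, d))

        # Stimulus features and "neural" responses are different linear views
        # of the same latent variables: correlated, but not the same scheme.
        stim_features = latent @ rng.normal(size=(d, d))
        neural = latent @ rng.normal(size=(d, 40)) + 0.1 * rng.normal(size=(n_stim, 40))

        train, test = slice(0, 150), slice(150, None)
        beta = np.linalg.lstsq(stim_features[train], neural[train], rcond=None)[0]
        pred = stim_features[test] @ beta

        r = [np.corrcoef(pred[:, v], neural[test][:, v])[0, 1]
             for v in range(neural.shape[1])]
        print(f"mean prediction r = {np.mean(r):.2f}")  # high, despite mismatched codes
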
  • Raviv, L., & Arnon, I. (2018). Systematicity, but not compositionality: Examining the emergence of linguistic structure in children and adults using iterated learning. Cognition, 181, 160-173. doi:10.1016/j.cognition.2018.08.011.

    Abstract

    Recent work suggests that cultural transmission can lead to the emergence of linguistic structure as speakers’ weak individual biases become amplified through iterated learning. However, to date no published study has demonstrated a similar emergence of linguistic structure in children. The lack of evidence from child learners constitutes a problematic gap in the literature: if such learning biases impact the emergence of linguistic structure, they should also be found in children, who are the primary learners in real-life language transmission. However, children may differ from adults in their biases given age-related differences in general cognitive skills. Moreover, adults’ performance on iterated learning tasks may reflect existing (and explicit) linguistic biases, partially undermining the generality of the results. Examining children’s performance can also help evaluate contrasting predictions about their role in emerging languages: do children play a larger or smaller role than adults in the creation of structure? Here, we report a series of four iterated artificial language learning studies (based on Kirby, Cornish & Smith, 2008) with both children and adults, using a novel child-friendly paradigm. Our results show that linguistic structure does not emerge more readily in children compared to adults, and that adults are overall better in both language learning and in creating linguistic structure. When languages could become underspecified (by allowing homonyms), children and adults were similar in developing consistent mappings between meanings and signals in the form of structured ambiguities. However, when homonymity was not allowed, only adults created compositional structure. This study is a first step in using iterated language learning paradigms to explore child-adult differences. It provides the first demonstration that cultural transmission has a different effect on the languages produced by children and adults: While children were able to develop systematicity, their languages did not show compositionality. We focus on the relation between learning and structure creation as a possible explanation for our findings and discuss implications for children’s role in the emergence of linguistic structure.

    Additional information

    results A results B results D stimuli
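
    For readers unfamiliar with the iterated learning paradigm the study builds on, the skeleton below shows the transmission-chain logic: each generation learns the previous generation's meaning-signal mapping from only a partial sample, yet must produce a signal for every meaning, so only what survives the bottleneck is transmitted. This is a bare schematic with invented meanings and signals; the actual child-friendly paradigm is far richer.

        import random

        random.seed(0)
        meanings = [(shape, color) for shape in "ABC" for color in "xy"]
        syllables = ["ka", "po", "ne", "mu"]

        def random_signal():
            return "".join(random.choices(syllables, k=2))

        language = {m: random_signal() for m in meanings}  # generation 0: holistic

        for generation in range(10):
            # Transmission bottleneck: the learner sees only part of the language...
            seen = dict(random.sample(list(language.items()), k=4))
            # ...but must produce a signal for every meaning at test.
            language = {m: seen.get(m, random_signal()) for m in meanings}

        print(language)
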
  • Raviv, L., & Arnon, I. (2018). The developmental trajectory of children’s auditory and visual statistical learning abilities: Modality-based differences in the effect of age. Developmental Science, 21(4): e12593. doi:10.1111/desc.12593.

    Abstract

    Infants, children and adults are capable of extracting recurring patterns from their environment through statistical learning (SL), an implicit learning mechanism that is considered to have an important role in language acquisition. Research over the past 20 years has shown that SL is present from very early infancy and found in a variety of tasks and across modalities (e.g., auditory, visual), raising questions on the domain generality of SL. However, while SL is well established for infants and adults, only little is known about its developmental trajectory during childhood, leaving two important questions unanswered: (1) Is SL an early-maturing capacity that is fully developed in infancy, or does it improve with age like other cognitive capacities (e.g., memory)? and (2) Will SL have similar developmental trajectories across modalities? Only few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, no study to date has examined auditory SL across childhood, nor compared it to visual SL to see if there are modality-based differences in the developmental trajectory of SL abilities. We addressed these issues by conducting a large-scale study of children's performance on matching auditory and visual SL tasks across a wide age range (5–12y). Results show modality-based differences in the development of SL abilities: while children's learning in the visual domain improved with age, learning in the auditory domain did not change in the tested age range. We examine these findings in light of previous studies and discuss their implications for modality-based differences in SL and for the role of auditory SL in language acquisition. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=3kg35hoF0pw.

    Additional information

    Video abstract of the article
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 402-404). Toruń, Poland: NCU Press. doi:10.12775/3991-1.096.
  • Schillingmann, L., Ernst, J., Keite, V., Wrede, B., Meyer, A. S., & Belke, E. (2018). AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes. Behavior Research Methods, 50(2), 466-489. doi:10.3758/s13428-017-1002-7.

    Abstract

    In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that establishes preliminarily the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool’s performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool still is highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
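
    Downstream of an aligner like AlignTool, one typically just needs the word boundaries out of the resulting Praat TextGrid. The sketch below is a minimal reader for the standard long TextGrid text format that collects non-empty intervals; it is a toy illustration, not part of AlignTool itself (which also handles the Praat and MAUS alignment steps).

        import re

        def read_intervals(textgrid_path):
            """Return (xmin, xmax, label) for every labelled interval in the file."""
            text = open(textgrid_path, encoding="utf-8").read()
            pattern = re.compile(
                r'xmin = ([\d.]+)\s*\n\s*xmax = ([\d.]+)\s*\n\s*text = "([^"]*)"')
            return [(float(a), float(b), label)
                    for a, b, label in pattern.findall(text)
                    if label.strip()]

        # for onset, offset, word in read_intervals("utterance.TextGrid"):
        #     print(f"{word}: {onset:.3f}-{offset:.3f} s")
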
  • Shao, Z., & Meyer, A. S. (2018). Word priming and interference paradigms. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 111-129). Hoboken: Wiley.
  • Tromp, J. (2018). Indirect request comprehension in different contexts. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of Virtual Reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862-869. doi:10.3758/s13428-017-0911-9.

    Abstract

    When we comprehend language, we often do this in rich settings in which we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and non-linguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and Virtual Reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant, while wearing EEG equipment. In the restaurant participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g. a plate with salmon). The restaurant guest would then produce a sentence (e.g. “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) with the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
  • Van Bergen, G., & Bosker, H. R. (2018). Linguistic expectation management in online discourse processing: An investigation of Dutch inderdaad 'indeed' and eigenlijk 'actually'. Journal of Memory and Language, 103, 191-209. doi:10.1016/j.jml.2018.08.004.

    Abstract

    Interpersonal discourse particles (DPs), such as Dutch inderdaad (≈‘indeed’) and eigenlijk (≈‘actually’), are highly frequent in everyday conversational interaction. Despite extensive theoretical descriptions of their polyfunctionality, little is known about how they are used by language comprehenders. In two visual world eye-tracking experiments involving an online dialogue completion task, we asked to what extent inderdaad, confirming an inferred expectation, and eigenlijk, contrasting with an inferred expectation, influence real-time understanding of dialogues. Answers in the dialogues contained a DP or a control adverb, and a critical discourse referent was replaced by a beep; participants chose the most likely dialogue completion by clicking on one of four referents in a display. Results show that listeners make rapid and fine-grained situation-specific inferences about the use of DPs, modulating their expectations about how the dialogue will unfold. Findings further specify and constrain theories about the conversation-managing function and polyfunctionality of DPs.
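
    Visual world data of this kind are commonly summarized as the proportion of fixations to each referent in successive time bins. The toy sketch below illustrates that aggregation step only; the gaze samples, bin size, and function name are invented for illustration and do not reproduce the authors' analysis.

    from collections import Counter

    def fixation_proportions(samples, bin_ms=100):
        """samples: (time_ms, roi) gaze samples, roi being the referent
        currently fixated. Returns {bin_start_ms: {roi: proportion}}."""
        bins = {}
        for t, roi in samples:
            bins.setdefault(int(t // bin_ms) * bin_ms, Counter())[roi] += 1
        return {start: {roi: n / sum(c.values()) for roi, n in c.items()}
                for start, c in sorted(bins.items())}

    # Invented samples: looks shift to the expected referent over time.
    samples = [(0, "competitor"), (50, "competitor"), (100, "competitor"),
               (150, "target"), (200, "target"), (250, "target")]
    for start, props in fixation_proportions(samples).items():
        print(start, props)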
  • Vromans, R. D., & Jongman, S. R. (2018). The interplay between selective and nonselective inhibition during single word production. PLoS One, 13(5): e0197313. doi:10.1371/journal.pone.0197313.

    Abstract

    The present study investigated the interplay between selective inhibition (the ability to suppress specific competing responses) and nonselective inhibition (the ability to suppress any inappropriate response) during single word production. To this end, we combined two well-established research paradigms: the picture-word interference task and the stop-signal task. Selective inhibition was assessed by instructing participants to name target pictures (e.g., dog) in the presence of semantically related (e.g., cat) or unrelated (e.g., window) distractor words. Nonselective inhibition was tested by occasionally presenting a visual stop-signal, indicating that participants should withhold their verbal response. The stop-signal was presented either early (250 ms), aimed at interrupting the lexical selection stage, or late (325 ms), aimed at influencing the word-encoding stage of the speech production process. We found longer naming latencies for pictures with semantically related distractors than with unrelated distractors (semantic interference effect). The results further showed that, at both delays, stopping latencies (i.e., stop-signal RTs) were prolonged for naming pictures with semantically related distractors compared to pictures with unrelated distractors. Taken together, our findings suggest that selective and nonselective inhibition, at least partly, share a common inhibitory mechanism during different stages of the speech production process.

    Additional information

    Data available (link to Figshare)
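
    Stop-signal RTs like those reported in the study above are conventionally estimated with the integration method: the stop-signal RT is the go RT at the quantile corresponding to the probability of (incorrectly) responding on stop trials, minus the stop-signal delay. A minimal sketch of that textbook computation, with invented numbers:

    import random

    def ssrt_integration(go_rts, p_respond, ssd):
        """Integration method: take the go RT at the quantile equal to
        the probability of responding on stop trials, then subtract the
        stop-signal delay. All times in ms."""
        rts = sorted(go_rts)
        n = int(round(p_respond * len(rts)))
        n = min(max(n, 1), len(rts))  # clamp to a valid rank
        return rts[n - 1] - ssd

    # Invented example: 1000 go trials; at the 250 ms delay the
    # participant failed to stop on 45% of stop trials.
    random.seed(1)
    go_rts = [random.gauss(600, 80) for _ in range(1000)]
    print(f"estimated SSRT: {ssrt_integration(go_rts, 0.45, ssd=250):.0f} ms")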
  • Wang, M., Shao, Z., Chen, Y., & Schiller, N. O. (2018). Neural correlates of spoken word production in semantic and phonological blocked cyclic naming. Language, Cognition and Neuroscience, 33(5), 575-586. doi:10.1080/23273798.2017.1395467.

    Abstract

    The blocked cyclic naming paradigm has been increasingly employed to investigate the mechanisms underlying spoken word production. Semantic homogeneity typically elicits longer naming latencies than heterogeneity; however, it is debated whether competitive lexical selection or incremental learning underlies this effect. The current study manipulated both semantic and phonological homogeneity and used behavioural and electrophysiological measurements to provide evidence that can distinguish between the two accounts. Results show that naming latencies are longer in semantically homogeneous blocks, but shorter in phonologically homogeneous blocks, relative to heterogeneity. The semantic factor significantly modulates electrophysiological waveforms from 200 ms and the phonological factor from 350 ms after picture presentation. A positive component was demonstrated in both manipulations, possibly reflecting a task-related top-down bias in performing blocked cyclic naming. These results provide novel insights into the neural correlates of blocked cyclic naming and further contribute to the understanding of spoken word production.
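
    For readers unfamiliar with the paradigm: in blocked cyclic naming, a small set of pictures is named over repeated cycles, either in homogeneous blocks (all items share a semantic category, or, in the phonological variant, a word onset) or in heterogeneous blocks (one item per category). The sketch below shows how such blocks might be assembled from an item matrix; the items and helper names are illustrative, not the authors' materials.

    import random

    # Illustrative item matrix: rows = semantic categories. For the
    # phonological manipulation, rows would instead share a word onset.
    items = {
        "animals":   ["dog", "cat", "horse", "sheep"],
        "furniture": ["bed", "desk", "chair", "couch"],
        "tools":     ["saw", "drill", "hammer", "wrench"],
        "fruit":     ["pear", "plum", "apple", "grape"],
    }

    def homogeneous_blocks(matrix):
        """One block per row: all items in a block are related."""
        return [list(row) for row in matrix.values()]

    def heterogeneous_blocks(matrix):
        """One block per column: one item from each category."""
        return [list(col) for col in zip(*matrix.values())]

    def cycles(block, n_cycles=4, seed=0):
        """Repeat a block over n_cycles naming cycles, reshuffling the
        item order within every cycle."""
        rng = random.Random(seed)
        trials = []
        for _ in range(n_cycles):
            order = block[:]
            rng.shuffle(order)
            trials.extend(order)
        return trials

    print(cycles(homogeneous_blocks(items)[0]))
    print(cycles(heterogeneous_blocks(items)[0]))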
