Barthel, M. (2020). Speech planning in dialogue: Psycholinguistic studies of the timing of turn taking. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Bosker, H. R., & Cooke, M. (2020). Enhanced amplitude modulations contribute to the Lombard intelligibility benefit: Evidence from the Nijmegen Corpus of Lombard Speech. The Journal of the Acoustical Society of America, 147: 721. doi:10.1121/10.0000646.
Abstract
Speakers adjust their voice when talking in noise, which is known as Lombard speech. These acoustic adjustments facilitate speech comprehension in noise relative to plain speech (i.e., speech produced in quiet). However, exactly which characteristics of Lombard speech drive this intelligibility benefit in noise remains unclear. This study assessed the contribution of enhanced amplitude modulations to the Lombard speech intelligibility benefit by demonstrating that (1) native speakers of Dutch in the Nijmegen Corpus of Lombard Speech (NiCLS) produce more pronounced amplitude modulations in noise vs. in quiet; (2) more enhanced amplitude modulations correlate positively with intelligibility in a speech-in-noise perception experiment; (3) transplanting the amplitude modulations from Lombard speech onto plain speech leads to an intelligibility improvement, suggesting that enhanced amplitude modulations in Lombard speech contribute towards intelligibility in noise. Results are discussed in light of recent neurobiological models of speech perception with reference to neural oscillators phase-locking to the amplitude modulations in speech, guiding the processing of speech. -
Bosker, H. R., Peeters, D., & Holler, J. (2020). How visual cues to speech rate influence speech perception. Quarterly Journal of Experimental Psychology, 73(10), 1523-1536. doi:10.1177/1747021820914564.
Abstract
Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two ‘Go Fish’-like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants’ target categorization responses. These findings contribute to a better understanding of how what we see influences what we hear. -
Bosker, H. R., Sjerps, M. J., & Reinisch, E. (2020). Temporal contrast effects in human speech perception are immune to selective attention. Scientific Reports, 10: 5607. doi:10.1038/s41598-020-62613-8.
Abstract
Two fundamental properties of perception are selective attention and perceptual contrast, but how these two processes interact remains unknown. Does an attended stimulus history exert a larger contrastive influence on the perception of a following target than unattended stimuli? Dutch listeners categorized target sounds with a reduced prefix “ge-” marking tense (e.g., ambiguous between gegaan-gaan “gone-go”). In ‘single talker’ Experiments 1–2, participants perceived the reduced syllable (reporting gegaan) when the target was heard after a fast sentence, but not after a slow sentence (reporting gaan). In ‘selective attention’ Experiments 3–5, participants listened to two simultaneous sentences from two different talkers, followed by the same target sounds, with instructions to attend only one of the two talkers. Critically, the speech rates of attended and unattended talkers were found to equally influence target perception – even when participants could watch the attended talker speak. In fact, participants’ target perception in ‘selective attention’ Experiments 3–5 did not differ from participants who were explicitly instructed to divide their attention equally across the two talkers (Experiment 6). This suggests that contrast effects of speech rate are immune to selective attention, largely operating prior to attentional stream segregation in the auditory processing hierarchy.
Additional information
Supplementary information -
Bosker, H. R., Sjerps, M. J., & Reinisch, E. (2020). Spectral contrast effects are modulated by selective attention in ‘cocktail party’ settings. Attention, Perception & Psychophysics, 82, 1318-1332. doi:10.3758/s13414-019-01824-2.
Abstract
Speech sounds are perceived relative to spectral properties of surrounding speech. For instance, target words ambiguous between /bɪt/ (with low F1) and /bɛt/ (with high F1) are more likely to be perceived as “bet” after a ‘low F1’ sentence, but as “bit” after a ‘high F1’ sentence. However, it is unclear how these spectral contrast effects (SCEs) operate in multi-talker listening conditions. Recently, Feng and Oxenham [(2018b). J.Exp.Psychol.-Hum.Percept.Perform. 44(9), 1447–1457] reported that selective attention affected SCEs to a small degree, using two simultaneously presented sentences produced by a single talker. The present study assessed the role of selective attention in more naturalistic ‘cocktail party’ settings, with 200 lexically unique sentences, 20 target words, and different talkers. Results indicate that selective attention to one talker in one ear (while ignoring another talker in the other ear) modulates SCEs in such a way that only the spectral properties of the attended talker influence target perception. However, SCEs were much smaller in multi-talker settings (Experiment 2) than those in single-talker settings (Experiment 1). Therefore, the influence of SCEs on speech comprehension in more naturalistic settings (i.e., with competing talkers) may be smaller than estimated based on studies without competing talkers.
Additional information
Supplementary material -
Brehm, L., Hussey, E., & Christianson, K. (2020). The role of word frequency and morpho-orthography in agreement processing. Language, Cognition and Neuroscience, 35(1), 58-77. doi:10.1080/23273798.2019.1631456.
Abstract
Agreement attraction in comprehension (when an ungrammatical verb is read quickly if preceded by a feature-matching local noun) is well described by a cue-based retrieval framework. This suggests a role for lexical retrieval in attraction. To examine this, we manipulated two probabilistic factors known to affect lexical retrieval: local noun word frequency and morpho-orthography (agreement morphology realised with or without –s endings) in a self-paced reading study. Noun number and word frequency affected noun and verb region reading times, with higher-frequency words not eliciting attraction. Morpho-orthography impacted verb processing but not attraction: atypical plurals led to slower verb reading times regardless of verb number. Exploratory individual difference analyses further underscore the importance of lexical retrieval dynamics in sentence processing. This provides evidence that agreement operates via a cue-based retrieval mechanism over lexical representations that vary in their strength and association to number features.
Additional information
Supplemental material -
Brysbaert, M., Sui, L., Dirix, N., & Hintz, F. (2020). Dutch Author Recognition Test. Journal of Cognition, 3(1): 6. doi:10.5334/joc.95.
Abstract
Book reading shows large individual variability and correlates with better language ability and more empathy. This makes reading exposure an interesting variable to study. Research in English suggests that an author recognition test is the most reliable objective assessment of reading frequency. In this article, we describe the efforts we made to build and test a Dutch author recognition test (DART for older participants and DART_R for younger participants). Our data show that the test is reliable and valid, both in the Netherlands and in Belgium (split-half reliability over .9 with university students, significant correlations with language abilities) and can be used with a young, non-university population. The test is free to use for research purposes. -
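For readers unfamiliar with the reliability statistic cited in this abstract, the sketch below shows one common way to compute split-half reliability with the Spearman-Brown correction. It is illustrative only, not the authors' analysis; the participant and item counts, variable names, and random scores are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1 = author recognised, 0 = not recognised,
# for 100 participants x 60 test items (sizes and scores are invented).
scores = rng.integers(0, 2, size=(100, 60))

# Split the items into two halves and sum each participant's score per half.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# Correlation between the two half-test scores; with real author-recognition
# data the halves correlate highly, yielding the reported values above .9.
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction estimates full-test reliability from the half-test correlation.
split_half_reliability = 2 * r_half / (1 + r_half)
print(f"split-half reliability (Spearman-Brown corrected): {split_half_reliability:.2f}")
```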
Chan, R. W., Alday, P. M., Zou-Williams, L., Lushington, K., Schlesewsky, M., Bornkessel-Schlesewsky, I., & Immink, M. A. (2020). Focused-attention meditation increases cognitive control during motor sequence performance: Evidence from the N2 cortical evoked potential. Behavioural Brain Research, 384: 112536. doi:10.1016/j.bbr.2020.112536.
Abstract
Previous work found that single-session focused attention meditation (FAM) enhanced motor sequence learning through increased cognitive control as a mechanistic action, although electrophysiological correlates of sequence learning performance following FAM were not investigated. We measured the persistent frontal N2 event-related potential (ERP) that is closely related to cognitive control processes and its ability to predict behavioural measures. Twenty-nine participants were randomised to one of three conditions reflecting the level of FAM experienced prior to a serial reaction time task (SRTT): 21 sessions of FAM (FAM21, N = 12), a single FAM session (FAM1, N = 9) or no preceding FAM control (Control, N = 8). Continuous 64-channel EEG was recorded during the SRTT and N2 amplitudes for correct trials were extracted. Component amplitude, regions of interest, and behavioural outcomes were compared using mixed effects regression models between groups. FAM21 exhibited faster reaction time performances in the majority of the learning blocks compared to FAM1 and Control. FAM21 also demonstrated a significantly more pronounced N2 over the majority of anterior and central regions of interest during the SRTT compared to the other groups. When N2 amplitudes were modelled against general learning performance, FAM21 showed the greatest rate of amplitude decline over anterior and central regions. The combined results suggest that FAM training provided greater cognitive control enhancement for improved general performance, and less pronounced effects for sequence-specific learning performance compared to the other groups. Importantly, FAM training facilitates dynamic modulation of cognitive control: lower levels of general learning performance were supported by greater levels of activation, whilst higher levels of general learning exhibited less activation. -
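As a rough illustration of the mixed-effects regression approach mentioned in this abstract, a minimal Python sketch with simulated reaction times is given below. This is not the authors' analysis: their models were fit to N2 amplitudes and behavioural outcomes with an unspecified random-effects structure, and all variable names and numbers here are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated reaction times (ms) for 29 participants in 3 groups across 8 learning
# blocks; only the modelling call is of interest, the numbers are invented.
n_subjects, n_blocks = 29, 8
data = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_blocks),
    "group": np.repeat(rng.choice(["FAM21", "FAM1", "Control"], n_subjects), n_blocks),
    "block": np.tile(np.arange(n_blocks), n_subjects),
})
data["rt"] = 600 - 10 * data["block"] + rng.normal(0, 50, len(data))

# Mixed-effects regression: fixed effects of group, block, and their interaction,
# with a random intercept per subject.
model = smf.mixedlm("rt ~ group * block", data, groups=data["subject"])
result = model.fit()
print(result.summary())
```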
Cross, Z. R., Santamaria, A., Corcoran, A. W., Chatburn, A., Alday, P. M., Coussens, S., & Kohler, M. J. (2020). Individual alpha frequency modulates sleep-related emotional memory consolidation. Neuropsychologia, 148: 107660. doi:10.1016/j.neuropsychologia.2020.107660.
Abstract
Alpha-band oscillatory activity is involved in modulating memory and attention. However, few studies have investigated individual differences in oscillatory activity during the encoding of emotional memory, particularly in sleep paradigms where sleep is thought to play an active role in memory consolidation. The current study aimed to address the question of whether individual alpha frequency (IAF) modulates the consolidation of declarative memory across periods of sleep and wake. 22 participants aged 18–41 years (mean age = 25.77) viewed 120 emotionally valenced images (positive, negative, neutral) and completed a baseline memory task before a 2hr afternoon sleep opportunity and an equivalent period of wake. Following the sleep and wake conditions, participants were required to distinguish between 120 learned (target) images and 120 new (distractor) images. This method allowed us to delineate the role of different oscillatory components of sleep and wake states in the emotional modulation of memory. Linear mixed-effects models revealed interactions between IAF, rapid eye movement sleep theta power, and slow-wave sleep slow oscillatory density on memory outcomes. These results highlight the importance of individual factors in the EEG in modulating oscillatory-related memory consolidation and subsequent behavioural outcomes and test predictions proposed by models of sleep-based memory consolidation.
Additional information
Supplementary data -
Dempsey, J., & Brehm, L. (2020). Can propositional biases modulate syntactic repair processes? Insights from preceding comprehension questions. Journal of Cognitive Psychology, 32(5-6), 543-552. doi:10.1080/20445911.2020.1803884.
Abstract
There is an ongoing debate about whether discourse biases can constrain sentence processing. Previous work has shown comprehension question accuracy to decrease for temporarily ambiguous sentences preceded by a context biasing towards an initial misinterpretation, suggesting a role of context for modulating comprehension. However, this creates limited modulation of reading times at the disambiguating word, suggesting initial syntactic processing may be unaffected by context [Christianson & Luke, 2011. Context strengthens initial misinterpretations of text. Scientific Studies of Reading, 15(2), 136–166]. The current experiments examine whether propositional and structural content from preceding comprehension questions can cue readers to expect certain structures in temporarily ambiguous garden-path sentences. The central finding is that syntactic repair processes remain unaffected while reading times in other regions are modulated by preceding questions. This suggests that reading strategies can be superficially influenced by preceding comprehension questions without impacting the fidelity of ultimate (mis)representations.
Additional information
Supplementary material -
Ergin, R., Raviv, L., Senghas, A., Padden, C., & Sandler, W. (2020). Community structure affects convergence on uniform word orders: Evidence from emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 84-86). Nijmegen: The Evolution of Language Conferences. -
Favier, S. (2020). Individual differences in syntactic knowledge and processing: Exploring the role of literacy experience. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
González Alonso, J., Alemán Bañón, J., DeLuca, V., Miller, D., Pereira Soares, S. M., Puig-Mayenco, E., Slaats, S., & Rothman, J. (2020). Event related potentials at initial exposure in third language acquisition: Implications from an artificial mini-grammar study. Journal of Neurolinguistics, 56: 100939. doi:10.1016/j.jneuroling.2020.100939.
Abstract
The present article examines the proposal that typology is a major factor guiding transfer selectivity in L3/Ln acquisition. We tested first exposure in L3/Ln using two artificial languages (ALs) lexically based in English and Spanish, focusing on gender agreement between determiners and nouns, and between nouns and adjectives. 50 L1 Spanish-L2 English speakers took part in the experiment. After receiving implicit training in one of the ALs (Mini-Spanish, N = 26; Mini-English, N = 24), gender violations elicited a fronto-lateral negativity in Mini-English in the earliest time window (200–500 ms), although this was not followed by any other differences in subsequent periods. This effect was highly localized, surfacing only in electrodes of the right-anterior region. In contrast, gender violations in Mini-Spanish elicited a broadly distributed positivity in the 300–600 ms time window. While we do not find typical indices of grammatical processing such as the P600 component, we believe that the between-groups differential appearance of the positivity for gender violations in the 300–600 ms time window reflects differential allocation of attentional resources as a function of the ALs’ lexical similarity to English or Spanish. We take these differences in attention to be precursors of the processes involved in transfer source selection in L3/Ln. -
Hashemzadeh, M., Kaufeld, G., White, M., Martin, A. E., & Fyshe, A. (2020). From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli? In T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 645-655). Association for Computational Linguistics.
Abstract
The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate to brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly. -
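The general logic described in this abstract (relating model representations to brain recordings) can be illustrated with a toy example. This is not the authors' pipeline: the LSTM below is untrained, the "brain" data and all array sizes are simulated placeholders, and the linear mapping stands in for whatever decoding method the paper actually used.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: 50 "words" with 32-dimensional embeddings (invented sizes),
# and simulated "brain" responses with 10 channels per word.
embeddings = torch.randn(1, 50, 32)           # (batch, sequence, embedding dim)
brain_data = np.random.randn(50, 10)          # (words, channels); placeholder for real recordings

# A small untrained LSTM; a real study would use a trained language model.
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
with torch.no_grad():
    hidden_states, _ = lstm(embeddings)       # (1, 50, 64)
representations = hidden_states.squeeze(0).numpy()

# Relate model representations to brain activity with a simple least-squares mapping,
# then check how well held-out words are predicted.
train, test = slice(0, 40), slice(40, 50)
weights, *_ = np.linalg.lstsq(representations[train], brain_data[train], rcond=None)
predicted = representations[test] @ weights
corr = np.corrcoef(predicted.ravel(), brain_data[test].ravel())[0, 1]
print(f"correlation between predicted and observed activity: {corr:.2f}")
```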
Hintz, F., Meyer, A. S., & Huettig, F. (2020). Visual context constrains language-mediated anticipatory eye movements. Quarterly Journal of Experimental Psychology, 73(3), 458-467. doi:10.1177/1747021819881615.
Abstract
Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: The target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 seconds before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 second after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.
Additional information
Supplemental Material -
Hintz, F., Meyer, A. S., & Huettig, F. (2020). Activating words beyond the unfolding sentence: Contributions of event simulation and word associations to discourse reading. Neuropsychologia, 141: 107409. doi:10.1016/j.neuropsychologia.2020.107409.
Abstract
Previous studies have shown that during comprehension readers activate words beyond the unfolding sentence. An open question concerns the mechanisms underlying this behavior. One proposal is that readers mentally simulate the described event and activate related words that might be referred to as the discourse further unfolds. Another proposal is that activation between words spreads in an automatic, associative fashion. The empirical support for these proposals is mixed. Therefore, theoretical accounts differ with regard to how much weight they place on the contributions of these sources to sentence comprehension. In the present study, we attempted to assess the contributions of event simulation and lexical associations to discourse reading, using event-related brain potentials (ERPs). Participants read target words, which were preceded by associatively related words either appearing in a coherent discourse event (Experiment 1) or in sentences that did not form a coherent discourse event (Experiment 2). Contextually unexpected target words that were associatively related to the described events elicited a reduced N400 amplitude compared to contextually unexpected target words that were unrelated to the events (Experiment 1). In Experiment 2, a similar but reduced effect was observed. These findings support the notion that during discourse reading event simulation and simple word associations jointly contribute to language comprehension by activating words that are beyond contextually congruent sentence continuations. -
Hintz*, F., Jongman*, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). Shared lexical access processes in speaking and listening? An individual differences study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1048-1063. doi:10.1037/xlm0000768.
Abstract
* indicates joint first authorship. Lexical access is a core component of word processing. In order to produce or comprehend a word, language users must access word forms in their mental lexicon. However, despite its involvement in both tasks, previous research has often studied lexical access in either production or comprehension alone. Therefore, it is unknown to which extent lexical access processes are shared across both tasks. Picture naming and auditory lexical decision are considered good tools for studying lexical access. Both of them are speeded tasks. Given these commonalities, another open question concerns the involvement of general cognitive abilities (e.g., processing speed) in both linguistic tasks. In the present study, we addressed these questions. We tested a large group of young adults enrolled in academic and vocational courses. Participants completed picture naming and auditory lexical decision tasks as well as a battery of tests assessing non-verbal processing speed, vocabulary, and non-verbal intelligence. Our results suggest that the lexical access processes involved in picture naming and lexical decision are related but less closely than one might have thought. Moreover, reaction times in picture naming and lexical decision depended at least as much on general processing speed as on domain-specific linguistic processes (i.e., lexical access processes). -
Hintz, F., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). A behavioural dataset for studying individual differences in language skills. Scientific Data, 7: 429. doi:10.1038/s41597-020-00758-x.
Abstract
This resource contains data from 112 Dutch adults (18–29 years of age) who completed the Individual Differences in Language Skills test battery that included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting resulting in high quality data due to tight monitoring of the experimental protocol and to the use of software and hardware that were optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills. -
Huettig, F., Guerra, E., & Helo, A. (2020). Towards understanding the task dependency of embodied language processing: The influence of colour during language-vision interactions. Journal of Cognition, 3(1): 41. doi:10.5334/joc.135.
Abstract
A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘...spinach...’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g., green) as the spoken target word and three distractors. When hearing spinach participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.
Additional information
Data files and script -
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2020). How in-group bias influences the level of detail of speaker-specific information encoded in novel lexical representations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(5), 894-906. doi:10.1037/xlm0000765.
Abstract
An important issue in theories of word learning is how abstract or context-specific representations of novel words are. One aspect of this broad issue is how well learners maintain information about the source of novel words. We investigated whether listeners’ source memory was better for words learned from members of their in-group (students of their own university) than it is for words learned from members of an out-group (students from another institution). In the first session, participants saw 6 faces and learned which of the depicted students attended either their own or a different university. In the second session, they learned competing labels (e.g., citrus-peller and citrus-schiller; in English, lemon peeler and lemon stripper) for novel gadgets, produced by the in-group and out-group speakers. Participants were then tested for source memory of these labels and for the strength of their in-group bias, that is, for how much they preferentially process in-group over out-group information. Analyses of source memory accuracy demonstrated an interaction between speaker group membership status and participants’ in-group bias: Stronger in-group bias was associated with less accurate source memory for out-group labels than in-group labels. These results add to the growing body of evidence on the importance of social variables for adult word learning. -
Iacozza, S. (2020). Exploring social biases in language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Jongman, S. R., Roelofs, A., & Lewis, A. G. (2020). Attention for speaking: Prestimulus motor-cortical alpha power predicts picture naming latencies. Journal of Cognitive Neuroscience, 32(5), 747-761. doi:10.1162/jocn_a_01513.
Abstract
There is a range of variability in the speed with which a single speaker will produce the same word from one instance to another. Individual differences studies have shown that the speed of production and the ability to maintain attention are related. This study investigated whether fluctuations in production latencies can be explained by spontaneous fluctuations in speakers' attention just prior to initiating speech planning. A relationship between individuals' incidental attentional state and response performance is well attested in visual perception, with lower prestimulus alpha power associated with faster manual responses. Alpha is thought to have an inhibitory function: Low alpha power suggests less inhibition of a specific brain region, whereas high alpha power suggests more inhibition. Does the same relationship hold for cognitively demanding tasks such as word production? In this study, participants named pictures while EEG was recorded, with alpha power taken to index an individual's momentary attentional state. Participants' level of alpha power just prior to picture presentation and just prior to speech onset predicted subsequent naming latencies. Specifically, higher alpha power in the motor system resulted in faster speech initiation. Our results suggest that one index of a lapse of attention during speaking is reduced inhibition of motor-cortical regions: Decreased motor-cortical alpha power indicates reduced inhibition of this area while early stages of production planning unfold, which leads to increased interference from motor-cortical signals and longer naming latencies. This study shows that the language production system is not impermeable to the influence of attention. -
Jongman, S. R., Piai, V., & Meyer, A. S. (2020). Planning for language production: The electrophysiological signature of attention to the cue to speak. Language, Cognition and Neuroscience, 35(7), 915-932. doi:10.1080/23273798.2019.1690153.
Abstract
In conversation, speech planning can overlap with listening to the interlocutor. It has been postulated that once there is enough information to formulate a response, planning is initiated and the response is maintained in working memory. Concurrently, the auditory input is monitored for the turn end such that responses can be launched promptly. In three EEG experiments, we aimed to identify the neural signature of phonological planning and monitoring by comparing delayed responding to not responding (reading aloud, repetition and lexical decision). These comparisons consistently resulted in a sustained positivity and beta power reduction over posterior regions. We argue that these effects reflect attention to the sequence end. Phonological planning and maintenance were not detected in the neural signature even though it is highly likely these were taking place. This suggests that EEG must be used cautiously to identify response planning when the neural signal is overridden by attention effects. -
Kaufeld, G., Naumann, W., Meyer, A. S., Bosker, H. R., & Martin, A. E. (2020). Contextual speech rate influences morphosyntactic prediction and integration. Language, Cognition and Neuroscience, 35(7), 933-948. doi:10.1080/23273798.2019.1701691.
Abstract
Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these demonstrations for theories of language processing. -
Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.
Abstract
During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects. -
Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. (2020). Linguistic structure and meaning organize neural oscillations into a content-specific hierarchy. The Journal of Neuroscience, 40(49), 9467-9475. doi:10.1523/JNEUROSCI.0302-20.2020.
Abstract
Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Doelling et al., 2014; Zoefel & VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally-spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical timescale (1.9-2.8 Hz), suggesting that the delta-band is modulated by lexically-driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information. -
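To illustrate the kind of mutual information analysis referred to in this abstract, the sketch below computes a crude histogram-based MI estimate between band-limited envelopes of two signals at one of the reported timescales. This is not the authors' method (their MI estimator and the exact speech/EEG features may differ), and all signals here are simulated placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 100                                    # assumed sampling rate (Hz)
eeg = rng.standard_normal(fs * 60)          # placeholder EEG channel, 60 s
speech = rng.standard_normal(fs * 60)       # placeholder speech waveform

def band_envelope(signal, low, high, fs):
    """Band-pass filter a signal and return its amplitude envelope."""
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal)))

def mutual_information(x, y, bins=16):
    """Histogram-based MI estimate in bits (a crude estimator, for illustration only)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero]))

# MI between speech and EEG envelopes at the lexical timescale (1.9-2.8 Hz).
mi_lexical = mutual_information(band_envelope(speech, 1.9, 2.8, fs),
                                band_envelope(eeg, 1.9, 2.8, fs))
print(f"lexical-band MI: {mi_lexical:.3f} bits")
```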
Kim, N., Brehm, L., Sturt, P., & Yoshida, M. (2020). How long can you hold the filler: Maintenance and retrieval. Language, Cognition and Neuroscience, 35(1), 17-42. doi:10.1080/23273798.2019.1626456.
Abstract
This study attempts to reveal the mechanisms behind the online formation of Wh-Filler-Gap Dependencies (WhFGD). Specifically, we aim to uncover the way in which maintenance and retrieval work in WhFGD processing, by paying special attention to the information that is retrieved when the gap is recognized. We use the agreement attraction phenomenon (Wagers, M. W., Lau, E. F., & Phillips, C. (2009). Agreement attraction in comprehension: Representations and processes. Journal of Memory and Language, 61(2), 206-237) as a probe. The first and second experiments examined the type of information that is maintained and how maintenance is motivated, investigating the retrieved information at the gap for reactivated fillers and definite NPs. The third experiment examined the role of the retrieval, comparing reactivated and active fillers. We contend that the information being accessed reflects the extent to which the filler is maintained, where the reader is able to access fine-grained information including category information as well as a representation of both the head and the modifier at the verb.
Additional information
Supplemental material -
Knudsen, B., Creemers, A., & Meyer, A. S. (2020). Forgotten little words: How backchannels and particles may facilitate speech planning in conversation? Frontiers in Psychology, 11: 593671. doi:10.3389/fpsyg.2020.593671.
Abstract
In everyday conversation, turns often follow each other immediately or overlap in time. It has been proposed that speakers achieve this tight temporal coordination between their turns by engaging in linguistic dual-tasking, i.e., by beginning to plan their utterance during the preceding turn. This raises the question of how speakers manage to co-ordinate speech planning and listening with each other. Experimental work addressing this issue has mostly concerned the capacity demands and interference arising when speakers retrieve some content words while listening to others. However, many contributions to conversations are not content words, but backchannels, such as “hm”. Backchannels do not provide much conceptual content and are therefore easy to plan and respond to. To estimate how much they might facilitate speech planning in conversation, we determined their frequency in a Dutch and a German corpus of conversational speech. We found that 19% of the contributions in the Dutch corpus, and 16% of contributions in the German corpus were backchannels. In addition, many turns began with fillers or particles, most often translation equivalents of “yes” or “no,” which are likewise easy to plan. We proposed that, to generate comprehensive models of using language in conversation, psycholinguists should study not only the generation and processing of content words, as is commonly done, but also consider backchannels, fillers, and particles. -
Kösem, A., Bosker, H. R., Jensen, O., Hagoort, P., & Riecke, L. (2020). Biasing the perception of spoken words with transcranial alternating current stimulation. Journal of Cognitive Neuroscience, 32(8), 1428-1437. doi:10.1162/jocn_a_01579.
Abstract
Recent neuroimaging evidence suggests that the frequency of entrained oscillations in auditory cortices influences the perceived duration of speech segments, impacting word perception (Kösem et al. 2018). We further tested the causal influence of neural entrainment frequency during speech processing, by manipulating entrainment with continuous transcranial alternating current stimulation (tACS) at distinct oscillatory frequencies (3 Hz and 5.5 Hz) above the auditory cortices. Dutch participants listened to speech and were asked to report their percept of a target Dutch word, which contained a vowel with an ambiguous duration. Target words were presented either in isolation (first experiment) or at the end of spoken sentences (second experiment). We predicted that the tACS frequency would influence neural entrainment and therewith how speech is perceptually sampled, leading to a perceptual over- or underestimation of the vowel’s duration. Whereas results from Experiment 1 did not confirm this prediction, results from Experiment 2 suggested a small effect of tACS frequency on target word perception: Faster tACS led to more long-vowel word percepts, in line with the previous neuroimaging findings. Importantly, the difference in word perception induced by the different tACS frequencies was significantly larger in Experiment 2 than in Experiment 1, suggesting that the impact of tACS is dependent on the sensory context. tACS may have a stronger effect on spoken word perception when the words are presented in continuous speech as compared to when they are isolated, potentially because prior (stimulus-induced) entrainment of brain oscillations might be a prerequisite for tACS to be effective.
Additional information
Data availability -
Lei, L., Raviv, L., & Alday, P. M. (2020). Using spatial visualizations and real-world social networks to understand language evolution and change. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 252-254). Nijmegen: The Evolution of Language Conferences. -
Lev-Ari, S., & Sebanz, N. (2020). Interacting with multiple partners improves communication skills. Cognitive Science, 44(4): e12836. doi:10.1111/cogs.12836.
Abstract
Successful communication is important for both society and people’s personal life. Here we show that people can improve their communication skills by interacting with multiple others, and that this improvement seems to come about by a greater tendency to take the addressee’s perspective when there are multiple partners. In Experiment 1, during a training phase, participants described figures to a new partner in each round or to the same partner in all rounds. Then all participants interacted with a new partner and their recordings from that round were presented to naïve listeners. Participants who had interacted with multiple partners during training were better understood. This occurred despite the fact that the partners had not provided the participants with any input other than feedback on comprehension during the interaction. In Experiment 2, participants were asked to provide descriptions to a different future participant in each round or to the same future participant in all rounds. Next they performed a surprise memory test designed to tap memory for global details, in line with the addressee’s perspective. Those who had provided descriptions for multiple future participants performed better. These results indicate that people can improve their communication skills by interacting with multiple people, and that this advantage might be due to a greater tendency to take the addressee’s perspective in such cases. Our findings thus show how the social environment can influence our communication skills by shaping our own behavior during interaction in a manner that promotes the development of our communication skills. -
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2020). Eye-tracking the time course of distal and global speech rate effects. Journal of Experimental Psychology: Human Perception and Performance, 46(10), 1148-1163. doi:10.1037/xhp0000838.
Abstract
To comprehend speech sounds, listeners tune in to speech rate information in the proximal (immediately adjacent), distal (non-adjacent), and global context (further removed preceding and following sentences). Effects of global contextual speech rate cues on speech perception have been shown to follow constraints not found for proximal and distal speech rate. Therefore, listeners may process such global cues at distinct time points during word recognition. We conducted a printed-word eye-tracking experiment to compare the time courses of distal and global rate effects. Results indicated that the distal rate effect emerged immediately after target sound presentation, in line with a general-auditory account. The global rate effect, however, arose more than 200 ms later than the distal rate effect, indicating that distal and global context effects involve distinct processing mechanisms. Results are interpreted in a two-stage model of acoustic context effects. This model posits that distal context effects involve very early perceptual processes, while global context effects arise at a later stage, involving cognitive adjustments conditioned by higher-level information. -
Montero-Melis, G., Isaksson, P., Van Paridon, J., & Ostarek, M. (2020). Does using a foreign language reduce mental imagery? Cognition, 196: 104134. doi:10.1016/j.cognition.2019.104134.
Abstract
In a recent article, Hayakawa and Keysar (2018) propose that mental imagery is less vivid when evoked in a foreign than in a native language. The authors argue that reduced mental imagery could even account for moral foreign language effects, whereby moral choices become more utilitarian when made in a foreign language. Here we demonstrate that Hayakawa and Keysar's (2018) key results are better explained by reduced language comprehension in a foreign language than by less vivid imagery. We argue that the paradigm used in Hayakawa and Keysar (2018) does not provide a satisfactory test of reduced imagery and we discuss an alternative paradigm based on recent experimental developments.
Additional information
Supplementary data and scripts -
Nieuwland, M. S., Barr, D. J., Bartolozzi, F., Busch-Moreno, S., Darley, E., Donaldson, D. I., Ferguson, H. J., Fu, X., Heyselaar, E., Huettig, F., Husband, E. M., Ito, A., Kazanina, N., Kogan, V., Kohút, Z., Kulakova, E., Mézière, D., Politzer-Ahles, S., Rousselet, G., Rueschemeyer, S.-A., Segaert, K., Tuomainen, J., & Von Grebmer Zu Wolfsthurn, S. (2020). Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20180522. doi:10.1098/rstb.2018.0522.
Abstract
Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (N = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain’s electrophysiological index of semantic processing. A spatiotemporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate and integrate word meaning with context into a sentence-level meaning. -
Raviv, L. (2020). Language and society: How social pressures shape grammatical structure. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). Network structure and the cultural evolution of linguistic structure: A group communication experiment. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 359-361). Nijmegen: The Evolution of Language Conferences. -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). The role of social network structure in the emergence of linguistic structure. Cognitive Science, 44(8): e12876. doi:10.1111/cogs.12876.
Abstract
Social network structure has been argued to shape the structure of languages, as well as affect the spread of innovations and the formation of conventions in the community. Specifically, theoretical and computational models of language change predict that sparsely connected communities develop more systematic languages, while tightly knit communities can maintain high levels of linguistic complexity and variability. However, the role of social network structure in the cultural evolution of languages has never been tested experimentally. Here, we present results from a behavioral group communication study, in which we examined the formation of new languages created in the lab by micro‐societies that varied in their network structure. We contrasted three types of social networks: fully connected, small‐world, and scale‐free. We examined the artificial languages created by these different networks with respect to their linguistic structure, communicative success, stability, and convergence. Results did not reveal any effect of network structure for any measure, with all languages becoming similarly more systematic, more accurate, more stable, and more shared over time. At the same time, small‐world networks showed the greatest variation in their convergence, stabilization, and emerging structure patterns, indicating that network structure can influence the community's susceptibility to random linguistic changes (i.e., drift). -
Rodd, J., Bosker, H. R., Ernestus, M., Alday, P. M., Meyer, A. S., & Ten Bosch, L. (2020). Control of speaking rate is achieved by switching between qualitatively distinct cognitive ‘gaits’: Evidence from simulation. Psychological Review, 127(2), 281-304. doi:10.1037/rev0000172.
Abstract
That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. Consider this analogy: When walking, speed can be continuously increased, within limits, but to speed up further, humans must run. Are there multiple qualitatively distinct speech “gaits” that resemble walking and running? Or is control achieved by continuous modulation of a single gait? This study investigates these possibilities through simulations of a new connectionist computational model of the cognitive process of speech production, EPONA, that borrows from Dell, Burger, and Svec’s (1997) model. The model has parameters that can be adjusted to fit the temporal characteristics of speech at different speaking rates. We trained the model on a corpus of disyllabic Dutch words produced at different speaking rates. During training, different clusters of parameter values (regimes) were identified for different speaking rates. In a 1-gait system, the regimes used to achieve fast and slow speech are qualitatively similar, but quantitatively different. In a multiple gait system, there is no linear relationship between the parameter settings associated with each gait, resulting in an abrupt shift in parameter values to move from speaking slowly to speaking fast. After training, the model achieved good fits in all three speaking rates. The parameter settings associated with each speaking rate were not linearly related, suggesting the presence of cognitive gaits. Thus, we provide the first computationally explicit account of the ability to modulate the speech production system to achieve different speaking styles.
Additional information
Supplemental material -
Rodd, J. (2020). How speaking fast is like running: Modelling control of speaking rate. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Shao, Z., & Rommers, J. (2020). How a question context aids word production: Evidence from the picture–word interference paradigm. Quarterly Journal of Experimental Psychology, 73(2), 165-173. doi:10.1177/1747021819882911.
Abstract
Difficulties in saying the right word at the right time arise at least in part because multiple response candidates are simultaneously activated in the speaker’s mind. The word selection process has been simulated using the picture–word interference task, in which participants name pictures while ignoring a superimposed written distractor word. However, words are usually produced in context, in the service of achieving a communicative goal. Two experiments addressed the questions whether context influences word production, and if so, how. We embedded the picture–word interference task in a dialogue-like setting, in which participants heard a question and named a picture as an answer to the question while ignoring a superimposed distractor word. The conversational context was either constraining or nonconstraining towards the answer. Manipulating the relationship between the picture name and the distractor, we focused on two core processes of word production: retrieval of semantic representations (Experiment 1) and phonological encoding (Experiment 2). The results of both experiments showed that naming reaction times (RTs) were shorter when preceded by constraining contexts as compared with nonconstraining contexts. Critically, constraining contexts decreased the effect of semantically related distractors but not the effect of phonologically related distractors. This suggests that conversational contexts can help speakers with aspects of the meaning of to-be-produced words, but phonological encoding processes still need to be performed as usual. -
Sjerps, M. J., Decuyper, C., & Meyer, A. S. (2020). Initiation of utterance planning in response to pre-recorded and “live” utterances. Quarterly Journal of Experimental Psychology, 73(3), 357-374. doi:10.1177/1747021819881265.
Abstract
In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants’ timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants’ speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings. -
Takashima, A., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2020). Speaking in the brain: The interaction between words and syntax in sentence production. Journal of Cognitive Neuroscience, 32(8), 1466-1483. doi:10.1162/jocn_a_01563.
Abstract
This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences that differed in verb argument structures (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudoverbs). The experiment consisted of 30 mini-blocks of six sentences each. Each mini-block started with an example for the type of sentence to be produced in that block. On each trial in the mini-blocks, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to sentences with pseudoverbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus, analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left posterior middle temporal gyrus/angular gyrus area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs leads to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical–syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been mainly based on single-word production, comprehension, and language processing in aphasia. -
Terband, H., Rodd, J., & Maas, E. (2020). Testing hypotheses about the underlying deficit of Apraxia of Speech (AOS) through computational neural modelling with the DIVA model. International Journal of Speech-Language Pathology, 22(4), 475-486. doi:10.1080/17549507.2019.1669711.
Abstract
Purpose: A recent behavioural experiment featuring a noise masking paradigm suggests that Apraxia of Speech (AOS) reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts. The present study set out to validate the interpretation of AOS as a possible feedforward impairment using computational neural modelling with the DIVA (Directions Into Velocities of Articulators) model.
Method: In a series of computational simulations with the DIVA model featuring a noise-masking paradigm mimicking the behavioural experiment, we investigated the effect of a feedforward, feedback, feedforward + feedback, and an upper motor neuron dysarthria impairment on average vowel spacing and dispersion in the production of six /bVt/ speech targets.
Result: The simulation results indicate that the output of the model with the simulated feedforward deficit resembled the group findings for the human speakers with AOS best.
Conclusion: These results provide support to the interpretation of the human observations, corroborating the notion that AOS can be conceptualised as a deficit in feedforward control. -
Thompson, B., Raviv, L., & Kirby, S. (2020). Complexity can be maintained in small populations: A model of lexical variability in emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 440-442). Nijmegen: The Evolution of Language Conferences. -
Van Os, M., De Jong, N. H., & Bosker, H. R. (2020). Fluency in dialogue: Turn‐taking behavior shapes perceived fluency in native and nonnative speech. Language Learning, 70(4), 1183-1217. doi:10.1111/lang.12416.
Abstract
Fluency is an important part of research on second language learning, but most research on language proficiency typically has not included oral fluency as part of interaction, even though natural communication usually occurs in conversations. The present study considered aspects of turn-taking behavior as part of the construct of fluency and investigated whether these aspects differentially influence perceived fluency ratings of native and non-native speech. Results from two experiments using acoustically manipulated speech showed that, in native speech, too ‘eager’ (interrupting a question with a fast answer) and too ‘reluctant’ answers (answering slowly after a long turn gap) negatively affected fluency ratings. However, in non-native speech, only too ‘reluctant’ answers led to lower fluency ratings. Thus, we demonstrate that acoustic properties of dialogue are perceived as part of fluency. By adding to our current understanding of dialogue fluency, these lab-based findings carry implications for language teaching and assessment.
Additional information
data + R analysis script via osf -
Van Lipzig, E. v., Creemers, A., & Don, J. (2020). Morphological processing in nominalizations. Linguistics in the Netherlands, 37, 165-179. doi:10.1075/avt.00044.lip.
Abstract
A major debate in psycholinguistics concerns the representation of morphological structure in the mental lexicon. We report the results of an auditory primed lexical decision experiment in which we tested whether verbs prime their nominalizations in Dutch. We find morphological priming effects with regular nominalizations (schorsen ‘suspend’ → schorsing ‘suspension’) as well as with irregular nominalizations (schieten ‘shoot’ → schot ‘shot’). On this basis, we claim that, despite the lack of phonological identity between stem and derivation in the case of irregular nominalizations, the morphological relation between the two forms suffices to evoke a priming effect. However, an alternative explanation, according to which the semantic relation in combination with the phonological overlap accounts for the priming effect, cannot be excluded. -
Zormpa, E. (2020). Memory for speaking and listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Bosker, H. R., Tjiong, V., Quené, H., Sanders, T., & De Jong, N. H. (2015). Both native and non-native disfluencies trigger listeners' attention. In Disfluency in Spontaneous Speech: DISS 2015: An ICPhS Satellite Meeting. Edinburgh: DISS2015.
Abstract
Disfluencies, such as uh and uhm, are known to help the listener in speech comprehension. For instance, disfluencies may elicit prediction of less accessible referents and may trigger listeners’ attention to the following word. However, recent work suggests differential processing of disfluencies in native and non-native speech. The current study investigated whether the beneficial effects of disfluencies on listeners’ attention are modulated by the (non-)native identity of the speaker. Using the Change Detection Paradigm, we investigated listeners’ recall accuracy for words presented in disfluent and fluent contexts, in native and non-native speech. We observed beneficial effects of both native and non-native disfluencies on listeners’ recall accuracy, suggesting that native and non-native disfluencies trigger listeners’ attention in a similar fashion. -
Bosker, H. R., & Reinisch, E. (2015). Normalization for speechrate in native and nonnative speech. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.
Abstract
Speech perception involves a number of processes that deal with variation in the speech signal. One such process is normalization for speechrate: local temporal cues are perceived relative to the rate in the surrounding context. It is as yet unclear whether and how this perceptual effect interacts with higher level impressions of rate, such as a speaker’s nonnative identity. Nonnative speakers typically speak more slowly than natives, an experience that listeners take into account when explicitly judging the rate of nonnative speech. The present study investigated whether this is also reflected in implicit rate normalization. Results indicate that nonnative speech is implicitly perceived as faster than temporally-matched native speech, suggesting that the additional cognitive load of listening to an accent speeds up rate perception. Therefore, rate perception in speech is not dependent on syllable durations alone but also on the ease of processing of the temporal signal. -
Brown-Schmidt, S., & Konopka, A. E. (2015). Processes of incremental message planning during conversation. Psychonomic Bulletin & Review, 22, 833-843. doi:10.3758/s13423-014-0714-2.
Abstract
Speaking begins with the formulation of an intended preverbal message and linguistic encoding of this information. The transition from thought to speech occurs incrementally, with cascading planning at subsequent levels of production. In this article, we aim to specify the mechanisms that support incremental message preparation. We contrast two hypotheses about the mechanisms responsible for incorporating message-level information into a linguistic plan. According to the Initial Preparation view, messages can be encoded as fluent utterances if all information is ready before speaking begins. By contrast, on the Continuous Incrementality view, messages can be continually prepared and updated throughout the production process, allowing for fluent production even if new information is added to the message while speaking is underway. Testing these hypotheses, eye-tracked speakers in two experiments produced unscripted, conjoined noun phrases with modifiers. Both experiments showed that new message elements can be incrementally incorporated into the utterance even after articulation begins, consistent with a Continuous Incrementality view of message planning, in which messages percolate to linguistic encoding immediately as that information becomes available in the mind of the speaker. We conclude by discussing the functional role of incremental message planning in conversational speech and the situations in which this continuous incremental planning would be most likely to be observed.
Additional information
http://static-content.springer.com/esm/art%3A10.3758%2Fs13423-014-0714-2/MediaO… -
Chen, A. (2015). Children’s use of intonation in reference and the role of input. In L. Serratrice, & S. E. M. Allen (Eds.), The acquisition of reference (pp. 83-104). Amsterdam: Benjamins.
Abstract
Studies on children’s use of intonation in reference are few in number but are diverse in terms of theoretical frameworks and intonational parameters. In the current review, I present a re-analysis of the referents in each study, using a three-dimension approach (i.e. referential givenness-newness, relational givenness-newness, contrast), discuss the use of intonation at two levels (phonetic, phonological), and compare findings from different studies within a single framework. The patterns stemming from these studies may be limited in generalisability but can serve as initial hypotheses for future work. Furthermore, I examine the role of input as available in infant-directed speech in the acquisition of intonational encoding of referents. In addition, I discuss how future research can advance our knowledge. -
Choi, J., Broersma, M., & Cutler, A. (2015). Enhanced processing of a lost language: Linguistic knowledge or linguistic skill? In Proceedings of Interspeech 2015: 16th Annual Conference of the International Speech Communication Association (pp. 3110-3114).
Abstract
Same-different discrimination judgments for pairs of Korean stop consonants, or of Japanese syllables differing in phonetic segment length, were made by adult Korean adoptees in the Netherlands, by matched Dutch controls, and Korean controls. The adoptees did not outdo either control group on either task, although the same individuals had performed significantly better than matched controls on an identification learning task. This suggests that early exposure to multiple phonetic systems does not specifically improve acoustic-phonetic skills; rather, enhanced performance suggests retained language knowledge. -
Gilbers, S., Fuller, C., Gilbers, D., Broersma, M., Goudbeek, M., Free, R., & Başkent, D. (2015). Normal-hearing listeners' and cochlear implant users' perception of pitch cues in emotional speech. i-Perception, 6(5), 1-19. doi:10.1177/0301006615599139.
Abstract
In cochlear implants (CIs), acoustic speech cues, especially for pitch, are delivered in a degraded form. This study's aim is to assess whether due to degraded pitch cues, normal-hearing listeners and CI users employ different perceptual strategies to recognize vocal emotions, and, if so, how these differ. Voice actors were recorded pronouncing a nonce word in four different emotions: anger, sadness, joy, and relief. These recordings' pitch cues were phonetically analyzed. The recordings were used to test 20 normal-hearing listeners' and 20 CI users' emotion recognition. In congruence with previous studies, high-arousal emotions had a higher mean pitch, wider pitch range, and more dominant pitches than low-arousal emotions. Regarding pitch, speakers did not differentiate emotions based on valence but on arousal. Normal-hearing listeners outperformed CI users in emotion recognition, even when presented with CI simulated stimuli. However, only normal-hearing listeners recognized one particular actor's emotions worse than the other actors'. The groups behaved differently when presented with similar input, showing that they had to employ differing strategies. Considering the respective speaker's deviating pronunciation, it appears that for normal-hearing listeners, mean pitch is a more salient cue than pitch range, whereas CI users are biased toward pitch range cues. -
Hintz, F., & Huettig, F. (2015). The complexity of the visual environment modulates language-mediated eye gaze. In R. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 39-55). Berlin: Springer. doi:10.1007/978-81-322-2443-3_3.
Abstract
Three eye-tracking experiments investigated the impact of the complexity of the visual environment on the likelihood of word-object mapping taking place at phonological, semantic and visual levels of representation during language-mediated visual search. Dutch participants heard spoken target words while looking at four objects embedded in displays of different complexity and indicated the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word “beaker”, the display contained a phonological (a beaver, bever), a shape (a bobbin, klos), a semantic (a fork, vork) competitor, and an unrelated distractor (an umbrella, paraplu). When objects were presented in simple four-object displays (Experiment 2), there were clear attentional biases to all three types of competitors replicating earlier research (Huettig and McQueen, 2007). When the objects were embedded in complex scenes including four human-like characters or four meaningless visual shapes (Experiments 1, 3), there were biases in looks to visual and semantic but not to phonological competitors. In both experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects nevertheless had been retrieved. These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search. -
Hintz, F. (2015). Predicting language in different contexts: The nature and limits of mechanisms in anticipatory language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Hintz, F., & Meyer, A. S. (2015). Prediction and production of simple mathematical equations: Evidence from anticipatory eye movements. PLoS One, 10(7): e0130766. doi:10.1371/journal.pone.0130766.
Abstract
The relationship between the production and the comprehension systems has recently become a topic of interest for many psycholinguists. It has been argued that these systems are tightly linked and in particular that listeners use the production system to predict upcoming content. In this study, we tested how similar production and prediction processes are in a novel version of the visual world paradigm. Dutch speaking participants (native speakers in Experiment 1; German-Dutch bilinguals in Experiment 2) listened to mathematical equations while looking at a clock face featuring the numbers 1 to 12. On alternating trials, they either heard a complete equation ("three plus eight is eleven") or they heard the first part ("three plus eight is") and had to produce the result ("eleven") themselves. Participants were encouraged to look at the relevant numbers throughout the trial. Their eye movements were recorded and analyzed. We found that the participants' eye movements in the two tasks were overall very similar. They fixated the first and second number of the equations shortly after they were mentioned, and fixated the result number well before they named it on production trials and well before the recorded speaker named it on comprehension trials. However, all fixation latencies were shorter on production than on comprehension trials. These findings suggest that the processes involved in planning to say a word and anticipating hearing a word are quite similar, but that people are more aroused or engaged when they intend to respond than when they merely listen to another person.
Additional information
Data availability -
Huettig, F., & Brouwer, S. (2015). Delayed anticipatory spoken language processing in adults with dyslexia - Evidence from eye-tracking. Dyslexia, 21(2), 97-122. doi:10.1002/dys.1497.
Abstract
It is now well-established that anticipation of up-coming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here we investigated whether anticipatory spoken language processing is related to individuals’ word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., "Kijk naar de[COM] afgebeelde piano[COM]", look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target and thus participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. -
Huettig, F. (2015). Four central questions about prediction in language processing. Brain Research, 1626, 118-135. doi:10.1016/j.brainres.2015.02.014.
Abstract
The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing. -
Huettig, F., Srinivasan, N., & Mishra, R. (2015). Introduction to 'Attention and vision in language processing'. In R. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and vision in language processing (pp. V-IX). Berlin: Springer. -
Huettig, F. (2015). Literacy influences cognitive abilities far beyond the mastery of written language. In I. van de Craats, J. Kurvers, & R. van Hout (Eds.), Adult literacy, second language, and cognition. LESLLA Proceedings 2014. Nijmegen: Centre for Language Studies.
Abstract
Recent experimental evidence from cognitive psychology and cognitive neuroscience shows that reading acquisition has non-trivial consequences for cognitive processes other than reading per se. In the present chapter I present evidence from three areas of cognition: phonological processing, prediction in language processing, and visual search. These findings suggest that the influences of literacy on cognition are far-reaching. This implies that a good understanding of the dramatic impact of literacy acquisition on the human mind is an important prerequisite for successful education policy development and guidance of educational support. -
Jongman, S. R., Roelofs, A., & Meyer, A. S. (2015). Sustained attention in language production: An individual differences investigation. Quarterly Journal of Experimental Psychology, 68, 710-730. doi:10.1080/17470218.2014.964736.
Abstract
Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that some form of attention is required. Here, we investigated the contribution of sustained attention, which is the ability to maintain alertness over time. First, the sustained attention ability of participants was measured using auditory and visual continuous performance tasks. Next, the participants described pictures using simple noun phrases while their response times (RTs) and gaze durations were measured. Earlier research has suggested that gaze duration reflects language planning processes up to and including phonological encoding. Individual differences in sustained attention ability correlated with individual differences in the magnitude of the tail of the RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. These results suggest that language production requires sustained attention, especially after phonological encoding. -
Jongman, S. R., Meyer, A. S., & Roelofs, A. (2015). The role of sustained attention in the production of conjoined noun phrases: An individual differences study. PLoS One, 10(9): e0137557. doi:10.1371/journal.pone.0137557.
Abstract
It has previously been shown that language production, performed simultaneously with a nonlinguistic task, involves sustained attention. Sustained attention concerns the ability to maintain alertness over time. Here, we aimed to replicate the previous finding by showing that individuals call upon sustained attention when they plan single noun phrases (e.g., "the carrot") and perform a manual arrow categorization task. In addition, we investigated whether speakers also recruit sustained attention when they produce conjoined noun phrases (e.g., "the carrot and the bucket") describing two pictures, that is, when both the first and second task are linguistic. We found that sustained attention correlated with the proportion of abnormally slow phrase-production responses. Individuals with poor sustained attention displayed a greater number of very slow responses than individuals with better sustained attention. Importantly, this relationship was obtained both for the production of single phrases while performing a nonlinguistic manual task, and the production of noun phrase conjunctions in referring to two spatially separated objects. Inhibition and updating abilities were also measured. These scores did not correlate with our measure of sustained attention, suggesting that sustained attention and executive control are distinct. Overall, the results suggest that planning conjoined noun phrases involves sustained attention, and that language production happens less automatically than has often been assumed. -
Konopka, A. E., & Kuchinsky, S. E. (2015). How message similarity shapes the timecourse of sentence formulation. Journal of Memory and Language, 84, 1-23. doi:10.1016/j.jml.2015.04.003.
-
Lev-Ari, S. (2015). Adjusting the manner of language processing to the social context: Attention allocation during interactions with non-native speakers. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 185-195). New York: Springer. doi:10.1007/978-81-322-2443-3_11. -
Lev-Ari, S. (2015). Comprehending non-native speakers: Theory and evidence for adjustment in manner of processing. Frontiers in Psychology, 5: 1546. doi:10.3389/fpsyg.2014.01546.
Abstract
Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker’s upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker’s language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.
Additional information
Data Sheet 1.docx -
Mishra, R., Srinivasan, N., & Huettig, F. (Eds.). (2015). Attention and vision in language processing. Berlin: Springer. doi:10.1007/978-81-322-2443-3. -
Moers, C., Janse, E., & Meyer, A. S. (2015). Probabilistic reduction in reading aloud: A comparison of younger and older adults. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.
Abstract
Frequent and predictable words are generally pronounced with less effort and are therefore acoustically more reduced than less frequent or unpredictable words. Local predictability can be operationalised by Transitional Probability (TP), which indicates how likely a word is to occur given its immediate context. We investigated whether and how probabilistic reduction effects on word durations change with adult age when reading aloud content words embedded in sentences. The results showed equally large frequency effects on verb and noun durations for both younger (Mage = 20 years) and older (Mage = 68 years) adults. Backward TP also affected word duration for younger and older adults alike. Forward TP, however, had no significant effect on word duration in either age group. Our results resemble earlier findings of more robust Backward TP effects compared to Forward TP effects. Furthermore, unlike the often-reported decline in predictive processing with aging, probabilistic reduction effects remain stable across adulthood. -
Monaghan, P., Mattock, K., Davies, R., & Smith, A. C. (2015). Gavagai is as gavagai does: Learning nouns and verbs from cross-situational statistics. Cognitive Science, 39, 1099-1112. doi:10.1111/cogs.12186.
Abstract
Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross-situational learning studies have shown that word-object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies have presented either nouns or verbs in ambiguous contexts and thus bypass much of the complexity of multiple grammatical categories in speech. We show that noun word-learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category-level as well as individual word level ambiguity. However, nouns were learned more accurately than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning. -
Norcliffe, E., Konopka, A. E., Brown, P., & Levinson, S. C. (2015). Word order affects the time course of sentence formulation in Tzeltal. Language, Cognition and Neuroscience, 30(9), 1187-1208. doi:10.1080/23273798.2015.1006238.
Abstract
The scope of planning during sentence formulation is known to be flexible, as it can be influenced by speakers' communicative goals and language production pressures (among other factors). Two eye-tracked picture description experiments tested whether the time course of formulation is also modulated by grammatical structure and thus whether differences in linear word order across languages affect the breadth and order of conceptual and linguistic encoding operations. Native speakers of Tzeltal [a primarily verb–object–subject (VOS) language] and Dutch [a subject–verb–object (SVO) language] described pictures of transitive events. Analyses compared speakers' choice of sentence structure across events with more accessible and less accessible characters as well as the time course of formulation for sentences with different word orders. Character accessibility influenced subject selection in both languages in subject-initial and subject-final sentences, ruling against a radically incremental formulation process. In Tzeltal, subject-initial word orders were preferred over verb-initial orders when event characters had matching animacy features, suggesting a possible role for similarity-based interference in influencing word order choice. Time course analyses revealed a strong effect of sentence structure on formulation: In subject-initial sentences, in both Tzeltal and Dutch, event characters were largely fixated sequentially, while in verb-initial sentences in Tzeltal, relational information received priority over encoding of either character during the earliest stages of formulation. The results show a tight parallelism between grammatical structure and the order of encoding operations carried out during sentence formulation. -
Norcliffe, E., & Konopka, A. E. (2015). Vision and language in cross-linguistic research on sentence production. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and vision in language processing (pp. 77-96). New York: Springer. doi:10.1007/978-81-322-2443-3_5.
Abstract
To what extent are the planning processes involved in producing sentences fine-tuned to grammatical properties of specific languages? In this chapter we survey the small body of cross-linguistic research that bears on this question, focusing in particular on recent evidence from eye-tracking studies. Because eye-tracking methods provide a very fine-grained temporal measure of how conceptual and linguistic planning unfold in real time, they serve as an important complement to standard psycholinguistic methods. Moreover, the advent of portable eye-trackers in recent years has, for the first time, allowed eye-tracking techniques to be used with language populations that are located far away from university laboratories. This has created the exciting opportunity to extend the typological base of vision-based psycholinguistic research and address key questions in language production with new language comparisons. -
Piai, V., Roelofs, A., Rommers, J., & Maris, E. (2015). Beta oscillations reflect memory and motor aspects of spoken word production. Human Brain Mapping, 36(7), 2767-2780. doi:10.1002/hbm.22806.
Abstract
Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in alpha-beta desynchronization, the memory aspects have remained poorly understood. Using magnetoencephalography, we investigated the neurophysiological signature of not only motor but also memory aspects of spoken-word production. Participants named or judged pictures after reading sentences. To probe the involvement of the memory component, we manipulated sentence context. Sentence contexts were either constraining or nonconstraining toward the final word, presented as a picture. In the judgment task, participants indicated with a left-hand button press whether the picture was expected given the sentence. In the naming task, they named the picture. Naming and judgment were faster with constraining than nonconstraining contexts. Alpha-beta desynchronization was found for constraining relative to nonconstraining contexts pre-picture presentation. For the judgment task, beta desynchronization was observed in left posterior brain areas associated with conceptual processing and in right motor cortex. For the naming task, in addition to the same left posterior brain areas, beta desynchronization was found in left anterior and posterior temporal cortex (associated with memory aspects), left inferior frontal cortex, and bilateral ventral premotor cortex (associated with motor aspects). These results suggest that memory and motor components of spoken word production are reflected in overlapping brain oscillations in the beta band.
Additional information
hbm22806-sup-0001-suppinfo1.docx -
Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.
Abstract
During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing. -
Scharenborg, O., Weber, A., & Janse, E. (2015). Age and hearing loss and the use of acoustic cues in fricative categorization. The Journal of the Acoustical Society of America, 138(3), 1408-1417. doi:10.1121/1.4927728.
Abstract
This study examined the use of fricative noise information and coarticulatory cues for categorization of word-final fricatives [s] and [f] by younger and older Dutch listeners alike. Particularly, the effect of information loss in the higher frequencies on the use of these two cues for fricative categorization was investigated. If information in the higher frequencies is less strongly available, fricative identification may be impaired or listeners may learn to focus more on coarticulatory information. The present study investigates this second possibility. Phonetic categorization results showed that both younger and older Dutch listeners use the primary cue fricative noise and the secondary cue coarticulatory information to distinguish word-final [f] from [s]. Individual hearing sensitivity in the older listeners modified the use of fricative noise information, but did not modify the use of coarticulatory information. When high-frequency information was filtered out from the speech signal, fricative noise could no longer be used by the younger and older adults. Crucially, they also did not learn to rely more on coarticulatory information as a compensatory cue for fricative categorization. This suggests that listeners do not readily show compensatory use of this secondary cue to fricative identity when fricative categorization becomes difficult. -
Scharenborg, O., Weber, A., & Janse, E. (2015). The role of attentional abilities in lexically guided perceptual learning by older listeners. Attention, Perception & Psychophysics, 77(2), 493-507. doi:10.3758/s13414-014-0792-2.
Abstract
This study investigates two variables that may modify lexically-guided perceptual learning: individual hearing sensitivity and attentional abilities. Older Dutch listeners (aged 60+, varying from good hearing to mild-to-moderate high-frequency hearing loss) were tested on a lexically-guided perceptual learning task using the contrast [f]-[s]. This contrast mainly differentiates between the two consonants in the higher frequencies, and thus is supposedly challenging for listeners with hearing loss. The analyses showed that older listeners generally engage in lexically-guided perceptual learning. Hearing loss and selective attention did not modify perceptual learning in our participant sample, while attention-switching control did: listeners with poorer attention-switching control showed a stronger perceptual learning effect. We postulate that listeners with better attention-switching control may, in general, rely more strongly on bottom-up acoustic information compared to listeners with poorer attention-switching control, making them in turn less susceptible to lexically-guided perceptual learning effects. Our results, moreover, clearly show that lexically-guided perceptual learning is not lost when acoustic processing is less accurate. -
Schuerman, W. L., Nagarajan, S., & Houde, J. (2015). Changes in consonant perception driven by adaptation of vowel production to altered auditory feedback. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.
Abstract
Adaptation to altered auditory feedback has been shown to induce subsequent shifts in perception. However, it is uncertain whether these perceptual changes may generalize to other speech sounds. In this experiment, we tested whether exposing the production of a vowel to altered auditory feedback affects perceptual categorization of a consonant distinction. In two sessions, participants produced CVC words containing the vowel /i/, while intermittently categorizing stimuli drawn from a continuum between "see" and "she." In the first session feedback was unaltered, while in the second session the formants of the vowel were shifted 20% towards /u/. Adaptation to the altered vowel was found to reduce the proportion of perceived /S/ stimuli. We suggest that this reflects an alteration to the sensorimotor mapping that is shared between vowels and consonants. -
Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2015). Do we perceive others better than ourselves? A perceptual benefit for noise-vocoded speech produced by an average speaker. PLoS One, 10(7): e0129731. doi:10.1371/journal.pone.0129731.
Abstract
In different tasks involving action perception, performance has been found to be facilitated when the presented stimuli were produced by the participants themselves rather than by another participant. These results suggest that the same mental representations are accessed during both production and perception. However, with regard to spoken word perception, evidence also suggests that listeners’ representations for speech reflect the input from their surrounding linguistic community rather than their own idiosyncratic productions. Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners to frame their interpretations of incoming speech signals with regard to speaker identity. In order to determine whether word recognition evinces similar self-advantages as found in action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech representations accessed during perception of noise-vocoded speech are more reflective of the input of the speech community, and hence that speech perception is not necessarily based on representations of one’s own speech. -
Shao, Z., Roelofs, A., Martin, R., & Meyer, A. S. (2015). Selective inhibition and naming performance in semantic blocking, picture-word interference, and color-word Stroop tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41, 1806-1820. doi:10.1037/a0039363.
Abstract
In two studies, we examined whether explicit distractors are necessary and sufficient to evoke selective inhibition in three naming tasks: the semantic blocking, picture-word interference, and color-word Stroop task. Delta plots were used to quantify the size of the interference effects as a function of reaction time (RT). Selective inhibition was operationalized as the decrease in the size of the interference effect as a function of naming RT. For all naming tasks, mean naming RTs were significantly longer in the interference condition than in a control condition. The slopes of the interference effects for the longest naming RTs correlated with the magnitude of the mean interference effect in both the semantic blocking task and the picture-word interference task, suggesting that selective inhibition was involved to reduce the interference from strong semantic competitors either invoked by a single explicit competitor or strong implicit competitors in picture naming. However, there was no correlation between the slopes and the mean interference effect in the Stroop task, suggesting less importance of selective inhibition in this task despite explicit distractors. Whereas the results of the semantic blocking task suggest that an explicit distractor is not necessary for triggering inhibition, the results of the Stroop task suggest that such a distractor is not sufficient for evoking inhibition either. -
Sjerps, M. J., & Reinisch, E. (2015). Divide and conquer: How perceptual contrast sensitivity and perceptual learning cooperate in reducing input variation in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 41(3), 710-722. doi:10.1037/a0039028.
Abstract
Listeners have to overcome variability of the speech signal that can arise, for example, because of differences in room acoustics, differences in speakers’ vocal tract properties, or idiosyncrasies in pronunciation. Two mechanisms that are involved in resolving such variation are perceptually contrastive effects that arise from surrounding acoustic context and lexically guided perceptual learning. Although both processes have been studied in great detail, little attention has been paid to how they operate relative to each other in speech perception. The present study set out to address this issue. The carrier parts of exposure stimuli of a classical perceptual learning experiment were spectrally filtered such that the acoustically ambiguous final fricatives sounded relatively more like the lexically intended sound (Experiment 1) or the alternative (Experiment 2). Perceptual learning was found only in the latter case. The findings show that perceptual contrast effects precede lexically guided perceptual learning, at least in terms of temporal order, and potentially in terms of cognitive processing levels as well. -
Sjerps, M. J., & Meyer, A. S. (2015). Variation in dual-task performance reveals late initiation of speech planning in turn-taking. Cognition, 136, 304-324. doi:10.1016/j.cognition.2014.10.008.
Abstract
The smooth transitions between turns in natural conversation suggest that speakers often begin to plan their utterances while listening to their interlocutor. The presented study investigates whether this is indeed the case and, if so, when utterance planning begins. Two hypotheses were contrasted: that speakers begin to plan their turn as soon as possible (in our experiments less than a second after the onset of the interlocutor’s turn), or that they do so close to the end of the interlocutor’s turn. Turn-taking was combined with a finger tapping task to measure variations in cognitive load. We assumed that the onset of speech planning in addition to listening would be accompanied by deterioration in tapping performance. Two picture description experiments were conducted. In both experiments there were three conditions: (1) Tapping and Speaking, where participants tapped a complex pattern while taking over turns from a pre-recorded speaker, (2) Tapping and Listening, where participants carried out the tapping task while overhearing two pre-recorded speakers, and (3) Speaking Only, where participants took over turns as in the Tapping and Speaking condition but without tapping. The experiments differed in the amount of tapping training the participants received at the beginning of the session. In Experiment 2, the participants’ eye-movements were recorded in addition to their speech and tapping. Analyses of the participants’ tapping performance and eye movements showed that they initiated the cognitively demanding aspects of speech planning only shortly before the end of the turn of the preceding speaker. We argue that this is a smart planning strategy, which may be the speakers’ default in many everyday situations. -
Smith, A. C. (2015). Modelling multimodal language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Smorenburg, L., Rodd, J., & Chen, A. (2015). The effect of explicit training on the prosodic production of L2 sarcasm by Dutch learners of English. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow, UK: University of Glasgow.
Abstract
Previous research [9] suggests that Dutch learners of (British) English are not able to express sarcasm prosodically in their L2. The present study investigates whether explicit training on the prosodic markers of sarcasm in English can improve learners’ realisation of sarcasm. Sarcastic speech was elicited in short simulated telephone conversations between Dutch advanced learners of English and a native British English-speaking ‘friend’ in two sessions, fourteen days apart. Between the two sessions, participants were trained by means of (1) a presentation, (2) directed independent practice, and (3) evaluation of participants’ production and individual feedback in small groups. L1 British English-speaking raters subsequently evaluated the degree of sarcastic sounding in the participants’ responses on a five-point scale. It was found that significantly higher sarcasm ratings were given to L2 learners’ production obtained after the training than that obtained before the training; explicit training on prosody has a positive effect on learners’ production of sarcasm.
Additional information
http://www.icphs2015.info/pdfs/Papers/ICPHS0959.pdf -
Tarenskeen, S., Broersma, M., & Geurts, B. (2015). Overspecification of color, pattern, and size: Salience, absoluteness, and consistency. Frontiers in Psychology, 6: 1703. doi:10.3389/fpsyg.2015.01703.
Abstract
The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Using a between-participants design, however, we find similar rates of pattern and size overspecification, which are both lower than the rate of color overspecification. This indicates that although many speakers are more likely to include color than pattern (probably because color is more salient), they may also treat pattern like color due to a tendency toward consistency. We find no increase in size overspecification when the salience of size is increased, suggesting that speakers are more likely to include absolute than relative attributes. However, we do find an increase in size overspecification when mentioning the attributes is triggered, which again shows that speakers tend to refer in a consistent manner, and that there are circumstances in which even size overspecification is frequently produced. -
Van de Velde, M., Kempen, G., & Harbusch, K. (2015). Dative alternation and planning scope in spoken language: A corpus study on effects of verb bias in VO and OV clauses of Dutch. Lingua, 165, 92-108. doi:10.1016/j.lingua.2015.07.006.
Abstract
The syntactic structure of main and subordinate clauses is determined to a considerable extent by verb biases. For example, some English and Dutch ditransitive verbs have a preference for the prepositional object dative, whereas others are typically used with the double object dative. In this study, we compare the effect of these biases on structure selection in (S)VO and (S)OV dative clauses in the Corpus of Spoken Dutch (CGN). This comparison allowed us to make inferences about the size of the advance planning scope during spontaneous speaking: If the verb is an obligatory component of clause-level advance planning scope, as is claimed by the hypothesis of hierarchical incrementality, then biases should exert their influence on structure choices, regardless of early (VO) or late (OV) position of the verb in the clause. Conversely, if planning proceeds in a piecemeal fashion, strictly guided by lexical availability, as claimed by linear incrementality, then the verb and its associated biases can only influence structure choices in VO sentences. We tested these predictions by analyzing structure choices in the CGN, using mixed logit models. Our results support a combination of linear and hierarchical incrementality, showing a significant influence of verb bias on structure choices in VO, and a weaker (but still significant) effect in OV clauses. -
Van de Velde, M. (2015). Incrementality and flexibility in sentence production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Veenstra, A., Meyer, A. S., & Acheson, D. J. (2015). Effects of parallel planning on agreement production. Acta Psychologica, 162, 29-39. doi:10.1016/j.actpsy.2015.09.011.
Abstract
An important issue in current psycholinguistics is how the time course of utterance planning affects the generation of grammatical structures. The current study investigated the influence of parallel activation of the components of complex noun phrases on the generation of subject-verb agreement. Specifically, the lexical interference account (Gillespie & Pearlmutter, 2011b; Solomon & Pearlmutter, 2004) predicts more agreement errors (i.e., attraction) for subject phrases in which the head and local noun mismatch in number (e.g., the apple next to the pears) when nouns are planned in parallel than when they are planned in sequence. We used a speeded picture description task that yielded sentences such as the apple next to the pears is red. The objects mentioned in the noun phrase were either semantically related or unrelated. To induce agreement errors, pictures sometimes mismatched in number. In order to manipulate the likelihood of parallel processing of the objects and to test the hypothesized relationship between parallel processing and the rate of agreement errors, the pictures were either placed close together or far apart. Analyses of the participants' eye movements and speech onset latencies indicated slower processing of the first object and stronger interference from the related (compared to the unrelated) second object in the close than in the far condition. Analyses of the agreement errors yielded an attraction effect, with more errors in mismatching than in matching conditions. However, the magnitude of the attraction effect did not differ across the close and far conditions. Thus, spatial proximity encouraged parallel processing of the pictures, which led to interference of the associated conceptual and/or lexical representation, but, contrary to the prediction, it did not lead to more attraction errors. -
Verhees, M. W. F. T., Chwilla, D. J., Tromp, J., & Vissers, C. T. W. M. (2015). Contributions of emotional state and attention to the processing of syntactic agreement errors: evidence from P600. Frontiers in Psychology, 6: 388. doi:10.3389/fpsyg.2015.00388.
Abstract
The classic account of language is that language processing occurs in isolation from other cognitive systems, like perception, motor action, and emotion. The central theme of this paper is the relationship between a participant’s emotional state and language comprehension. Does emotional context affect how we process neutral words? Recent studies showed that processing of word meaning – traditionally conceived as an automatic process – is affected by emotional state. The influence of emotional state on syntactic processing is less clear. One study reported a mood-related P600 modulation, while another study did not observe an effect of mood on syntactic processing. The goals of this study were: First, to clarify whether and, if so, how mood affects syntactic processing. Second, to shed light on the underlying mechanisms by separating possible effects of mood from those of attention on syntactic processing. Event-related potentials (ERPs) were recorded while participants read syntactically correct or incorrect sentences. Mood (happy vs. sad) was manipulated by presenting film clips. Attention was manipulated by directing attention to syntactic features vs. physical features. The mood induction was effective. Interactions between mood, attention and syntactic correctness were obtained, showing that mood and attention modulated P600. The mood manipulation led to a reduction in P600 for sad as compared to happy mood when attention was directed at syntactic features. The attention manipulation led to a reduction in P600 when attention was directed at physical features compared to syntactic features for happy mood. From this we draw two conclusions: First, emotional state does affect syntactic processing. We propose mood-related differences in the reliance on heuristics as the underlying mechanism. Second, attention can contribute to emotion-related ERP effects in syntactic language processing. Therefore, future studies on the relation between language and emotion will have to control for effects of attention.