James McQueen

Publications

  • Dai, B., McQueen, J. M., Terporten, R., Hagoort, P., & Kösem, A. (2022). Distracting Linguistic Information Impairs Neural Tracking of Attended Speech. Current Research in Neurobiology, 3: 100043. doi:10.1016/j.crneur.2022.100043.

    Abstract

    Listening to speech is difficult in noisy environments, and is even harder when the interfering noise consists of intelligible speech as compared to unintelligible sounds. This suggests that the competing linguistic information interferes with the neural processing of target speech. Interference could either arise from a degradation of the neural representation of the target speech, or from increased representation of distracting speech that enters into competition with the target speech. We tested these alternative hypotheses using magnetoencephalography (MEG) while participants listened to clear target speech in the presence of distracting noise-vocoded speech. Crucially, the distractors were initially unintelligible but became more intelligible after a short training session. Results showed that comprehension of the target speech was poorer after training than before training. The neural tracking of target speech in the delta range (1–4 Hz) was reduced in strength in the presence of a more intelligible distractor. In contrast, the neural tracking of distracting signals was not significantly modulated by intelligibility. These results suggest that the presence of distracting speech signals degrades the linguistic representation of target speech carried by delta oscillations.
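A minimal sketch of how "neural tracking" of speech in the delta band is often quantified: the correlation between the 1–4 Hz component of the speech amplitude envelope and the recorded neural signal. This is not the authors' MEG pipeline; all signals, the sampling rate, and the filter order below are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 200  # Hz, assumed sampling rate for the simulated signals

def delta_band(x, fs=FS, order=4):
    """Band-pass a signal into the 1-4 Hz delta range."""
    b, a = butter(order, [1.0, 4.0], btype="band", fs=fs)
    return filtfilt(b, a, x)

def envelope(x):
    """Amplitude envelope via the analytic (Hilbert) signal."""
    return np.abs(hilbert(x))

def tracking_score(speech, neural, fs=FS):
    """Pearson correlation between the delta-band speech envelope
    and the delta-band neural signal."""
    env = delta_band(envelope(speech), fs)
    neu = delta_band(neural, fs)
    return np.corrcoef(env, neu)[0, 1]

# Simulated example: one "neural" signal partially follows the speech
# envelope, another is unrelated noise.
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / FS)
speech = np.sin(2 * np.pi * 2.0 * t) * rng.standard_normal(t.size)
neural_tracking = delta_band(envelope(speech)) + 0.5 * rng.standard_normal(t.size)
neural_random = rng.standard_normal(t.size)

print(tracking_score(speech, neural_tracking) > tracking_score(speech, neural_random))
```

The tracking score is higher for the signal that follows the envelope, which is the sense in which a distractor's increased intelligibility can "reduce" the tracking of a target.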
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Meyer, A. S. (2022). Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 2491-2496). Toronto, Canada: Cognitive Science Society.

    Abstract

    Humans differ greatly in their ability to use language. Contemporary psycholinguistic theories assume that individual differences in language skills arise from variability in linguistic experience and in general cognitive skills. While much previous research has tested the involvement of select verbal and non-verbal variables in select domains of linguistic processing, comprehensive characterizations of the relationships among the skills underlying language use are rare. We contribute to such a research program by re-analyzing a publicly available set of data from 112 young adults tested on 35 behavioral tests. The tests assessed nine key constructs reflecting linguistic processing skills, linguistic experience and general cognitive skills. Correlation and hierarchical clustering analyses of the test scores showed that most of the tests assumed to measure the same construct correlated moderately to strongly and largely clustered together. Furthermore, the results suggest important roles of processing speed in comprehension, and of linguistic experience in production.
  • Menks, W. M., Ekerdt, C., Janzen, G., Kidd, E., Lemhöfer, K., Fernández, G., & McQueen, J. M. (2022). Study protocol: A comprehensive multi-method neuroimaging approach to disentangle developmental effects and individual differences in second language learning. BMC Psychology, 10: 169. doi:10.1186/s40359-022-00873-x.

    Abstract

    Background

    While it is well established that second language (L2) learning success changes with age and across individuals, the underlying neural mechanisms responsible for this developmental shift and these individual differences are largely unknown. We will study the behavioral and neural factors that subserve new grammar and word learning in a large cross-sectional developmental sample. This study falls under the NWO (Nederlandse Organisatie voor Wetenschappelijk Onderzoek [Dutch Research Council]) Language in Interaction consortium (website: https://www.languageininteraction.nl/).
    Methods

    We will sample 360 healthy individuals across a broad age range between 8 and 25 years. In this paper, we describe the study design and protocol, which involves multiple study visits covering a comprehensive behavioral battery and extensive magnetic resonance imaging (MRI) protocols. On the basis of these measures, we will create behavioral and neural fingerprints that capture age-based and individual variability in new language learning. The behavioral fingerprint will be based on first and second language proficiency, memory systems, and executive functioning. We will map the neural fingerprint for each participant using the following MRI modalities: T1‐weighted, diffusion-weighted, resting-state functional MRI, and multiple functional-MRI paradigms. With respect to the functional MRI measures, half of the sample will learn grammatical features and half will learn words of a new language. Combining all individual fingerprints allows us to explore the neural maturation effects on grammar and word learning.
    Discussion

    This will be one of the largest neuroimaging studies to date that investigates the developmental shift in L2 learning covering preadolescence to adulthood. Our comprehensive approach of combining behavioral and neuroimaging data will contribute to the understanding of the mechanisms influencing this developmental shift and individual differences in new language learning. We aim to answer: (I) do these fingerprints differ according to age and can these explain the age-related differences observed in new language learning? And (II) which aspects of the behavioral and neural fingerprints explain individual differences (across and within ages) in grammar and word learning? The results of this study provide a unique opportunity to understand how the development of brain structure and function influence new language learning success.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2022). Acoustic correlates of Dutch lexical stress re-examined: Spectral tilt is not always more reliable than intensity. In S. Frota, M. Cruz, & M. Vigário (Eds.), Proceedings of Speech Prosody 2022 (pp. 278-282). doi:10.21437/SpeechProsody.2022-57.

    Abstract

    The present study examined two acoustic cues in the production of lexical stress in Dutch: spectral tilt and overall intensity. Sluijter and Van Heuven (1996) reported that spectral tilt is a more reliable cue to stress than intensity. However, that study included only a small number of talkers (10) and only syllables with the vowels /aː/ and /ɔ/. The present study re-examined this issue in a larger and more variable dataset. We recorded 38 native speakers of Dutch (20 females) producing 744 tokens of Dutch segmentally overlapping words (e.g., VOORnaam vs. voorNAAM, “first name” vs. “respectable”), targeting 10 different vowels, in variable sentence contexts. For each syllable, we measured overall intensity and spectral tilt following Sluijter and Van Heuven (1996). Results from Linear Discriminant Analyses showed that, for the vowel /aː/ alone, spectral tilt showed an advantage over intensity, as evidenced by higher stressed/unstressed syllable classification accuracy scores for spectral tilt. However, when all vowels were included in the analysis, the advantage disappeared. These findings confirm that spectral tilt plays a larger role in signaling stress in Dutch /aː/ but show that, for a larger sample of Dutch vowels, overall intensity and spectral tilt are equally important.
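The cue comparison above can be sketched as follows: train a Linear Discriminant Analysis classifier on one acoustic cue at a time and compare stressed/unstressed classification accuracy. This is a hypothetical illustration, not the authors' data; the simulated effect sizes and sample size are invented for demonstration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
stressed = np.repeat([0, 1], n // 2)  # 0 = unstressed, 1 = stressed

# Assumed pattern: stress shifts both cues by the same (invented) amount,
# so neither cue has an advantage in this toy dataset.
tilt = rng.normal(loc=stressed * 1.5, scale=1.0)       # spectral tilt (dB)
intensity = rng.normal(loc=stressed * 1.5, scale=1.0)  # overall intensity (dB)

accs = {}
for name, cue in [("spectral tilt", tilt), ("intensity", intensity)]:
    # 5-fold cross-validated accuracy of an LDA classifier on a single cue
    accs[name] = cross_val_score(LinearDiscriminantAnalysis(),
                                 cue.reshape(-1, 1), stressed, cv=5).mean()
    print(f"{name}: {accs[name]:.2f} classification accuracy")
```

A per-vowel version of this comparison is what distinguishes the /aː/-only result from the all-vowels result in the study.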
  • Strauß, A., Wu, T., McQueen, J. M., Scharenborg, O., & Hintz, F. (2022). The differential roles of lexical and sublexical processing during spoken-word recognition in clear and in noise. Cortex, 151, 70-88. doi:10.1016/j.cortex.2022.02.011.

    Abstract

    Successful spoken-word recognition relies on an interplay between lexical and sublexical processing. Previous research demonstrated that listeners readily shift between more lexically-biased and more sublexically-biased modes of processing in response to the situational context in which language comprehension takes place. Recognizing words in the presence of background noise reduces the perceptual evidence for the speech signal and – compared to the clear – results in greater uncertainty. It has been proposed that, when dealing with greater uncertainty, listeners rely more strongly on sublexical processing. The present study tested this proposal using behavioral and electroencephalography (EEG) measures. We reasoned that such an adjustment would be reflected in changes in the effects of variables predicting recognition performance with loci at lexical and sublexical levels, respectively. We presented native speakers of Dutch with words featuring substantial variability in (1) word frequency (locus at lexical level), (2) phonological neighborhood density (loci at lexical and sublexical levels) and (3) phonotactic probability (locus at sublexical level). Each participant heard each word in noise (presented at one of three signal-to-noise ratios) and in the clear and performed a two-stage lexical decision and transcription task while EEG was recorded. Using linear mixed-effects analyses, we observed behavioral evidence that listeners relied more strongly on sublexical processing when speech quality decreased. Mixed-effects modelling of the EEG signal in the clear condition showed that sublexical effects were reflected in early modulations of ERP components (e.g., within the first 300 ms post word onset). In noise, EEG effects occurred later and involved multiple regions activated in parallel. Taken together, we found evidence – especially in the behavioral data – supporting previous accounts that the presence of background noise induces a stronger reliance on sublexical processing.
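A schematic sketch of the kind of linear mixed-effects analysis described above: item-level predictors with loci at different processing levels (here, word frequency and phonotactic probability) predict a behavioral measure, with random intercepts for participants. The data and variable names are invented for illustration; this is not the authors' model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_items = 30, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_items),
    "frequency": np.tile(rng.standard_normal(n_items), n_subj),     # lexical predictor
    "phonotactic": np.tile(rng.standard_normal(n_items), n_subj),   # sublexical predictor
})

# Simulate response times with by-subject random intercepts and
# invented fixed effects of -30 (frequency) and -15 (phonotactics).
subj_intercept = rng.standard_normal(n_subj)[df["subject"]]
df["rt"] = (700 - 30 * df["frequency"] - 15 * df["phonotactic"]
            + 20 * subj_intercept + rng.normal(0, 50, len(df)))

# Linear mixed-effects model: fixed effects for both predictors,
# random intercepts grouped by subject.
model = smf.mixedlm("rt ~ frequency + phonotactic", df,
                    groups=df["subject"]).fit()
print(model.params["frequency"])  # recovered estimate (simulated true value: -30)
```

Comparing the relative size of such fixed effects across clear and noise conditions is one way to operationalize a shift between lexical and sublexical reliance.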
  • Bakker-Marshall, I., Takashima, A., Fernandez, C. B., Janzen, G., McQueen, J. M., & Van Hell, J. G. (2021). Overlapping and distinct neural networks supporting novel word learning in bilinguals and monolinguals. Bilingualism: Language and Cognition, 24(3), 524-536. doi:10.1017/S1366728920000589.

    Abstract

    This study investigated how bilingual experience alters neural mechanisms supporting novel word learning. We hypothesised that novel words elicit increased semantic activation in the larger bilingual lexicon, potentially stimulating stronger memory integration than in monolinguals. English monolinguals and Spanish–English bilinguals were trained on two sets of written Swahili–English word pairs, one set on each of two consecutive days, and performed a recognition task in the MRI-scanner. Lexical integration was measured through visual primed lexical decision. Surprisingly, no group difference emerged in explicit word memory, and priming occurred only in the monolingual group. This difference in lexical integration may indicate an increased need for slow neocortical interleaving of old and new information in the denser bilingual lexicon. The fMRI data were consistent with increased use of cognitive control networks in monolinguals and of articulatory motor processes in bilinguals, providing further evidence for experience-induced neural changes: monolinguals and bilinguals reached largely comparable behavioural performance levels in novel word learning, but did so by recruiting partially overlapping but non-identical neural systems to acquire novel words.
  • Goriot, C., Unsworth, S., Van Hout, R. W. N. M., Broersma, M., & McQueen, J. M. (2021). Differences in phonological awareness performance: Are there positive or negative effects of bilingual experience? Linguistic Approaches to Bilingualism, 11(3), 425-460. doi:10.1075/lab.18082.gor.

    Abstract

    Children who have knowledge of two languages may show better phonological awareness than their monolingual peers (e.g. Bruck & Genesee, 1995). It remains unclear how much bilingual experience is needed for such advantages to appear, and whether differences in language or cognitive skills alter the relation between bilingualism and phonological awareness. These questions were investigated in this cross-sectional study. Participants (n = 294; 4–7 year-olds, in the first three grades of primary school) were Dutch-speaking pupils attending mainstream monolingual Dutch primary schools or early-English schools providing English lessons from grade 1, and simultaneous Dutch-English bilinguals. We investigated phonological awareness (rhyming, phoneme blending, onset phoneme identification, and phoneme deletion) and its relation to age, Dutch vocabulary, English vocabulary, working memory and short-term memory, and the balance between Dutch and English vocabulary. Small significant (α < .05) effects of bilingualism were found on onset phoneme identification and phoneme deletion, but post-hoc comparisons revealed no robust pairwise differences between the groups. Furthermore, effects of bilingualism sometimes disappeared when differences in language or memory skills were taken into account. Learning two languages simultaneously is not beneficial to – and importantly, also not detrimental to – phonological awareness.

  • Goriot, C., Van Hout, R., Broersma, M., Lobo, V., McQueen, J. M., & Unsworth, S. (2021). Using the peabody picture vocabulary test in L2 children and adolescents: Effects of L1. International Journal of Bilingual Education and Bilingualism, 24(4), 546-568. doi:10.1080/13670050.2018.1494131.

    Abstract

    This study investigated to what extent the Peabody Picture Vocabulary Test (PPVT-4) is a reliable tool for measuring vocabulary knowledge of English as a second language (L2), and to what extent L1 characteristics affect test outcomes. The PPVT-4 was administered to Dutch pupils in six different age groups (4–15 years old) who were or were not following an English educational programme at school. Our first finding was that the PPVT-4 was not a reliable measure for pupils who were correct on maximally 24 items, but it was reliable for pupils who performed better. Second, both primary-school and secondary-school pupils performed better on items for which the phonological similarity between the English word and its Dutch translation was higher. Third, young inexperienced L2 learners’ scores were predicted by Dutch lexical frequency, while older, more experienced pupils’ scores were predicted by English frequency. These findings indicate that the PPVT may be inappropriate for use with L2 learners with limited L2 proficiency. Furthermore, comparisons of PPVT scores across learners with different L1s are confounded by effects of L1 frequency and L1–L2 similarity. The PPVT-4 is, however, a suitable measure to compare more proficient L2 learners who have the same L1.
  • Healthy Brain Study Consortium, Aarts, E., Akkerman, A., Altgassen, M., Bartels, R., Beckers, D., Bevelander, K., Bijleveld, E., Blaney Davidson, E., Boleij, A., Bralten, J., Cillessen, T., Claassen, J., Cools, R., Cornelissen, I., Dresler, M., Eijsvogels, T., Faber, M., Fernández, G., Figner, B., Fritsche, M., Füllbrunn, S., Gayet, S., Van Gelder, M. M. H. J., Van Gerven, M., Geurts, S., Greven, C. U., Groefsema, M., Haak, K., Hagoort, P., Hartman, Y., Van der Heijden, B., Hermans, E., Heuvelmans, V., Hintz, F., Den Hollander, J., Hulsman, A. M., Idesis, S., Jaeger, M., Janse, E., Janzing, J., Kessels, R. P. C., Karremans, J. C., De Kleijn, W., Klein, M., Klumpers, F., Kohn, N., Korzilius, H., Krahmer, B., De Lange, F., Van Leeuwen, J., Liu, H., Luijten, M., Manders, P., Manevska, K., Marques, J. P., Matthews, J., McQueen, J. M., Medendorp, P., Melis, R., Meyer, A. S., Oosterman, J., Overbeek, L., Peelen, M., Popma, J., Postma, G., Roelofs, K., Van Rossenberg, Y. G. T., Schaap, G., Scheepers, P., Selen, L., Starren, M., Swinkels, D. W., Tendolkar, I., Thijssen, D., Timmerman, H., Tutunji, R., Tuladhar, A., Veling, H., Verhagen, M., Verkroost, J., Vink, J., Vriezekolk, V., Vrijsen, J., Vyrastekova, J., Van der Wal, S., Willems, R. M., & Willemsen, A. (2021). Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One, 16(12): e0260952. doi:10.1371/journal.pone.0260952.

    Abstract

    The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods.

    https://www.trialregister.nl/trial/7955

  • Hintz, F., Voeten, C. C., McQueen, J. M., & Scharenborg, O. (2021). The effects of onset and offset masking on the time course of non-native spoken-word recognition in noise. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 133-139). Vienna: Cognitive Science Society.

    Abstract

    Using the visual-word paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset- or offset-masked or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that – similar to natives – non-native listeners strongly rely on word onset information during word recognition in noise.

  • Mickan, A., McQueen, J. M., Valentini, B., Piai, V., & Lemhöfer, K. (2021). Electrophysiological evidence for cross-language interference in foreign-language attrition. Neuropsychologia, 155: 107795. doi:10.1016/j.neuropsychologia.2021.107795.

    Abstract

    Foreign language attrition (FLA) appears to be driven by interference from other, more recently-used languages (Mickan et al., 2020). Here we tracked these interference dynamics electrophysiologically to further our understanding of the underlying processes. Twenty-seven Dutch native speakers learned 70 new Italian words over two days. On a third day, EEG was recorded as they performed naming tasks on half of these words in English and, finally, as their memory for all the Italian words was tested in a picture-naming task. Replicating Mickan et al., recall was slower and tended to be less complete for Italian words that were interfered with (i.e., named in English) than for words that were not. These behavioral interference effects were accompanied by an enhanced frontal N2 and a decreased late positivity (LPC) for interfered compared to not-interfered items. Moreover, interfered items elicited more theta power. We also found an increased N2 during the interference phase for items that participants were later slower to retrieve in Italian. We interpret the N2 and theta effects as markers of interference, in line with the idea that Italian retrieval at final test is hampered by competition from recently practiced English translations. The LPC, in turn, reflects the consequences of interference: the reduced accessibility of interfered Italian labels. Finally, that retrieval ease at final test was related to the degree of interference during previous English retrieval shows that FLA is already set in motion during the interference phase, and hence can be the direct consequence of using other languages.

  • Severijnen, G. G. A., Bosker, H. R., Piai, V., & McQueen, J. M. (2021). Listeners track talker-specific prosody to deal with talker-variability. Brain Research, 1769: 147605. doi:10.1016/j.brainres.2021.147605.

    Abstract

    One of the challenges in speech perception is that listeners must deal with considerable segmental and suprasegmental variability in the acoustic signal due to differences between talkers. Most previous studies have focused on how listeners deal with segmental variability. In this EEG experiment, we investigated whether listeners track talker-specific usage of suprasegmental cues to lexical stress to recognize spoken words correctly. In a three-day training phase, Dutch participants learned to map non-word minimal stress pairs onto different object referents (e.g., USklot meant “lamp”; usKLOT meant “train”). These non-words were produced by two male talkers. Critically, each talker used only one suprasegmental cue to signal stress (e.g., Talker A used only F0 and Talker B only intensity). We expected participants to learn which talker used which cue to signal stress. In the test phase, participants indicated whether spoken sentences including these non-words were correct (“The word for lamp is…”). We found that participants were slower to indicate that a stimulus was correct if the non-word was produced with the unexpected cue (e.g., Talker A using intensity). That is, if in training Talker A used F0 to signal stress, participants experienced a mismatch between predicted and perceived phonological word-forms if, at test, Talker A unexpectedly used intensity to cue stress. In contrast, the N200 amplitude, an event-related potential related to phonological prediction, was not modulated by the cue mismatch. Theoretical implications of these contrasting results are discussed. The behavioral findings illustrate talker-specific prediction of prosodic cues, picked up through perceptual learning during training.
  • Tartaro, G., Takashima, A., & McQueen, J. M. (2021). Consolidation as a mechanism for word learning in sequential bilinguals. Bilingualism: Language and Cognition, 24(5), 864-878. doi:10.1017/S1366728921000286.

    Abstract

    First-language research suggests that new words, after initial episodic-memory encoding, are consolidated and hence become lexically integrated. We asked here if lexical consolidation, about word forms and meanings, occurs in a second language. Italian–English sequential bilinguals learned novel English-like words (e.g., apricon, taught to mean “stapler”). fMRI analyses failed to reveal a predicted shift, after consolidation time, from hippocampal to temporal neocortical activity. In a pause-detection task, responses to existing phonological competitors of learned words (e.g., apricot for apricon) were slowed down if the words had been learned two days earlier (i.e., after consolidation time) but not if they had been learned the same day. In a lexical-decision task, new words primed responses to semantically-related existing words (e.g., apricon-paper) whether the words were learned that day or two days earlier. Consolidation appears to support integration of words into the bilingual lexicon, possibly more rapidly for meanings than for forms.

  • Wagner, M. A., Broersma, M., McQueen, J. M., Dhaene, S., & Lemhöfer, K. (2021). Phonetic convergence to non-native speech: Acoustic and perceptual evidence. Journal of Phonetics, 88: 101076. doi:10.1016/j.wocn.2021.101076.

    Abstract

    While the tendency of speakers to align their speech to that of others acoustic-phonetically has been widely studied among native speakers, very few studies have examined whether natives phonetically converge to non-native speakers. Here we measured native Dutch speakers’ convergence to a non-native speaker with an unfamiliar accent in a novel non-interactive task. Furthermore, we assessed the role of participants’ perceptions of the non-native accent in their tendency to converge. In addition to a perceptual measure (AXB ratings), we examined convergence on different acoustic dimensions (e.g., vowel spectra, fricative CoG, speech rate, overall f0) to determine what dimensions, if any, speakers converge to. We further combined these two types of measures to discover what dimensions weighed in raters’ judgments of convergence. The results reveal overall convergence to our non-native speaker, as indexed by both perceptual and acoustic measures. However, the ratings suggest the stronger participants rated the non-native accent to be, the less likely they were to converge. Our findings add to the growing body of evidence that natives can phonetically converge to non-native speech, even without any apparent socio-communicative motivation to do so. We argue that our results are hard to integrate with a purely social view of convergence.
  • Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.

    Abstract

    Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191– 243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, “morning,” was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake—possibly arising from nyakusake—than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.
  • McQueen, J. M. (2009). Al sprekende leert men [Inaugural lecture]. Arnhem: Drukkerij Roos en Roos.

    Abstract

    Lecture delivered upon acceptance of the position of Professor of Learning and Plasticity at the Faculty of Social Sciences of Radboud University Nijmegen, on Thursday 1 October 2009.
  • McQueen, J. M., Jesse, A., & Norris, D. (2009). No lexical–prelexical feedback during speech perception or: Is it time to stop playing those Christmas tapes? Journal of Memory and Language, 61, 1-18. doi:10.1016/j.jml.2009.03.002.

    Abstract

    The strongest support for feedback in speech perception comes from evidence of apparent lexical influence on prelexical fricative-stop compensation for coarticulation. Lexical knowledge (e.g., that the ambiguous final fricative of Christma? should be [s]) apparently influences perception of following stops. We argue that all such previous demonstrations can be explained without invoking lexical feedback. In particular, we show that one demonstration [Magnuson, J. S., McMurray, B., Tanenhaus, M. K., & Aslin, R. N. (2003). Lexical effects on compensation for coarticulation: The ghost of Christmash past. Cognitive Science, 27, 285–298] involved experimentally-induced biases (from 16 practice trials) rather than feedback. We found that the direction of the compensation effect depended on whether practice stimuli were words or nonwords. When both were used, there was no lexically-mediated compensation. Across experiments, however, there were lexical effects on fricative identification. This dissociation (lexical involvement in the fricative decisions but not in the following stop decisions made on the same trials) challenges interactive models in which feedback should cause both effects. We conclude that the prelexical level is sensitive to experimentally-induced phoneme-sequence biases, but that there is no feedback during speech perception.
  • Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles harm foreign speech perception. PLoS ONE, 4(11), e7785. doi:10.1371/journal.pone.0007785.

    Abstract

    Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.
  • Mitterer, H., & McQueen, J. M. (2009). Processing reduced word-forms in speech perception using probabilistic knowledge about speech production. Journal of Experimental Psychology: Human Perception and Performance, 35(1), 244-263. doi:10.1037/a0012730.

    Abstract

    Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with a geometrical shape. Minimal pairs (e.g., tas/tast) were either both above (boven) or both next to (naast) different shapes. Spoken instructions (e.g., “Klik op het woordje tas boven de ster,” [Click on the word bag above the star]) thus became unambiguous only on their final words. Prior to disambiguation, listeners' fixations were drawn to /t/-final words more when boven than when naast followed the ambiguous sequences. This behavior reflects Dutch speech-production data: /t/ is reduced more before /b/ than before /n/. We thus argue that probabilistic knowledge about the effect of following context in speech production is used prelexically in perception to help resolve lexical ambiguities caused by continuous-speech processes.
  • Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302-315. doi:10.3758/MC.37.3.302.

    Abstract

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We will present an analysis of false alarms to the nonsense-sign combinations—that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.
