James McQueen

Publications

  • Bakker-Marshall, I., Takashima, A., Fernandez, C. B., Janzen, G., McQueen, J. M., & Van Hell, J. G. (2021). Overlapping and distinct neural networks supporting novel word learning in bilinguals and monolinguals. Bilingualism: Language and Cognition, 24(3), 524-536. doi:10.1017/S1366728920000589.

    Abstract

    This study investigated how bilingual experience alters neural mechanisms supporting novel word learning. We hypothesised that novel words elicit increased semantic activation in the larger bilingual lexicon, potentially stimulating stronger memory integration than in monolinguals. English monolinguals and Spanish–English bilinguals were trained on two sets of written Swahili–English word pairs, one set on each of two consecutive days, and performed a recognition task in the MRI-scanner. Lexical integration was measured through visual primed lexical decision. Surprisingly, no group difference emerged in explicit word memory, and priming occurred only in the monolingual group. This difference in lexical integration may indicate an increased need for slow neocortical interleaving of old and new information in the denser bilingual lexicon. The fMRI data were consistent with increased use of cognitive control networks in monolinguals and of articulatory motor processes in bilinguals, providing further evidence for experience-induced neural changes: monolinguals and bilinguals reached largely comparable behavioural performance levels in novel word learning, but did so by recruiting partially overlapping but non-identical neural systems to acquire novel words.
  • Goriot, C., Unsworth, S., Van Hout, R. W. N. M., Broersma, M., & McQueen, J. M. (2021). Differences in phonological awareness performance: Are there positive or negative effects of bilingual experience? Linguistic Approaches to Bilingualism, 11(3), 425-460. doi:10.1075/lab.18082.gor.

    Abstract

    Children who have knowledge of two languages may show better phonological awareness than their monolingual peers (e.g. Bruck & Genesee, 1995). It remains unclear how much bilingual experience is needed for such advantages to appear, and whether differences in language or cognitive skills alter the relation between bilingualism and phonological awareness. These questions were investigated in this cross-sectional study. Participants (n = 294; 4–7 year-olds, in the first three grades of primary school) were Dutch-speaking pupils attending mainstream monolingual Dutch primary schools or early-English schools providing English lessons from grade 1, and simultaneous Dutch-English bilinguals. We investigated phonological awareness (rhyming, phoneme blending, onset phoneme identification, and phoneme deletion) and its relation to age, Dutch vocabulary, English vocabulary, working memory and short-term memory, and the balance between Dutch and English vocabulary. Small significant (α < .05) effects of bilingualism were found on onset phoneme identification and phoneme deletion, but post-hoc comparisons revealed no robust pairwise differences between the groups. Furthermore, effects of bilingualism sometimes disappeared when differences in language or memory skills were taken into account. Learning two languages simultaneously is not beneficial to – and importantly, also not detrimental to – phonological awareness.

  • Goriot, C., Van Hout, R., Broersma, M., Lobo, V., McQueen, J. M., & Unsworth, S. (2021). Using the Peabody Picture Vocabulary Test in L2 children and adolescents: Effects of L1. International Journal of Bilingual Education and Bilingualism, 24(4), 546-568. doi:10.1080/13670050.2018.1494131.

    Abstract

    This study investigated to what extent the Peabody Picture Vocabulary Test (PPVT-4) is a reliable tool for measuring vocabulary knowledge of English as a second language (L2), and to what extent L1 characteristics affect test outcomes. The PPVT-4 was administered to Dutch pupils in six different age groups (4-15 years old) who were or were not following an English educational programme at school. Our first finding was that the PPVT-4 was not a reliable measure for pupils who answered at most 24 items correctly, but it was reliable for pupils who performed better. Second, both primary-school and secondary-school pupils performed better on items for which the phonological similarity between the English word and its Dutch translation was higher. Third, young, inexperienced L2 learners’ scores were predicted by Dutch lexical frequency, while older, more experienced pupils’ scores were predicted by English frequency. These findings indicate that the PPVT may be inappropriate for use with L2 learners with limited L2 proficiency. Furthermore, comparisons of PPVT scores across learners with different L1s are confounded by effects of L1 frequency and L1-L2 similarity. The PPVT-4 is, however, a suitable measure to compare more proficient L2 learners who have the same L1.
  • Healthy Brain Study Consortium, Aarts, E., Akkerman, A., Altgassen, M., Bartels, R., Beckers, D., Bevelander, K., Bijleveld, E., Blaney Davidson, E., Boleij, A., Bralten, J., Cillessen, T., Claassen, J., Cools, R., Cornelissen, I., Dresler, M., Eijsvogels, T., Faber, M., Fernández, G., Figner, B., Fritsche, M., Füllbrunn, S., Gayet, S., Van Gelder, M. M. H. J., Van Gerven, M., Geurts, S., Greven, C. U., Groefsema, M., Haak, K., Hagoort, P., Hartman, Y., Van der Heijden, B., Hermans, E., Heuvelmans, V., Hintz, F., Den Hollander, J., Hulsman, A. M., Idesis, S., Jaeger, M., Janse, E., Janzing, J., Kessels, R. P. C., Karremans, J. C., De Kleijn, W., Klein, M., Klumpers, F., Kohn, N., Korzilius, H., Krahmer, B., De Lange, F., Van Leeuwen, J., Liu, H., Luijten, M., Manders, P., Manevska, K., Marques, J. P., Matthews, J., McQueen, J. M., Medendorp, P., Melis, R., Meyer, A. S., Oosterman, J., Overbeek, L., Peelen, M., Popma, J., Postma, G., Roelofs, K., Van Rossenberg, Y. G. T., Schaap, G., Scheepers, P., Selen, L., Starren, M., Swinkels, D. W., Tendolkar, I., Thijssen, D., Timmerman, H., Tutunji, R., Tuladhar, A., Veling, H., Verhagen, M., Verkroost, J., Vink, J., Vriezekolk, V., Vrijsen, J., Vyrastekova, J., Van der Wal, S., Willems, R. M., & Willemsen, A. (2021). Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One, 16(12): e0260952. doi:10.1371/journal.pone.0260952.

    Abstract

    The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods.

    https://www.trialregister.nl/trial/7955

    Additional information

    supplementary material
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Scharenborg, O. (2021). The effects of onset and offset masking on the time course of non-native spoken-word recognition in noise. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 133-139). Vienna: Cognitive Science Society.

    Abstract

    Using the visual-world paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset- or offset-masked or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that – similar to natives – non-native listeners strongly rely on word onset information during word recognition in noise.

    Additional information

    Link to Preprint on BioRxiv
  • Mickan, A., McQueen, J. M., Valentini, B., Piai, V., & Lemhöfer, K. (2021). Electrophysiological evidence for cross-language interference in foreign-language attrition. Neuropsychologia, 155: 107795. doi:10.1016/j.neuropsychologia.2021.107795.

    Abstract

    Foreign language attrition (FLA) appears to be driven by interference from other, more recently-used languages (Mickan et al., 2020). Here we tracked these interference dynamics electrophysiologically to further our understanding of the underlying processes. Twenty-seven Dutch native speakers learned 70 new Italian words over two days. On a third day, EEG was recorded as they performed naming tasks on half of these words in English and, finally, as their memory for all the Italian words was tested in a picture-naming task. Replicating Mickan et al., recall was slower and tended to be less complete for Italian words that were interfered with (i.e., named in English) than for words that were not. These behavioral interference effects were accompanied by an enhanced frontal N2 and a decreased late positivity (LPC) for interfered compared to not-interfered items. Moreover, interfered items elicited more theta power. We also found an increased N2 during the interference phase for items that participants were later slower to retrieve in Italian. We interpret the N2 and theta effects as markers of interference, in line with the idea that Italian retrieval at final test is hampered by competition from recently practiced English translations. The LPC, in turn, reflects the consequences of interference: the reduced accessibility of interfered Italian labels. Finally, that retrieval ease at final test was related to the degree of interference during previous English retrieval shows that FLA is already set in motion during the interference phase, and hence can be the direct consequence of using other languages.

    Additional information

    data via Donders Repository
  • Severijnen, G. G. A., Bosker, H. R., Piai, V., & McQueen, J. M. (2021). Listeners track talker-specific prosody to deal with talker-variability. Brain Research, 1769: 147605. doi:10.1016/j.brainres.2021.147605.

    Abstract

    One of the challenges in speech perception is that listeners must deal with considerable segmental and suprasegmental variability in the acoustic signal due to differences between talkers. Most previous studies have focused on how listeners deal with segmental variability. In this EEG experiment, we investigated whether listeners track talker-specific usage of suprasegmental cues to lexical stress to recognize spoken words correctly. In a three-day training phase, Dutch participants learned to map non-word minimal stress pairs onto different object referents (e.g., USklot meant “lamp”; usKLOT meant “train”). These non-words were produced by two male talkers. Critically, each talker used only one suprasegmental cue to signal stress (e.g., Talker A used only F0 and Talker B only intensity). We expected participants to learn which talker used which cue to signal stress. In the test phase, participants indicated whether spoken sentences including these non-words were correct (“The word for lamp is…”). We found that participants were slower to indicate that a stimulus was correct if the non-word was produced with the unexpected cue (e.g., Talker A using intensity). That is, if in training Talker A used F0 to signal stress, participants experienced a mismatch between predicted and perceived phonological word-forms if, at test, Talker A unexpectedly used intensity to cue stress. In contrast, the N200 amplitude, an event-related potential related to phonological prediction, was not modulated by the cue mismatch. Theoretical implications of these contrasting results are discussed. The behavioral findings illustrate talker-specific prediction of prosodic cues, picked up through perceptual learning during training.
  • Tartaro, G., Takashima, A., & McQueen, J. M. (2021). Consolidation as a mechanism for word learning in sequential bilinguals. Bilingualism: Language and Cognition, 24(5), 864-878. doi:10.1017/S1366728921000286.

    Abstract

    First-language research suggests that new words, after initial episodic-memory encoding, are consolidated and hence become lexically integrated. We asked here if lexical consolidation, about word forms and meanings, occurs in a second language. Italian–English sequential bilinguals learned novel English-like words (e.g., apricon, taught to mean “stapler”). fMRI analyses failed to reveal a predicted shift, after consolidation time, from hippocampal to temporal neocortical activity. In a pause-detection task, responses to existing phonological competitors of learned words (e.g., apricot for apricon) were slowed down if the words had been learned two days earlier (i.e., after consolidation time) but not if they had been learned the same day. In a lexical-decision task, new words primed responses to semantically-related existing words (e.g., apricon-paper) whether the words were learned that day or two days earlier. Consolidation appears to support integration of words into the bilingual lexicon, possibly more rapidly for meanings than for forms.

    Additional information

    materials, procedure, results
  • Wagner, M. A., Broersma, M., McQueen, J. M., Dhaene, S., & Lemhöfer, K. (2021). Phonetic convergence to non-native speech: Acoustic and perceptual evidence. Journal of Phonetics, 88: 101076. doi:10.1016/j.wocn.2021.101076.

    Abstract

    While the tendency of speakers to align their speech to that of others acoustic-phonetically has been widely studied among native speakers, very few studies have examined whether natives phonetically converge to non-native speakers. Here we measured native Dutch speakers’ convergence to a non-native speaker with an unfamiliar accent in a novel non-interactive task. Furthermore, we assessed the role of participants’ perceptions of the non-native accent in their tendency to converge. In addition to a perceptual measure (AXB ratings), we examined convergence on different acoustic dimensions (e.g., vowel spectra, fricative CoG, speech rate, overall f0) to determine what dimensions, if any, speakers converge to. We further combined these two types of measures to discover what dimensions weighed in raters’ judgments of convergence. The results reveal overall convergence to our non-native speaker, as indexed by both perceptual and acoustic measures. However, the ratings suggest the stronger participants rated the non-native accent to be, the less likely they were to converge. Our findings add to the growing body of evidence that natives can phonetically converge to non-native speech, even without any apparent socio-communicative motivation to do so. We argue that our results are hard to integrate with a purely social view of convergence.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., Rudas, G., & Vidnyánszky, Z. (2010). Neural mechanisms for voice recognition. NeuroImage, 52, 1528-1540. doi:10.1016/j.neuroimage.2010.05.048.

    Abstract

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The predefined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: Increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: Response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible ‘mean voice’ representations.
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. In Abstracts of Laboratory Phonology 12 (pp. 115-116).
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2010). How abstract phonemic categories are necessary for coping with speaker-related variation. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory phonology 10 (pp. 91-111). Berlin: de Gruyter.
  • Hanulikova, A., McQueen, J. M., & Mitterer, H. (2010). Possible words and fixed stress in the segmentation of Slovak speech. Quarterly Journal of Experimental Psychology, 63, 555-579. doi:10.1080/17470210903038958.

    Abstract

    The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input—for example, single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka “hand” embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.
  • McQueen, J. M., & Cutler, A. (2010). Cognitive processes in speech perception. In W. J. Hardcastle, J. Laver, & F. E. Gibbon (Eds.), The handbook of phonetic sciences (2nd ed., pp. 489-520). Oxford: Blackwell.
  • Orfanidou, E., Adam, R., Morgan, G., & McQueen, J. M. (2010). Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language, 62(3), 272-283. doi:10.1016/j.jml.2009.12.001.

    Abstract

    Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC), listeners segment speech so as to avoid impossible words in the input. We argue here that the PWC is a modality-general principle. Deaf signers of British Sign Language (BSL) spotted real BSL signs embedded in nonsense-sign contexts more easily when the nonsense signs were possible BSL signs than when they were not. A control experiment showed that there were no articulatory differences between the different contexts. A second control experiment on segmentation in spoken Dutch strengthened the claim that the main BSL result likely reflects the operation of a lexical-viability constraint. It appears that signed and spoken languages, in spite of radical input differences, are segmented so as to leave no residues of the input that cannot be words.
  • Otake, T., McQueen, J. M., & Cutler, A. (2010). Competition in the perception of spoken Japanese words. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 114-117).

    Abstract

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors, recognition of the embedded word was more difficult than when such a sequence was compatible with few competitors. This clear effect of competition, established here for preceding context in Japanese, joins similar demonstrations, in other languages and for following contexts, to underline that the functional architecture of the human spoken-word recognition system is a universal one.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2010). Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately. Quarterly Journal of Experimental Psychology, 63(4), 772-783. doi:10.1080/17470210903104412.

    Abstract

    For optimal word recognition, listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye-tracking, we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as 'OCtopus' (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ('okTOber') before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than non-initially stressed words. Listeners thus recognize words by immediately using all relevant information in the speech signal.
  • Sjerps, M. J., & McQueen, J. M. (2010). The bounds on flexibility in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 36, 195-211. doi:10.1037/a0016803.
  • Tagliapietra, L., & McQueen, J. M. (2010). What and where in speech recognition: Geminates and singletons in spoken Italian. Journal of Memory and Language, 63, 306-323. doi:10.1016/j.jml.2010.05.001.

    Abstract

    Four cross-modal repetition priming experiments examined whether consonant duration in Italian provides listeners with information not only for segmental identification ("what" information: whether the consonant is a geminate or a singleton) but also for lexical segmentation (“where” information: whether the consonant is in word-initial or word-medial position). Italian participants made visual lexical decisions to words containing geminates or singletons, preceded by spoken primes (whole words or fragments) containing either geminates or singletons. There were effects of segmental identity (geminates primed geminate recognition; singletons primed singleton recognition), and effects of consonant position (regression analyses revealed graded effects of geminate duration only for geminates which can vary in position, and mixed-effect modeling revealed a positional effect for singletons only in low-frequency words). Durational information appeared to be more important for segmental identification than for lexical segmentation. These findings nevertheless indicate that the same kind of information can serve both "what" and "where" functions in speech comprehension, and that the perceptual processes underlying those functions are interdependent.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2010). Rapid and long-lasting adaptation to foreign-accented speech [Abstract]. Journal of the Acoustical Society of America, 128, 2486.

    Abstract

    In foreign-accented speech, listeners have to handle noticeable deviations from the standard pronunciation of a target language. Three cross-modal priming experiments investigated how short- and long-term experiences with a foreign accent influence word recognition by native listeners. In experiment 1, German-accented words were presented to Dutch listeners who had either extensive or limited prior experience with German-accented Dutch. Accented words either contained a diphthong substitution that deviated acoustically quite largely from the canonical form (huis [hys], "house", pronounced as [hoys]), or that deviated acoustically to a lesser extent (lijst [lst], "list", pronounced as [lst]). The mispronunciations never created lexical ambiguity in Dutch. While long-term experience facilitated word recognition for both types of substitutions, limited experience facilitated recognition only of words with acoustically smaller deviations. In experiment 2, Dutch listeners with limited experience listened to the German speaker for 4 min before participating in the cross-modal priming experiment. The results showed that speaker-specific learning effects for acoustically large deviations can be obtained already after a brief exposure, as long as the exposure contains evidence of the deviations. Experiment 3 investigates whether these short-term adaptation effects for foreign-accented speech are speaker-independent.
  • Cutler, A., McQueen, J. M., & Zondervan, R. (2000). Proceedings of SWAP (Workshop on Spoken Word Access Processes). Nijmegen: MPI for Psycholinguistics.
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Positive and negative influences of the lexicon on phonemic decision-making. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 778-781). Beijing: China Military Friendship Publish.

    Abstract

    Lexical knowledge influences how human listeners make decisions about speech sounds. Positive lexical effects (faster responses to target sounds in words than in nonwords) are robust across several laboratory tasks, while negative effects (slower responses to targets in more word-like nonwords than in less word-like nonwords) have been found in phonetic decision tasks but not phoneme monitoring tasks. The present experiments tested whether negative lexical effects are therefore a task-specific consequence of the forced choice required in phonetic decision. We compared phoneme monitoring and phonetic decision performance using the same Dutch materials in each task. In both experiments there were positive lexical effects, but no negative lexical effects. We observe that in all studies showing negative lexical effects, the materials were made by cross-splicing, which meant that they contained perceptual evidence supporting the lexically-consistent phonemes. Lexical knowledge seems to influence phonemic decision-making only when there is evidence for the lexically-consistent phoneme in the speech signal.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Why Merge really is autonomous and parsimonious. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 47-50). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    We briefly describe the Merge model of phonemic decision-making, and, in the light of general arguments about the possible role of feedback in spoken-word recognition, defend Merge's feedforward structure. Merge not only accounts adequately for the data, without invoking feedback connections, but does so in a parsimonious manner.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Feedback on feedback on feedback: It’s feedforward. (Response to commentators). Behavioral and Brain Sciences, 23, 352-370.

    Abstract

    The central thesis of the target article was that feedback is never necessary in spoken word recognition. The commentaries present no new data and no new theoretical arguments which lead us to revise this position. In this response we begin by clarifying some terminological issues which have led to a number of significant misunderstandings. We provide some new arguments to support our case that the feedforward model Merge is indeed more parsimonious than the interactive alternatives, and that it provides a more convincing account of the data than alternative models. Finally, we extend the arguments to deal with new issues raised by the commentators, such as infant speech perception and neural architecture.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences, 23, 299-325.

    Abstract

    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
  • Norris, D., Cutler, A., McQueen, J. M., Butterfield, S., & Kearns, R. K. (2000). Language-universal constraints on the segmentation of English. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 43-46). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) [1] is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and a known boundary. The experiments examined cases where the residue was either a CV syllable with a lax vowel, or a CVC syllable with a schwa. Although neither syllable context is a possible word in English, word-spotting in both contexts was easier than with a context consisting of a single consonant. The PWC appears to be language-universal rather than language-specific.
  • Norris, D., Cutler, A., & McQueen, J. M. (2000). The optimal architecture for simulating spoken-word recognition. In C. Davis, T. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society. Adelaide: Causal Productions.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of subcategorical mismatch in word forms. The source of TRACE's failure lay not in interactive connectivity, not in the presence of inter-word competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model, which has inter-word competition, phonemic representations and continuous optimisation (but no interactive connectivity).
