James McQueen

Publications

  • Goriot, C., McQueen, J. M., Unsworth, S., & Van Hout, R. (2020). Perception of English phonetic contrasts by Dutch children: How bilingual are early-English learners? PLoS One, 15(3): e0229902. doi:10.1371/journal.pone.0229902.

    Abstract

    The aim of this study was to investigate whether early-English education benefits the perception of English phonetic contrasts that are known to be perceptually confusable for Dutch native speakers, comparing Dutch pupils who were enrolled in an early-English programme at school from the age of four with pupils in a mainstream programme with English instruction from the age of 11, and English-Dutch early bilingual children. Children were 4-5-year-olds (start of primary school), 8-9-year-olds, or 11-12-year-olds (end of primary school). Children were tested on four contrasts that varied in difficulty: /b/-/s/ (easy), /k/-/ɡ/ (intermediate), /f/-/θ/ (difficult), /ε/-/æ/ (very difficult). Bilingual children outperformed the two other groups on all contrasts except /b/-/s/. Early-English pupils did not outperform mainstream pupils on any of the contrasts. This shows that early-English education as it is currently implemented is not beneficial for pupils’ perception of non-native contrasts.

  • Hintz*, F., Jongman*, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). Shared lexical access processes in speaking and listening? An individual differences study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1048-1063. doi:10.1037/xlm0000768.

    Abstract

    (* indicates joint first authorship.) Lexical access is a core component of word processing. In order to produce or comprehend a word, language users must access word forms in their mental lexicon. However, despite its involvement in both tasks, previous research has often studied lexical access in either production or comprehension alone. It is therefore unknown to what extent lexical access processes are shared across the two tasks. Picture naming and auditory lexical decision are considered good tools for studying lexical access, and both are speeded tasks. Given these commonalities, another open question concerns the involvement of general cognitive abilities (e.g., processing speed) in both linguistic tasks. In the present study, we addressed these questions. We tested a large group of young adults enrolled in academic and vocational courses. Participants completed picture naming and auditory lexical decision tasks as well as a battery of tests assessing non-verbal processing speed, vocabulary, and non-verbal intelligence. Our results suggest that the lexical access processes involved in picture naming and lexical decision are related, but less closely than one might have thought. Moreover, reaction times in picture naming and lexical decision depended at least as much on general processing speed as on domain-specific linguistic processes (i.e., lexical access processes).
  • Hintz, F., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). A behavioural dataset for studying individual differences in language skills. Scientific Data, 7: 429. doi:10.1038/s41597-020-00758-x.

    Abstract

    This resource contains data from 112 Dutch adults (18–29 years of age) who completed the Individual Differences in Language Skills test battery that included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting resulting in high quality data due to tight monitoring of the experimental protocol and to the use of software and hardware that were optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills.
  • McQueen, J. M., & Dilley, L. C. (2020). Prosody and spoken-word recognition. In C. Gussenhoven, & A. Chen (Eds.), The Oxford handbook of language prosody (pp. 509-521). Oxford: Oxford University Press.

    Abstract

    This chapter outlines a Bayesian model of spoken-word recognition and reviews how prosody is part of that model. The review focuses on the information that assists the listener in recognizing the prosodic structure of an utterance and on how spoken-word recognition is also constrained by prior knowledge about prosodic structure. Recognition is argued to be a process of perceptual inference that ensures that listening is robust to variability in the speech signal. In essence, the listener makes inferences about the segmental content of each utterance, about its prosodic structure (simultaneously at different levels in the prosodic hierarchy), and about the words it contains, and uses these inferences to form an utterance interpretation. Four characteristics of the proposed prosody-enriched recognition model are discussed: parallel uptake of different information types, high contextual dependency, adaptive processing, and phonological abstraction. The next steps that should be taken to develop the model are also discussed.
  • McQueen, J. M., Eisner, F., Burgering, M. A., & Vroomen, J. (2020). Specialized memory systems for learning spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(1), 189-199. doi:10.1037/xlm0000704.

    Abstract

    Learning new words entails, inter alia, encoding of novel sound patterns and transferring those patterns from short-term to long-term memory. We report a series of 5 experiments that investigated whether the memory systems engaged in word learning are specialized for speech and whether utilization of these systems results in a benefit for word learning. Sine-wave synthesis (SWS) was applied to spoken nonwords, and listeners were or were not informed (through instruction and familiarization) that the SWS stimuli were derived from actual utterances. This allowed us to manipulate whether listeners would process sound sequences as speech or as nonspeech. In a sound–picture association learning task, listeners who processed the SWS stimuli as speech consistently learned faster and remembered more associations than listeners who processed the same stimuli as nonspeech. The advantage of listening in “speech mode” was stable over the course of 7 days. These results provide causal evidence that access to a specialized, phonological short-term memory system is important for word learning. More generally, this study supports the notion that subsystems of auditory short-term memory are specialized for processing different types of acoustic information.

  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2020). Between-language competition as a driving force in foreign language attrition. Cognition, 198: 104218. doi:10.1016/j.cognition.2020.104218.

    Abstract

    Research in the domain of memory suggests that forgetting is primarily driven by interference and competition from other, related memories. Here we ask whether similar dynamics are at play in foreign language (FL) attrition. We tested whether interference from translation equivalents in other, more recently used languages causes subsequent retrieval failure in L3. In Experiment 1, we investigated whether interference from the native language (L1) and/or from another foreign language (L2) affected L3 vocabulary retention. On day 1, Dutch native speakers learned 40 new Spanish (L3) words. On day 2, they performed a number of retrieval tasks in either Dutch (L1) or English (L2) on half of these words, and then memory for all items was tested again in L3 Spanish. Recall in Spanish was slower and less complete for words that received interference than for words that did not. In naming speed, this effect was larger for L2 than for L1 interference. Experiment 2 replicated the interference effect and asked whether the language difference could be explained by differences in frequency of use between native and non-native languages. Overall, these findings suggest that competition from more recently used languages, and especially other foreign languages, is a driving force behind FL attrition.

  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggests in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Brandmeyer, A., Sadakata, M., Spyrou, L., McQueen, J. M., & Desain, P. (2013). Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback. Frontiers in Neuroscience, 7: 265. doi:10.3389/fnins.2013.00265.

    Abstract

    Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.

  • Brandmeyer, A., Farquhar, J., McQueen, J. M., & Desain, P. (2013). Decoding speech perception by native and non-native speakers using single-trial electrophysiological data. PLoS One, 8: e68261. doi:10.1371/journal.pone.0068261.

    Abstract

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
  • Mani, N., Johnson, E., McQueen, J. M., & Huettig, F. (2013). How yellow is your banana? Toddlers' language-mediated visual search in referent-present tasks. Developmental Psychology, 49, 1036-1044. doi:10.1037/a0029382.

    Abstract

    What is the relative salience of different aspects of word meaning in the developing lexicon? The current study examines the time-course of retrieval of semantic and color knowledge associated with words during toddler word recognition: at what point do toddlers orient towards an image of a yellow cup upon hearing color-matching words such as “banana” (typically yellow) relative to unrelated words (e.g., “house”)? Do children orient faster to semantically matching images relative to color-matching images, e.g., orient faster to an image of a cookie than to a yellow cup upon hearing the word “banana”? The results strongly suggest a prioritization of semantic information over color information in children’s word-referent mappings. This indicates that, even for natural objects (e.g., food, animals that are more likely to have a prototypical color), semantic knowledge is a more salient aspect of toddlers’ word meaning than color knowledge. For 24-month-old Dutch toddlers, bananas are thus more edible than they are yellow.
  • Mitterer, H., Scharenborg, O., & McQueen, J. M. (2013). Phonological abstraction without phonemes in speech perception. Cognition, 129, 356-361. doi:10.1016/j.cognition.2013.07.011.

    Abstract

    Recent evidence shows that listeners use abstract prelexical units in speech perception. Using the phenomenon of lexical retuning in speech processing, we ask whether those units are necessarily phonemic. Dutch listeners were exposed to a Dutch speaker producing ambiguous phones between the Dutch syllable-final allophones approximant [r] and dark [l]. These ambiguous phones replaced either final /r/ or final /l/ in words in a lexical-decision task. This differential exposure affected perception of ambiguous stimuli on the same allophone continuum in a subsequent phonetic-categorization test: Listeners exposed to ambiguous phones in /r/-final words were more likely to perceive test stimuli as /r/ than listeners with exposure in /l/-final words. This effect was not found for test stimuli on continua using other allophones of /r/ and /l/. These results confirm that listeners use phonological abstraction in speech perception. They also show that context-sensitive allophones can play a role in this process, and hence that context-insensitive phonemes are not necessary. We suggest there may be no one unit of perception.
  • Sadakata, M., & McQueen, J. M. (2013). High stimulus variability in nonnative speech learning supports formation of abstract categories: Evidence from Japanese geminates. Journal of the Acoustical Society of America, 134(2), 1324-1335. doi:10.1121/1.4812767.

    Abstract

    This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2013). Evidence for precategorical extrinsic vowel normalization. Attention, Perception & Psychophysics, 75, 576-587. doi:10.3758/s13414-012-0408-7.

    Abstract

    Three experiments investigated whether extrinsic vowel normalization takes place largely at a categorical or a precategorical level of processing. Traditional vowel normalization effects in categorization were replicated in Experiment 1: Vowels taken from an [ɪ]-[ε] continuum were more often interpreted as /ɪ/ (which has a low first formant, F1) when the vowels were heard in contexts that had a raised F1 than when the contexts had a lowered F1. This was established with contexts that consisted of only two syllables. These short contexts were necessary for Experiment 2, a discrimination task that encouraged listeners to focus on the perceptual properties of vowels at a precategorical level. Vowel normalization was again found: Ambiguous vowels were more easily discriminated from an endpoint [ε] than from an endpoint [ɪ] in a high-F1 context, whereas the opposite was true in a low-F1 context. Experiment 3 measured discriminability between pairs of steps along the [ɪ]-[ε] continuum. Contextual influences were again found, but without discrimination peaks, contrary to what was predicted from the same participants' categorization behavior. Extrinsic vowel normalization therefore appears to be a process that takes place at least in part at a precategorical processing level.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2013). Foreign accent strength and listener familiarity with an accent co-determine speed of perceptual adaptation. Attention, Perception & Psychophysics, 75, 537-556. doi:10.3758/s13414-012-0404-y.

    Abstract

    We investigated how the strength of a foreign accent and varying types of experience with foreign-accented speech influence the recognition of accented words. In Experiment 1, native Dutch listeners with limited or extensive prior experience with German-accented Dutch completed a cross-modal priming experiment with strongly, medium, and weakly accented words. Participants with limited experience were primed by the medium and weakly accented words, but not by the strongly accented words. Participants with extensive experience were primed by all accent types. In Experiments 2 and 3, Dutch listeners with limited experience listened to a short story before doing the cross-modal priming task. In Experiment 2, the story was spoken by the priming task speaker and either contained strongly accented words or did not. Strongly accented exposure led to immediate priming by novel strongly accented words, while exposure to the speaker without strongly accented tokens led to priming only in the experiment’s second half. In Experiment 3, listeners listened to the story with strongly accented words spoken by a different German-accented speaker. Listeners were primed by the strongly accented words, but again only in the experiment’s second half. Together, these results show that adaptation to foreign-accented speech is rapid but depends on accent strength and on listener familiarity with those strongly accented words.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., Rudas, G., & Vidnyánszky, Z. (2010). Neural mechanisms for voice recognition. NeuroImage, 52, 1528-1540. doi:10.1016/j.neuroimage.2010.05.048.

    Abstract

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The predefined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: Increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: Response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible ‘mean voice’ representations.
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. In Abstracts of Laboratory Phonology 12 (pp. 115-116).
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2010). How abstract phonemic categories are necessary for coping with speaker-related variation. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory phonology 10 (pp. 91-111). Berlin: de Gruyter.
  • Hanulikova, A., McQueen, J. M., & Mitterer, H. (2010). Possible words and fixed stress in the segmentation of Slovak speech. Quarterly Journal of Experimental Psychology, 63, 555-579. doi:10.1080/17470210903038958.

    Abstract

    The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input—for example, single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka “hand” embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.
  • McQueen, J. M., & Cutler, A. (2010). Cognitive processes in speech perception. In W. J. Hardcastle, J. Laver, & F. E. Gibbon (Eds.), The handbook of phonetic sciences (2nd ed., pp. 489-520). Oxford: Blackwell.
  • Orfanidou, E., Adam, R., Morgan, G., & McQueen, J. M. (2010). Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language, 62(3), 272-283. doi:10.1016/j.jml.2009.12.001.

    Abstract

    Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC), listeners segment speech so as to avoid impossible words in the input. We argue here that the PWC is a modality-general principle. Deaf signers of British Sign Language (BSL) spotted real BSL signs embedded in nonsense-sign contexts more easily when the nonsense signs were possible BSL signs than when they were not. A control experiment showed that there were no articulatory differences between the different contexts. A second control experiment on segmentation in spoken Dutch strengthened the claim that the main BSL result likely reflects the operation of a lexical-viability constraint. It appears that signed and spoken languages, in spite of radical input differences, are segmented so as to leave no residues of the input that cannot be words.
  • Otake, T., McQueen, J. M., & Cutler, A. (2010). Competition in the perception of spoken Japanese words. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 114-117).

    Abstract

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors, recognition of the embedded word was more difficult than when such a sequence was compatible with few competitors. This clear effect of competition, established here for preceding context in Japanese, joins similar demonstrations, in other languages and for following contexts, to underline that the functional architecture of the human spoken-word recognition system is a universal one.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2010). Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately. Quarterly Journal of Experimental Psychology, 63(4), 772-783. doi:10.1080/17470210903104412.

    Abstract

    For optimal word recognition, listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye-tracking, we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as 'OCtopus' (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ('okTOber') before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than non-initially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal.
  • Sjerps, M. J., & McQueen, J. M. (2010). The bounds on flexibility in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 36, 195-211. doi:10.1037/a0016803.
  • Tagliapietra, L., & McQueen, J. M. (2010). What and where in speech recognition: Geminates and singletons in spoken Italian. Journal of Memory and Language, 63, 306-323. doi:10.1016/j.jml.2010.05.001.

    Abstract

    Four cross-modal repetition priming experiments examined whether consonant duration in Italian provides listeners with information not only for segmental identification ("what" information: whether the consonant is a geminate or a singleton) but also for lexical segmentation (“where” information: whether the consonant is in word-initial or word-medial position). Italian participants made visual lexical decisions to words containing geminates or singletons, preceded by spoken primes (whole words or fragments) containing either geminates or singletons. There were effects of segmental identity (geminates primed geminate recognition; singletons primed singleton recognition), and effects of consonant position (regression analyses revealed graded effects of geminate duration only for geminates which can vary in position, and mixed-effect modeling revealed a positional effect for singletons only in low-frequency words). Durational information appeared to be more important for segmental identification than for lexical segmentation. These findings nevertheless indicate that the same kind of information can serve both "what" and "where" functions in speech comprehension, and that the perceptual processes underlying those functions are interdependent.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2010). Rapid and long-lasting adaptation to foreign-accented speech [Abstract]. Journal of the Acoustical Society of America, 128, 2486.

    Abstract

    In foreign-accented speech, listeners have to handle noticeable deviations from the standard pronunciation of a target language. Three cross-modal priming experiments investigated how short- and long-term experience with a foreign accent influences word recognition by native listeners. In Experiment 1, German-accented words were presented to Dutch listeners who had either extensive or limited prior experience with German-accented Dutch. Accented words contained a diphthong substitution that deviated acoustically either quite strongly from the canonical form (huis [hys], "house", pronounced as [hoys]) or to a lesser extent (lijst [lst], "list", pronounced as [lst]). The mispronunciations never created lexical ambiguity in Dutch. While long-term experience facilitated word recognition for both types of substitutions, limited experience facilitated recognition only of words with acoustically smaller deviations. In Experiment 2, Dutch listeners with limited experience listened to the German speaker for 4 min before participating in the cross-modal priming experiment. The results showed that speaker-specific learning effects for acoustically large deviations can be obtained after only brief exposure, as long as that exposure contains evidence of the deviations. Experiment 3 investigated whether these short-term adaptation effects for foreign-accented speech are speaker-independent.
