Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.
Abstract
Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.
Abstract
The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.
Abstract
Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. However, the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.
Abstract
One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested listeners may use visual information (e.g., lipreading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.
Participants were exposed to videos of a speaker pronouncing one out of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners' category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.
Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.
Abstract
An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2017). Transfer from implicit to explicit phonological abilities in first and second language learners. Bilingualism: Language and Cognition, 20(4), 795-812. doi:10.1017/S1366728916000523.
Abstract
Children's abilities to process the phonological structure of words are important predictors of their literacy development. In the current study, we examined the interrelatedness between implicit (i.e., speech decoding) and explicit (i.e., phonological awareness) phonological abilities, and especially the role therein of lexical specificity (i.e., the ability to learn to recognize spoken words based on only minimal acoustic-phonetic differences). We tested 75 Dutch monolingual and 64 Turkish–Dutch bilingual kindergartners. SEM analyses showed that speech decoding predicted lexical specificity, which in turn predicted rhyme awareness in the first language learners but phoneme awareness in the second language learners. Moreover, in the latter group there was an impact of the second language: Dutch speech decoding and lexical specificity predicted Turkish phonological awareness, which in turn predicted Dutch phonological awareness. We conclude that language-specific phonological characteristics underlie different patterns of transfer from implicit to explicit phonological abilities in first and second language learners.
Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2017). Mapping the speech code: Cortical responses linking the perception and production of vowels. Frontiers in Human Neuroscience, 11: 161. doi:10.3389/fnhum.2017.00161.
Abstract
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation.
Schuerman, W. L., Nagarajan, S., McQueen, J. M., & Houde, J. (2017). Sensorimotor adaptation affects perceptual compensation for coarticulation. The Journal of the Acoustical Society of America, 141(4), 2693-2704. doi:10.1121/1.4979791.
Abstract
A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this effect depends on actual production experience. This study investigates whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation. Specifically, it tests whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation depends on vocalic context. It was found that participants could be sorted into three groups based on whether they tended to oppose the direction of the shifted auditory feedback, to follow it, or a mixture of the two, and that these articulatory responses, not the shifted feedback the participants heard, correlated with changes in perception. These results indicate that sensorimotor adaptation to altered feedback can affect the perception of unaltered yet coarticulatorily-dependent speech sounds, suggesting a modulatory role of sensorimotor experience in speech perception.
Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2017). Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words. Brain and Language, 167, 44-60. doi:10.1016/j.bandl.2016.05.009.
Abstract
When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems.
Van Goch, M. M., Verhoeven, L., & McQueen, J. M. (2017). Trainability in lexical specificity mediates between short-term memory and both vocabulary and rhyme awareness. Learning and Individual Differences, 57, 163-169. doi:10.1016/j.lindif.2017.05.008.
Abstract
A major goal in the early years of elementary school is learning to read, a process in which children show substantial individual differences. To shed light on the underlying processes of early literacy, this study investigates the interrelations among four known precursors to literacy: phonological short-term memory, vocabulary size, rhyme awareness, and trainability in the phonological specificity of lexical representations, by means of structural equation modelling, in a group of 101 4-year-old children. Trainability in lexical specificity was assessed by teaching children pairs of new phonologically-similar words. Standardized tests of receptive vocabulary, short-term memory, and rhyme awareness were used. The best-fitting model showed that trainability in lexical specificity partially mediated between short-term memory and both vocabulary size and rhyme awareness. These results demonstrate that individual differences in the ability to learn phonologically-similar new words are related to individual differences in vocabulary size and rhyme awareness.
Viebahn, M., Ernestus, M., & McQueen, J. M. (2017). Speaking style influences the brain’s electrophysiological response to grammatical errors in speech comprehension. Journal of Cognitive Neuroscience, 29(7), 1132-1146. doi:10.1162/jocn_a_01095.
Abstract
This electrophysiological study asked whether the brain processes grammatical gender violations in casual speech differently than in careful speech. Native speakers of Dutch were presented with utterances that contained adjective-noun pairs in which the adjective was either correctly inflected with a word-final schwa (e.g. een spannende roman "a suspenseful novel") or incorrectly uninflected without that schwa (een spannend roman). Consistent with previous findings, the uninflected adjectives elicited an electrical brain response sensitive to syntactic violations when the talker was speaking in a careful manner. When the talker was speaking in a casual manner, this response was absent. A control condition showed electrophysiological responses for carefully as well as casually produced utterances with semantic anomalies, showing that listeners were able to understand the content of both types of utterance. The results suggest that listeners take information about the speaking style of a talker into account when processing the acoustic-phonetic information provided by the speech signal. Absent schwas in casual speech are effectively not grammatical gender violations. These changes in syntactic processing are evidence of contextually-driven neural flexibility.
Adam, R., Orfanidou, E., McQueen, J. M., & Morgan, G. (2011). Sign language comprehension: Insights from misperceptions of different phonological parameters. In R. Channon, & H. Van der Hulst (Eds.), Formational units in sign languages (pp. 87-106). Berlin: Mouton de Gruyter and Ishara Press.
Cho, T., & McQueen, J. M. (2011). Perceptual recovery from consonant-cluster simplification using language-specific phonological knowledge. Journal of Psycholinguistic Research, 40, 253-274. doi:10.1007/s10936-011-9168-0.
Abstract
Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in the second word of a two-word phrase with an underlying /l/-C2-/t/ sequence. In Experiment 1 the target-bearing words had contextual lexical-semantic support. Listeners recovered deleted targets as fast and as accurately as preserved targets with both Word and Intonational Phrase (IP) boundaries between the two words. In Experiment 2, contexts were low-pass filtered. Listeners were still able to recover deleted targets as well as preserved targets in IP-boundary contexts, but better with physically-present targets than with deleted targets in Word-boundary contexts. This suggests that the benefit of having target acoustic-phonetic information emerges only when higher-order (contextual and phrase-boundary) information is not available. The strikingly efficient recovery of deleted phonemes with neither acoustic-phonetic cues nor contextual support demonstrates that language-specific phonological knowledge, rather than language-universal perceptual processes which rely on fine-grained phonetic details, is employed when the listener perceives the results of a continuous-speech process in which reduction is phonetically complete.
Hanulikova, A., Mitterer, H., & McQueen, J. M. (2011). Effects of first and second language on segmentation of non-native speech. Bilingualism: Language and Cognition, 14, 506-521. doi:10.1017/S1366728910000428.
Abstract
We examined whether Slovak-German bilinguals apply native Slovak phonological and lexical knowledge when segmenting German speech. When Slovaks listen to their native language (Hanulíková, McQueen, & Mitterer, 2010), segmentation is impaired when fixed-stress cues are absent, and, following the Possible-Word Constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997), lexical candidates are disfavored if segmentation leads to vowelless residues, unless those residues are existing Slovak words. In the present study, fixed-stress cues on German target words were again absent. Nevertheless, in support of the PWC, both German and Slovak listeners recognized German words (e.g., Rose "rose") faster in syllable contexts (suckrose) than in single-consonant contexts (krose, trose). But only the Slovak listeners recognized Rose, for example, faster in krose than in trose (k is a Slovak word, t is not). It appears that non-native listeners can suppress native stress segmentation procedures, but that they suffer from prevailing interference from native lexical knowledge.
Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.
Abstract
Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently-displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. There were significant shifts in eye gaze as targets were heard towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.
Jesse, A., & McQueen, J. M. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review, 18, 943-950. doi:10.3758/s13423-011-0129-2.
Abstract
Listeners use lexical knowledge to adjust to speakers' idiosyncratic pronunciations. Dutch listeners learn to interpret an ambiguous sound between /s/ and /f/ as /f/ if they hear it word-finally in Dutch words normally ending in /f/, but as /s/ if they hear it in normally /s/-final words. Here, we examined two positional effects in lexically guided retuning. In Experiment 1, ambiguous sounds during exposure always appeared in word-initial position (replacing the first sounds of /f/- or /s/-initial words). No retuning was found. In Experiment 2, the same ambiguous sounds always appeared word-finally during exposure. Here, retuning was found. Lexically guided perceptual learning thus appears to emerge reliably only when lexical knowledge is available as the to-be-tuned segment is initially being processed. Under these conditions, however, lexically guided retuning was position independent: It generalized across syllabic positions. Lexical retuning can thus benefit future recognition of particular sounds wherever they appear in words.
Johnson, E., McQueen, J. M., & Huettig, F. (2011). Toddlers’ language-mediated visual search: They need not have the words for it. The Quarterly Journal of Experimental Psychology, 64, 1672-1682. doi:10.1080/17470218.2011.594165.
Abstract
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., "strawberry"), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing "strawberry"? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
Poellmann, K., McQueen, J. M., & Mitterer, H. (2011). The time course of perceptual learning. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1618-1621). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.
Abstract
Two groups of participants were trained to perceive an ambiguous sound [s/f] as either /s/ or /f/ based on lexical bias: One group heard the ambiguous fricative in /s/-final words, the other in /f/-final words. This kind of exposure leads to a recalibration of the /s/-/f/ contrast [e.g., 4]. In order to investigate when and how this recalibration emerges, test trials were interspersed among training and filler trials. The learning effect needed at least 10 clear training items to arise. Its emergence seemed to occur in a rather step-wise fashion. Learning did not improve much after it first appeared. It is likely, however, that the early test trials attracted participants' attention and therefore may have interfered with the learning process.
Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate affects the perception of duration as a suprasegmental lexical-stress cue. Language and Speech, 54(2), 147-165. doi:10.1177/0023830910397489.
Abstract
Three categorization experiments investigated whether the speaking rate of a preceding sentence influences durational cues to the perception of suprasegmental lexical-stress patterns. Dutch two-syllable word fragments had to be judged as coming from one of two longer words that matched the fragment segmentally but differed in lexical stress placement. Word pairs contrasted primary stress on either the first versus the second syllable or the first versus the third syllable. Duration of the initial or the second syllable of the fragments and rate of the preceding context (fast vs. slow) were manipulated. Listeners used speaking rate to decide about the degree of stress on initial syllables whether the syllables' absolute durations were informative about stress (Experiment 1a) or not (Experiment 1b). Rate effects on the second syllable were visible only when the initial syllable was ambiguous in duration with respect to the preceding rate context (Experiment 2). Absolute second syllable durations contributed little to stress perception (Experiment 3). These results suggest that speaking rate is used to disambiguate words and that rate-modulated stress cues are more important on initial than non-initial syllables. Speaking rate affects perception of suprasegmental information.
Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate from proximal and distal contexts is used during word segmentation. Journal of Experimental Psychology: Human Perception and Performance, 37, 978-996. doi:10.1037/a0021923.
Abstract
A series of eye-tracking and categorization experiments investigated the use of speaking-rate information in the segmentation of Dutch ambiguous-word sequences. Juncture phonemes with ambiguous durations (e.g., [s] in 'eens (s)peer,' "once (s)pear," [t] in 'nooit (t)rap,' "never staircase/quick") were perceived as longer and hence more often as word-initial when following a fast than a slow context sentence. Listeners used speaking-rate information as soon as it became available. Rate information from a context proximal to the juncture phoneme and from a more distal context was used during on-line word recognition, as reflected in listeners' eye movements. Stronger effects of distal context, however, were observed in the categorization task, which measures the off-line results of the word-recognition process. In categorization, the amount of rate context had the greatest influence on the use of rate information, but in eye tracking, the rate information's proximal location was the most important. These findings constrain accounts of how speaking rate modulates the interpretation of durational cues during word recognition by suggesting that rate estimates are used to evaluate upcoming phonetic information continuously during prelexical speech processing.
Sadakata, M., & McQueen, J. M. (2011). The role of variability in non-native perceptual learning of a Japanese geminate-singleton fricative contrast. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 873-876).
Abstract
The current study reports the enhancing effect of a high variability training procedure in the learning of a Japanese geminate-singleton fricative contrast. Dutch natives took part in a five-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. They heard either many repetitions of a limited set of words recorded by a single speaker (simple training) or fewer repetitions of a more variable set of words recorded by multiple speakers (variable training). Pre-post identification evaluations and a transfer test indicated clear benefits of the variable training.
Scharenborg, O., Mitterer, H., & McQueen, J. M. (2011). Perceptual learning of liquids. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 149-152).
Abstract
Previous research on lexically-guided perceptual learning has focussed on contrasts that differ primarily in local cues, such as plosive and fricative contrasts. The present research had two aims: to investigate whether perceptual learning occurs for a contrast with non-local cues, the /l/-/r/ contrast, and to establish whether STRAIGHT can be used to create ambiguous sounds on an /l/-/r/ continuum. Listening experiments showed lexically-guided learning about the /l/-/r/ contrast. Listeners can thus tune in to unusual speech sounds characterised by non-local cues. Moreover, STRAIGHT can be used to create stimuli for perceptual learning experiments, opening up new research possibilities. Index Terms: perceptual learning, morphing, liquids, human word recognition, STRAIGHT.
Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Constraints on the processes responsible for the extrinsic normalization of vowels. Attention, Perception & Psychophysics, 73, 1195-1215. doi:10.3758/s13414-011-0096-8.
Abstract
Listeners tune in to talkers' vowels through extrinsic normalization. We asked here whether this process could be based on compensation for the Long Term Average Spectrum (LTAS) of preceding sounds and whether the mechanisms responsible for normalization are indifferent to the nature of those sounds. If so, normalization should apply to nonspeech stimuli. Previous findings were replicated with first formant (F1) manipulations of speech. Targets on a [pIt]-[pEt] (low-high F1) continuum were labeled as [pIt] more after high-F1 than after low-F1 precursors. Spectrally-rotated nonspeech versions of these materials produced similar normalization. None occurred, however, with nonspeech stimuli that were less speech-like, even though precursor-target LTAS relations were equivalent to those used earlier. Additional experiments investigated the roles of pitch movement, amplitude variation, formant location, and the stimuli's perceived similarity to speech. It appears that normalization is not restricted to speech, but that the nature of the preceding sounds does matter. Extrinsic normalization of vowels is due at least in part to an auditory process which may require familiarity with the spectro-temporal characteristics of speech.
Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Listening to different speakers: On the time-course of perceptual compensation for vocal-tract characteristics. Neuropsychologia, 49, 3831-3846. doi:10.1016/j.neuropsychologia.2011.09.044.
Abstract
This study used an active multiple-deviant oddball design to investigate the time-course of normalization processes that help listeners deal with between-speaker variability. Electroencephalograms were recorded while Dutch listeners heard sequences of non-words (standards and occasional deviants). Deviants were [ɪpapu] or [ɛpapu], and the standard was [ɪɛpapu], where [ɪɛ] was a vowel that was ambiguous between [ɛ] and [ɪ]. These sequences were presented in two conditions, which differed with respect to the vocal-tract characteristics (i.e., the average 1st formant frequency) of the [papu] part, but not of the initial vowels [ɪ], [ɛ] or [ɪɛ] (these vowels were thus identical across conditions). Listeners more often detected a shift from [ɪɛpapu] to [ɛpapu] than from [ɪɛpapu] to [ɪpapu] in the high F1 context condition; the reverse was true in the low F1 context condition. This shows that listeners' perception of vowels differs depending on the speaker's vocal-tract characteristics, as revealed in the speech surrounding those vowels. Cortical electrophysiological responses reflected this normalization process as early as about 120 ms after vowel onset, which suggests that shifts in perception precede influences due to conscious biases or decision strategies. Listeners' abilities to normalize for speaker-vocal-tract properties are for an important part the result of a process that influences representations of speech sounds early in the speech processing stream.
Sulpizio, S., & McQueen, J. M. (2011). When two newly-acquired words are one: New words differing in stress alone are not automatically represented differently. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 1385-1388).
Abstract
Do listeners use lexical stress at an early stage in word learning? Artificial-lexicon studies have shown that listeners can learn new spoken words easily. These studies used non-words differing in consonants and/or vowels, but not differing only in stress. If listeners use stress information in word learning, they should be able to learn new words that differ only in stress (e.g., BInulo-biNUlo). We investigated this issue here. When learning new words, Italian listeners relied on segmental information; they did not take stress information into account. Newly-acquired words differing in stress alone are not automatically represented as different words.
Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2011). Adapting to foreign-accented speech: The role of delay in testing. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2443.
Abstract
Understanding speech usually seems easy, but it can become noticeably harder when the speaker has a foreign accent. This is because foreign accents add considerable variation to speech. Research on foreign-accented speech shows that participants are able to adapt quickly to this type of variation. Less is known, however, about longer-term maintenance of adaptation. The current study focused on long-term adaptation by exposing native listeners to foreign-accented speech on Day 1, and testing them on comprehension of the accent one day later. Comprehension was thus not tested immediately, but only after a 24-hour period. On Day 1, native Dutch listeners listened to the speech of a Hebrew learner of Dutch while performing a phoneme monitoring task that did not depend on the talker's accent. In particular, shortening of the long vowel /i/ into /ɪ/ (e.g., lief [li:f], 'sweet', pronounced as [lɪf]) was examined. These mispronunciations did not create lexical ambiguities in Dutch. On Day 2, listeners participated in a cross-modal priming task to test their comprehension of the accent. The results will be contrasted with results from an experiment without delayed testing and related to accounts of how listeners maintain adaptation to foreign-accented speech.
Witteman, M. J., Weber, A., & McQueen, J. M. (2011). On the relationship between perceived accentedness, acoustic similarity, and processing difficulty in foreign-accented speech. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2229-2232).
Abstract
Foreign-accented speech is often perceived as more difficult to understand than native speech. What causes this potential difficulty, however, remains unknown. In the present study, we compared acoustic similarity and accent ratings of American-accented Dutch with a cross-modal priming task designed to measure online speech processing. We focused on two Dutch diphthongs: ui and ij. Though both diphthongs deviated from standard Dutch to varying degrees and perceptually varied in accent strength, native Dutch listeners recognized words containing the diphthongs easily. Thus, not all foreign-accented speech hinders comprehension, and acoustic similarity and perceived accentedness are not always predictive of processing difficulties.
Baayen, R. H., McQueen, J. M., Dijkstra, T., & Schreuder, R. (2003). Frequency effects in regular inflectional morphology: Revisiting Dutch plurals. In R. H. Baayen, & R. Schreuder (Eds.), Morphological structure in language processing (pp. 355-390). Berlin: Mouton de Gruyter.
McQueen, J. M. (2003). The ghost of Christmas future: Didn't Scrooge learn to be good? Commentary on Magnuson, McMurray, Tanenhaus and Aslin (2003). Cognitive Science, 27(5), 795-799. doi:10.1207/s15516709cog2705_6.
Abstract
Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical-prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
McQueen, J. M., & Cho, T. (2003). The use of domain-initial strengthening in segmentation of continuous English speech. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2993-2996). Adelaide: Causal Productions.
McQueen, J. M., Dahan, D., & Cutler, A. (2003). Continuity and gradedness in speech processing. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 39-78). Berlin: Mouton de Gruyter.
McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.
Abstract
Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners' slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238. doi:10.1016/S0010-0285(03)00006-9.
Abstract
This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [wɪtlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]–[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.
Abstract
Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
Scharenborg, O., McQueen, J. M., Ten Bosch, L., & Norris, D. (2003). Modelling human speech recognition using automatic speech recognition paradigms in SpeM. In Proceedings of Eurospeech 2003 (pp. 2097-2100). Adelaide: Causal Productions.
Abstract
We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is able to operate in the same optimal way as human listeners. Second, we discuss how to relate the behaviour of a recogniser, designed to discover the optimum path through a word lattice, to data from human listening experiments. We argue that this requires a metric that combines both path-based and word-based measures of recognition performance. The combined metric varies continuously as the input speech signal unfolds over time.
Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.
Abstract
We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488,520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic phonetic information is mapped onto the mental lexicon during speech comprehension.
Additional information: https://www.mpi.nl/world/dcsp/diphones/index.html
Spinelli, E., McQueen, J. M., & Cutler, A. (2003). Processing resyllabified words in French. Journal of Memory and Language, 48(2), 233-254. doi:10.1016/S0749-596X(02)00513-2.