Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., & Vidnyánszky, Z. (2009). Neural correlates of voice category learning - An audiovisual fMRI study. Poster presented at 12th Meeting of the Hungarian Neuroscience Society, Budapest.
Abstract
Voices in the auditory modality, like faces in the visual modality, are the keys to person recognition. This fMRI experiment investigated the neural organisation of voice categories using a voice-training paradigm. Voice-morph continua were created between two female Hungarian speakers' voices saying six monosyllabic Hungarian words, one continuum per word. Listeners were trained to categorize the middle part of the continua as one voice. This trained voice category was associated with a face. Twenty-five listeners were tested twice with a one-week delay. To induce shifts in the trained category, listeners received feedback on their judgments such that the trained category was associated with different voice-morph intervals each week, allowing within-subject manipulation of whether stimuli corresponded to a trained voice-category centre, to a category boundary or to another voice. FMRI tests each week were preceded by eighty minutes of training distributed over two consecutive days. The tests included implicit and explicit categorization tasks. Voice- and face-selective areas were defined in separate localizer runs. Group-averaged local maxima from these runs were used for small-volume correction analyses. During implicit categorization, stimuli corresponding to trained voice-category centres elicited lower activity than other stimuli in voice-selective regions of the right STS. During explicit categorization, stimuli corresponding to trained voice-category boundaries elicited higher activity than other stimuli in voice-selective regions of the right VLPFC. Furthermore, the unimodal presentation of voices that are more associated with a face may elicit higher activity in visual areas. These results map out the way voice categories are neurally represented.
Di Betta, A. M., Weber, A., & McQueen, J. M. (2009). Trick or treat? Adaptation to Italian-accented English speech by native English, Italian, and Dutch listeners. Poster presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009), Barcelona.
Abstract
English is spoken worldwide by both native (L1) and nonnative (L2) speakers. It is therefore imperative to establish how easily L1 and L2 speakers understand each other. We know that L1 listeners adapt to foreign-accented speech very rapidly (Clarke & Garrett, 2004), and L2 listeners find L2 speakers (from matched and mismatched L1 backgrounds) as intelligible as native speakers (Bent & Bradlow, 2003). But foreign-accented speech can deviate widely from L1 pronunciation norms, for example when adult L2 learners experience difficulties in producing L2 phonemes that are not part of their native repertoire (Strange, 1995). For instance, Italian L2 learners of English often lengthen the lax English vowel /I/, making it sound more like the tense vowel /i/ (Flege et al., 1999). This blurs the distinction between words such as bin and bean. Unless listeners are able to adapt to this kind of pronunciation variance, it would hinder word recognition by both L1 and L2 listeners (e.g., /bin/ could mean either bin or bean). In this study we investigate whether Italian-accented English interferes with on-line word recognition for native English listeners and for nonnative English listeners, both those where the L1 matches the speaker accent (i.e., Italian listeners) and those with an L1 mismatch (i.e., Dutch listeners). Second, we test whether there is perceptual adaptation to the Italian-accented speech during the experiment in each of the three listener groups. Participants in all groups took part in the same cross-modal priming experiment. They heard spoken primes and made lexical decisions to printed targets, presented at the acoustic offset of the prime. The primes, spoken by a native Italian, consisted of 80 English words, half with /I/ in their standard pronunciation but mispronounced with an /i/ (e.g., trick spoken as treek), and half with /i/ in their standard pronunciation and pronounced correctly (e.g., treat). 
These words also appeared as targets, following either a related prime (which was either identical, e.g., treat-treat, or mispronounced, e.g., treek-trick) or an unrelated prime. All three listener groups showed identity priming (i.e., faster decisions to treat after hearing treat than after an unrelated prime), both overall and in each of the two halves of the experiment. In addition, the Italian listeners showed mispronunciation priming (i.e., faster decisions to trick after hearing treek than after an unrelated prime) in both halves of the experiment, while the English and Dutch listeners showed mispronunciation priming only in the second half of the experiment. These results suggest that Italian listeners, prior to the experiment, have learned to deal with Italian-accented English, and that English and Dutch listeners, during the experiment, can rapidly adapt to Italian-accented English. Listeners who are already familiar with a particular accent (e.g., through their own pronunciation) appear to have already learned how to interpret words with mispronounced vowels. Listeners who are less familiar with a foreign accent can quickly adapt to the way a particular speaker with that accent talks, even if that speaker is not talking in the listeners' native language.
Huettig, F., & McQueen, J. M. (2009). AM radio noise changes the dynamics of spoken word recognition. Talk presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009). Barcelona, Spain. 2009-09-09.
Abstract
Language processing does not take place in isolation from the sensory environment. Listeners are able to recognise spoken words in many different situations, ranging from carefully articulated and noise-free laboratory speech, through casual conversational speech in a quiet room, to degraded conversational speech in a busy train station. For listeners to be able to recognize speech optimally in each of these listening situations, they must be able to adapt to the constraints of each situation. We investigated this flexibility by comparing the dynamics of the spoken-word recognition process in clear speech and speech disrupted by radio noise. In Experiment 1, Dutch participants listened to clearly articulated spoken Dutch sentences which each included a critical word while their eye movements to four visual objects presented on a computer screen were measured. There were two critical conditions. In the first, the objects included a cohort competitor (e.g., parachute, "parachute") with the same onset as the critical spoken word (e.g., paraplu, "umbrella") and three unrelated distractors. In the second condition, a rhyme competitor (e.g., hamer, "hammer") of the critical word (e.g., kamer, "room") was present in the display, again with three distractors. To maximize competitor effects, pictures of the critical words themselves were not present in the displays on the experimental trials (e.g., there was no umbrella in the display with the paraplu sentence) and a passive listening task was used (Huettig & McQueen, 2007). Experiment 2 was identical to Experiment 1 except that phonemes in the spoken sentences were replaced with radio-signal noises (as in AM radio listening conditions). In each sentence, two, three or four phonemes were replaced with noises. The sentential position of these replacements was unpredictable, but the adjustments were always made to onset phonemes. The critical words (and the immediately surrounding words) were not changed.
The question was whether listeners could learn that, under these circumstances, onset information is less reliable. We predicted that participants would look less at the cohort competitors (the initial match to the competitor is less good) and more at the rhyme competitors (the initial mismatch is less bad). We observed a significant experiment by competitor type interaction. In Experiment 1 participants fixated both kinds of competitors more than unrelated distractors, but there were more and earlier looks to cohort competitors than to rhyme competitors (Allopenna et al., 1998). In Experiment 2 participants still fixated cohort competitors more than rhyme competitors, but the early cohort effect was reduced and the rhyme effect was stronger and occurred earlier. These results suggest that AM radio noise changes the dynamics of spoken word recognition. The well-attested finding of stronger reliance on word onset overlap in speech recognition appears to be due in part to the use of clear speech in most experiments. When onset information becomes less reliable, listeners appear to depend on it less. A core feature of the speech-recognition system thus appears to be its flexibility. Listeners are able to adjust the perceptual weight they assign to different parts of incoming spoken language.
Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate context affects online word segmentation: Evidence from eye-tracking. Talk presented at "Speech perception and production in the brain" Summer Workshop of the Dutch Phonetic Society (NVFW). Leiden, the Netherlands. 2009-06-05.
Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate modulates lexical competition in online speech perception. Poster presented at 157th Meeting of the Acoustical Society of America, Portland, OR.
Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate modulates the perception of durational cues to lexical stress. Poster presented at 50th Annual Meeting of the Psychonomic Society, Boston, Mass.
Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2009). At which processing level does extrinsic speaker information influence vowel perception? Poster presented at 158th Meeting of the Acoustical Society of America, San Antonio, Texas.
Abstract
The interpretation of vowel sounds depends on perceived characteristics of the speaker (e.g., average first formant (F1) frequency). A vowel between /I/ and /E/ is more likely to be perceived as /I/ if a precursor sentence indicates that the speaker has a relatively high average F1. Behavioral and electrophysiological experiments investigating the locus of this extrinsic vowel normalization are reported. The normalization effect was first replicated with a categorization task. More vowels on an /I/-/E/ continuum followed by a /papu/ context were categorized as /I/ with a high-F1 context than with a low-F1 context. Two experiments then examined this context effect in a 4I-oddity discrimination task. Ambiguous vowels were more difficult to distinguish from the /I/-endpoint if the context /papu/ had a high F1 than if it had a low F1 (and vice versa for discrimination of ambiguous vowels from the /E/-endpoint). Furthermore, between-category discriminations were no easier than within-category discriminations. Together, these results suggest that the normalization mechanism operates largely at an auditory processing level. The mismatch negativity (an automatically evoked brain potential) arising from the same stimuli is being measured, to investigate whether extrinsic normalization takes place in the absence of an explicit decision task.
Witteman, M. J., Weber, A., & McQueen, J. M. (2009). Recognizing German-accented Dutch: Does prior experience matter? Poster presented at 12th NVP Winter Conference on Cognition, Brain, and Behaviour, Egmond aan Zee, The Netherlands.
Witteman, M. J., Weber, A., & McQueen, J. M. (2010). The influence of short- and long-term experience on recognizing German-accented Dutch. Poster presented at the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.
Reinisch, E., Jesse, A., & McQueen, J. M. (2007). Lexical-stress information rapidly modulates spoken-word recognition. Talk presented at Dag van de Fonetiek. Utrecht, The Netherlands. 2007-12-20.
Reinisch, E., Jesse, A., & McQueen, J. M. (2007). Tracking over time how lexical stress information modulates spoken word recognition. Poster presented at 11th Winter Conference of the Dutch Psychonomic Society, Egmond aan Zee, The Netherlands.
Sjerps, M. J., & McQueen, J. M. (2007). Nonnative phonemes are open to native interpretation: A perceptual learning study. Poster presented at 154th Meeting of the Acoustical Society of America, New Orleans, LA.
Abstract
Four experiments examined whether Dutch listeners can learn to interpret a nonnative phoneme (English [θ]) as an instance of a native category (Dutch [f] or [s]). During exposure in Experiment 1, two listener groups made lexical decisions to words and nonwords. Listeners heard [θ] replacing [f] in 20 [f]-final words (Group 1), or [s] in 20 [s]-final words (Group 2). At test, participants heard e.g. [doθ], based on the minimal pair doof/doos (deaf/box), and made visual lexical decisions to e.g. doof or doos. Group 1 were faster on doof decisions after [doθ] than after an unrelated prime; Group 2 were faster on doos decisions. The groups had thus learned that [θ] was, respectively, [f] or [s]. This learning was thorough: effects were just as large when the exposure sound was an ambiguous [fs]-mixture (Experiment 2) and when the test primes contained unambiguous [f] or [s] (Experiment 3). In Experiment 4, signal-correlated noise was used as the exposure sound. Listeners learned that the noise was an [f], irrespective of [f]- or [s]-biased exposure, showing that learning is determined by the new sound’s spectral characteristics. Perceptual learning in a native language is thorough, and can override years of second-language phonetic learning.