Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing speech production-perception interactions through individual differences. Talk presented at Psycholinguistics in Flanders. Marche-en-Famenne. 2015-05-21 - 2015-05-22.
Abstract
This study aims to test recent theoretical frameworks in speech motor control which claim that speech production targets are specified in auditory terms. According to such frameworks, people with better auditory acuity should have more precise speech targets. Participants performed speech perception and production tasks in a counterbalanced order. Speech perception acuity was assessed using an adaptive speech discrimination task, where participants discriminated between stimuli on a /ɪ/-/ɛ/ and a /ɑ/-/ɔ/ continuum. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording of the vowels /ɪ/, /ɛ/, /ɑ/ and /ɔ/ in 288 pseudowords (18 per vowel, each of which was repeated 4 times). We predicted that speech production variability would correlate inversely with discrimination performance. Results confirmed this prediction as better discriminators had more distinctive vowel production targets. In addition, participants with higher auditory acuity produced vowels with smaller within-phoneme variability but spaced farther apart in vowel space. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interactions. -
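The two production measures described above (within-phoneme variability and the spacing of vowel targets in formant space) can be sketched in a few lines. This is not the authors' actual analysis pipeline; it is a minimal illustration, and all formant values below are fabricated for the example.

```python
import numpy as np

def within_phoneme_variability(tokens):
    """Mean Euclidean distance of each (F1, F2) token from the vowel's centroid (Hz)."""
    tokens = np.asarray(tokens, dtype=float)
    centroid = tokens.mean(axis=0)
    return float(np.linalg.norm(tokens - centroid, axis=1).mean())

def vowel_space_dispersion(centroids):
    """Mean pairwise distance between vowel centroids: larger = more distinct targets."""
    centroids = [np.asarray(c, dtype=float) for c in centroids]
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(centroids)
             for b in centroids[i + 1:]]
    return float(np.mean(dists))

# Fabricated (F1, F2) tokens in Hz, purely for illustration
i_tokens = [(400, 1900), (420, 1950), (390, 1880)]   # /I/-like productions
e_tokens = [(580, 1700), (600, 1750), (570, 1680)]   # /E/-like productions

centroids = [np.mean(i_tokens, axis=0), np.mean(e_tokens, axis=0)]
print(within_phoneme_variability(i_tokens))  # smaller = tighter production target
print(vowel_space_dispersion(centroids))     # larger = vowels spaced farther apart
```

On these toy numbers, a "good discriminator" profile would correspond to a small within-phoneme value together with a large dispersion value.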
Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing the link between speech perception and production through individual differences. Poster presented at International Congress of Phonetic Sciences, Glasgow, UK.
Abstract
This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interaction. -
Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Effects of auditory feedback consistency on vowel production. Poster presented at Psycholinguistics in Flanders, Marche-en-Famenne.
Abstract
In investigations of feedback control during speech production, researchers have focused on two different kinds of responses to erroneous or unexpected auditory feedback. Compensation refers to online, feedback-based corrections of articulations. In contrast, adaptation refers to long-term changes in the speech production system after exposure to erroneous/unexpected feedback, which may last even after feedback is normal again. In the current study, we aimed to compare both types of feedback responses by investigating the conditions under which the system starts adapting in addition to merely compensating. Participants vocalized long vowels while they were exposed to either consistently altered auditory feedback, or to feedback that was unpredictably either altered or normal. Participants were not aware of the manipulation of auditory feedback. We predicted that both conditions would elicit compensation, whereas adaptation would be stronger when the altered feedback was consistent across trials. The results show that although there seems to be somewhat more adaptation for the consistently altered feedback condition, a substantial amount of individual variability led to statistically unreliable effects at the group level. The results stress the importance of taking into account individual differences and show that people vary widely in how they respond to altered auditory feedback.
Additional information
http://figshare.com/articles/Effects_of_auditory_feedback_consistency_on_vowel_… -
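The compensation/adaptation distinction in the abstract above lends itself to a simple operationalization: compensation shows up on perturbed trials themselves, whereas adaptation persists on trials where feedback has returned to normal. The sketch below is an illustrative toy measure, not the study's actual analysis; the trial series and baseline convention are fabricated assumptions.

```python
import numpy as np

def adaptation_index(f1, perturbed, baseline_n=10):
    """Shift (Hz) of produced F1 on unperturbed trials after exposure,
    relative to a pre-exposure baseline. A persistent shift on normal
    trials suggests adaptation; purely online compensation would only
    appear on the perturbed trials themselves."""
    f1 = np.asarray(f1, dtype=float)
    perturbed = np.asarray(perturbed, dtype=bool)
    baseline = f1[:baseline_n].mean()               # assumes the first trials are unperturbed
    later = f1[baseline_n:]
    later_normal = later[~perturbed[baseline_n:]]   # normal-feedback trials after exposure
    return float(later_normal.mean() - baseline)

# Fabricated trial series: 10 baseline trials, then alternating perturbed/normal trials
f1_series = [500.0] * 10 + [480.0, 490.0] * 5
perturbed = [False] * 10 + [True, False] * 5
print(adaptation_index(f1_series, perturbed))  # negative value = persistent downward shift
```

In this toy series, normal-feedback trials after exposure sit 10 Hz below baseline, which this measure would read as adaptation.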
Franken, M. K., Eisner, F., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Following and Opposing Responses to Perturbed Auditory Feedback. Poster presented at Society for the Neurobiology of Language Annual Meeting 2015, Chicago, IL.
-
Goriot, C., Broersma, M., Unsworth, S., Van Hout, R., & McQueen, J. M. (2015). Does early foreign language education influence pupils' cognitive development?. Poster presented at the LOT summer school 2015, Leuven, Belgium.
-
Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Brain areas involved in acquisition and consolidation of novel words with/without concepts across different age groups. Talk presented at the 22nd Annual Meeting of the Society for the Scientific Study of Reading. Hawaii. 2015-07-15 - 2015-07-18.
-
Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Consolidation of novel word representation in young adults and children. Talk presented at the Magic Moments Workshop. Nijmegen, the Netherlands. 2015-03-10.
-
Viebahn, M., Buerki, A., McQueen, J. M., Ernestus, M., & Frauenfelder, U. (2015). Learning multiple pronunciation variants of French novel words with orthographic forms. Poster presented at Memory consolidation and word learning workshop, Nijmegen.
-
Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., & Vidnyánszky, Z. (2009). Neural correlates of voice category learning - An audiovisual fMRI study. Poster presented at 12th Meeting of the Hungarian Neuroscience Society, Budapest.
Abstract
Voices in the auditory modality, like faces in the visual modality, are the keys to person recognition. This fMRI experiment investigated the neural organisation of voice categories using a voice-training paradigm. Voice-morph continua were created between two female Hungarian speakers' voices saying six monosyllabic Hungarian words, one continuum per word. Listeners were trained to categorize the middle part of the continua as one voice. This trained voice category was associated with a face. Twenty-five listeners were tested twice with a one-week delay. To induce shifts in the trained category, listeners received feedback on their judgments such that the trained category was associated with different voice-morph intervals each week, allowing within-subject manipulation of whether stimuli corresponded to a trained voice-category centre, to a category boundary or to another voice. FMRI tests each week were preceded by eighty minutes of training distributed over two consecutive days. The tests included implicit and explicit categorization tasks. Voice and face selective areas were defined in separate localizer runs. Group-averaged local maxima from these runs were used for small-volume correction analyses. During implicit categorization, stimuli corresponding to trained voice-category centres elicited lower activity than other stimuli in voice-selective regions of the right STS. During explicit categorization, stimuli corresponding to trained voice-category boundaries elicited higher activity than other stimuli in voice-selective regions of the right VLPFC. Furthermore, the unimodal presentation of voices that are more associated with a face may elicit higher activity in visual areas. These results map out the way voice categories are neurally represented. -
Di Betta, A. M., Weber, A., & McQueen, J. M. (2009). Trick or treat? Adaptation to Italian-accented English speech by native English, Italian, and Dutch listeners. Poster presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009), Barcelona.
Abstract
English is spoken worldwide by both native (L1) and nonnative (L2) speakers. It is therefore imperative to establish how easily L1 and L2 speakers understand each other. We know that L1 listeners adapt to foreign-accented speech very rapidly (Clarke & Garrett, 2004), and that L2 listeners find L2 speakers (from matched and mismatched L1 backgrounds) as intelligible as native speakers (Bent & Bradlow, 2003). But foreign-accented speech can deviate widely from L1 pronunciation norms, for example when adult L2 learners experience difficulties in producing L2 phonemes that are not part of their native repertoire (Strange, 1995). For instance, Italian L2 learners of English often lengthen the lax English vowel /I/, making it sound more like the tense vowel /i/ (Flege et al., 1999). This blurs the distinction between words such as bin and bean. Unless listeners are able to adapt to it, this kind of pronunciation variance would hinder word recognition by both L1 and L2 listeners (e.g., /bin/ could mean either bin or bean). In this study, we first investigate whether Italian-accented English interferes with on-line word recognition for native English listeners and for nonnative English listeners, both those whose L1 matches the speaker's accent (i.e., Italian listeners) and those with an L1 mismatch (i.e., Dutch listeners). Second, we test whether there is perceptual adaptation to the Italian-accented speech during the experiment in each of the three listener groups. Participants in all groups took part in the same cross-modal priming experiment. They heard spoken primes and made lexical decisions to printed targets, presented at the acoustic offset of the prime. The primes, spoken by a native Italian, consisted of 80 English words, half with /I/ in their standard pronunciation but mispronounced with an /i/ (e.g., trick spoken as treek), and half with /i/ in their standard pronunciation and pronounced correctly (e.g., treat).
These words also appeared as targets, following either a related prime (which was either identical, e.g., treat-treat, or mispronounced, e.g., treek-trick) or an unrelated prime. All three listener groups showed identity priming (i.e., faster decisions to treat after hearing treat than after an unrelated prime), both overall and in each of the two halves of the experiment. In addition, the Italian listeners showed mispronunciation priming (i.e., faster decisions to trick after hearing treek than after an unrelated prime) in both halves of the experiment, while the English and Dutch listeners showed mispronunciation priming only in the second half of the experiment. These results suggest that Italian listeners, prior to the experiment, have learned to deal with Italian-accented English, and that English and Dutch listeners, during the experiment, can rapidly adapt to Italian-accented English. For listeners already familiar with a particular accent (e.g., through their own pronunciation), it appears that they have already learned how to interpret words with mispronounced vowels. Listeners who are less familiar with a foreign accent can quickly adapt to the way a particular speaker with that accent talks, even if that speaker is not talking in the listeners’ native language. -
Huettig, F., & McQueen, J. M. (2009). AM radio noise changes the dynamics of spoken word recognition. Talk presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009). Barcelona, Spain. 2009-09-09.
Abstract
Language processing does not take place in isolation from the sensory environment. Listeners are able to recognise spoken words in many different situations, ranging from carefully articulated and noise-free laboratory speech, through casual conversational speech in a quiet room, to degraded conversational speech in a busy train-station. For listeners to be able to recognize speech optimally in each of these listening situations, they must be able to adapt to the constraints of each situation. We investigated this flexibility by comparing the dynamics of the spoken-word recognition process in clear speech and speech disrupted by radio noise. In Experiment 1, Dutch participants listened to clearly articulated spoken Dutch sentences which each included a critical word while their eye movements to four visual objects presented on a computer screen were measured. There were two critical conditions. In the first, the objects included a cohort competitor (e.g., parachute, “parachute”) with the same onset as the critical spoken word (e.g., paraplu, “umbrella”) and three unrelated distractors. In the second condition, a rhyme competitor (e.g., hamer, “hammer”) of the critical word (e.g., kamer, “room”) was present in the display, again with three distractors. To maximize competitor effects, pictures of the critical words themselves were not present in the displays on the experimental trials (e.g., there was no umbrella in the display with the 'paraplu' sentence) and a passive listening task was used (Huettig & McQueen, 2007). Experiment 2 was identical to Experiment 1 except that phonemes in the spoken sentences were replaced with radio-signal noises (as in AM radio listening conditions). In each sentence, two, three or four phonemes were replaced with noises. The sentential position of these replacements was unpredictable, but the adjustments were always made to onset phonemes. The critical words (and the immediately surrounding words) were not changed.
The question was whether listeners could learn that, under these circumstances, onset information is less reliable. We predicted that participants would look less at the cohort competitors (the initial match to the competitor is less good) and more at the rhyme competitors (the initial mismatch is less bad). We observed a significant experiment by competitor type interaction. In Experiment 1 participants fixated both kinds of competitors more than unrelated distractors, but there were more and earlier looks to cohort competitors than to rhyme competitors (Allopenna et al., 1998). In Experiment 2 participants still fixated cohort competitors more than rhyme competitors, but the early cohort effect was reduced and the rhyme effect was stronger and occurred earlier. These results suggest that AM radio noise changes the dynamics of spoken word recognition. The well-attested finding of stronger reliance on word onset overlap in speech recognition appears to be due in part to the use of clear speech in most experiments. When onset information becomes less reliable, listeners appear to depend on it less. A core feature of the speech-recognition system thus appears to be its flexibility. Listeners are able to adjust the perceptual weight they assign to different parts of incoming spoken language. -
Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate context affects online word segmentation: Evidence from eye-tracking. Talk presented at "Speech perception and production in the brain" Summer Workshop of the Dutch Phonetic Society (NVFW). Leiden, the Netherlands. 2009-06-05.
-
Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate modulates lexical competition in online speech perception. Poster presented at 157th Meeting of the Acoustical Society of America, Portland, OR.
-
Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate modulates the perception of durational cues to lexical stress. Poster presented at 50th Annual Meeting of the Psychonomic Society, Boston, Mass.
Additional information
ASA09_Reinisch.pdf -
Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2009). At which processing level does extrinsic speaker information influence vowel perception?. Poster presented at 158th Meeting of the Acoustical Society of America, San Antonio, Texas.
Abstract
The interpretation of vowel sounds depends on perceived characteristics of the speaker (e.g., average first formant (F1) frequency). A vowel between /I/ and /E/ is more likely to be perceived as /I/ if a precursor sentence indicates that the speaker has a relatively high average F1. Behavioral and electrophysiological experiments investigating the locus of this extrinsic vowel normalization are reported. The normalization effect with a categorization task was first replicated. More vowels on an /I/-/E/ continuum followed by a /papu/ context were categorized as /I/ with a high-F1 context than with a low-F1 context. Two experiments then examined this context effect in a 4I-oddity discrimination task. Ambiguous vowels were more difficult to distinguish from the /I/-endpoint if the context /papu/ had a high F1 than if it had a low F1 (and vice versa for discrimination of ambiguous vowels from the /E/-endpoint). Furthermore, between-category discriminations were no easier than within-category discriminations. Together, these results suggest that the normalization mechanism operates largely at an auditory processing level. The mismatch negativity (an automatically evoked brain potential) arising from the same stimuli is being measured, to investigate whether extrinsic normalization takes place in the absence of an explicit decision task. -
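The context effect in the categorization task above is usually quantified as a shift of the 50% category boundary along the continuum. The sketch below illustrates one simple way to estimate that shift by linear interpolation; it is not the authors' analysis, and the response proportions are fabricated for the example.

```python
def boundary_step(prop_I):
    """Continuum step (by linear interpolation) at which the proportion
    of /I/ responses crosses 50%. Assumes /I/ responses decline along
    the continuum; returns None if there is no crossing."""
    for i in range(len(prop_I) - 1):
        a, b = prop_I[i], prop_I[i + 1]
        if a >= 0.5 >= b:
            return i + (a - 0.5) / (a - b)
    return None

# Fabricated /I/-response proportions along a 6-step /I/-/E/ continuum
high_f1_context = [0.95, 0.90, 0.80, 0.60, 0.30, 0.10]  # more /I/ responses overall
low_f1_context  = [0.90, 0.70, 0.40, 0.20, 0.10, 0.05]

shift = boundary_step(high_f1_context) - boundary_step(low_f1_context)
print(shift)  # positive: the boundary sits closer to the /E/ end with a high-F1 context
```

A positive shift here corresponds to the reported pattern: with a high-F1 precursor, more of the continuum is heard as /I/.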
Witteman, M. J., Weber, A., & McQueen, J. M. (2009). Recognizing German-accented Dutch: Does prior experience matter?. Poster presented at 12th NVP Winter Conference on Cognition, Brain, and Behaviour, Egmond aan Zee, Netherlands.
-
Witteman, M. J., Weber, A., & McQueen, J. M. (2010). The influence of short- and long-term experience on recognizing German-accented Dutch. Poster presented at the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.