James McQueen

Presentations

  • Uluşahin, O., Bosker, H. R., Meyer, A. S., & McQueen, J. M. (2024). Existing talker information may hinder convergence in synchronous speech. Talk presented at Psycholinguistics in Flanders (PiF 2024). Brussels, Belgium. 2024-05-27 - 2024-05-28.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2023). Individual differences in lexical stress in Dutch: An examination of cue weighting in production. Talk presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023). Nijmegen, The Netherlands. 2023-06-02 - 2023-06-04.
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Meyer, A. S. (2022). Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. Talk presented at the 44th Annual Meeting of the Cognitive Science Society (CogSci 2022). Toronto, Canada. 2022-07-27 - 2022-07-30.
  • Hintz, F., McQueen, J. M., & Meyer, A. S. (2022). The principal dimensions of speaking and listening skills. Talk presented at the 22nd Conference of the European Society for Cognitive Psychology (ESCOP 2022). Lille, France. 2022-08-29 - 2022-09-01.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2022). Acoustic correlates of Dutch lexical stress re-examined: Spectral tilt is not always more reliable than intensity. Talk presented at Speech Prosody 2022. Lisbon, Portugal. 2022-05-23 - 2022-05-26.
  • Takashima, A., Hintz, F., McQueen, J. M., Meyer, A. S., & Hagoort, P. (2022). The neuronal underpinnings of variability in language skills. Talk presented at the 22nd Conference of the European Society for Cognitive Psychology (ESCOP 2022). Lille, France. 2022-08-29 - 2022-09-01.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2022). Both contextual and talker-bound F0 information affect voiceless fricative perception. Talk presented at De Dag van de Fonetiek. Utrecht, The Netherlands. 2022-12-16.
  • Bujok, R., Bultena, S., McQueen, J. M., & Broersma, M. (2021). Accent adaptation through error-based learning. Talk presented at EDLL 2021 - International Conference on Error-Driven Learning in Language. Tübingen, Germany. 2021-03-10 - 2021-03-12.
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Scharenborg, O. (2021). Effects of masking position on the time course of spoken word comprehension in noise. Talk presented at the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021). Vienna, Austria. 2021-07-26 - 2021-07-29.
  • Hintz, F., Voeten, C. C., Isakoglou, C., McQueen, J. M., & Meyer, A. S. (2021). Individual differences in language ability: Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. Talk presented at the 34th Annual CUNY Conference on Human Sentence Processing (CUNY 2021). Philadelphia, USA. 2021-03-04 - 2021-03-06.
  • Hintz, F., Jongman, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2019). Assessing individual differences in language processing: A novel research tool. Talk presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019). Tenerife, Spain. 2019-09-25 - 2019-09-28.

    Abstract

    Individual differences in language processing are prevalent in our daily lives. However, for decades, psycholinguistic research has largely ignored variation within the normal range of abilities. Recently, scientists have begun to acknowledge the importance of inter-individual variability for a comprehensive characterization of the language system. In spite of this change of attitude, empirical research on individual differences is still sparse, in part due to the lack of a suitable research tool. Here, we present a novel battery of behavioral tests for assessing individual differences in language skills in younger adults. The Dutch prototype comprises 29 subtests and assesses many aspects of language knowledge (grammar and vocabulary), linguistic processing skills (word and sentence level) and general cognitive abilities involved in using language (e.g., working memory, IQ). Using the battery, researchers can determine performance profiles for individuals and link them to neurobiological or genetic data.
  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2019). New in, old out? Does learning a new foreign language make you forget previously learned foreign languages? Talk presented at the third Vocab@ conference. Leuven, Belgium. 2019-07-01 - 2019-07-03.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. Auditory feedback processing has been studied using perturbed auditory feedback. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate by opposing the feedback perturbation. For example, when speakers hear themselves at a higher pitch than intended, they compensate by lowering their pitch. However, sometimes speakers follow the perturbation instead (i.e., raising their pitch in response to higher-than-expected pitch). Although most past studies observe some following responses, current theoretical frameworks cannot account for them. In addition, recent experimental work has suggested that following responses may be more common than has been assumed to date.
    In the current study, we performed two experiments (N = 39 and N = 24) to investigate whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. Participants vocalized while they tried to match a target pitch level. Meanwhile, the pitch in their auditory feedback was briefly (500 ms) perturbed in half of the vocalizations, increasing or decreasing pitch by 25 cents. None of the participants were aware of these manipulations. Subsequently, we analyzed the pitch contour of the participants’ vocalizations.
    The results suggest that whether a response to unexpected feedback is opposing or following depends on ongoing fluctuations of the production system: the system initially responds by doing the opposite of what it was already doing at perturbation onset. In addition, the results show that all speakers produce both following and opposing responses, although the distribution of response types varies across individuals.
    Both the interaction with ongoing fluctuations of the speech system and the non-trivial proportion of following responses suggest that current production models are inadequate: they need to account for why responses to unexpected sensory feedback depend on the production system’s state at the time of perturbation. More generally, the current study indicates that looking beyond the average response can lead to a more complete view of the nature of feedback processing in motor control. Future work should explore whether the direction of feedback-based control in domains outside of speech production is also conditional on the state of the motor system at the time of the perturbation.
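    As a point of reference for the perturbation size reported in the abstract above (and in the MEG study further down this list): by the standard definition of the cent (100 cents = 1 semitone, 1200 cents = 1 octave), a shift of c cents scales fundamental frequency by a fixed ratio,

    ```latex
    f_{\text{shifted}} = f_0 \cdot 2^{c/1200}, \qquad
    c = 25 \;\Rightarrow\; 2^{25/1200} \approx 1.0145
    ```

    i.e., a change of roughly 1.45% in f0, which is consistent with the report that none of the participants noticed the manipulation.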
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested that listeners may use various sources of information, such as lexical knowledge or visual cues (e.g., lip-reading), to recalibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.
    Participants were exposed to videos of a speaker pronouncing one out of two vowels (Dutch vowels /e/ and /ø/), paired with audio that was ambiguous between the two vowels. The most ambiguous vowel token was determined on an individual basis by a categorization task at the beginning of the experiment. In one group of participants, this auditory token was paired with a video of an /e/ articulation, in the other group with an /ø/ video. After exposure to these videos, it was found in an audio-only categorization task that participants had adapted their categorization behavior as a function of the video exposure. The group that was exposed to /e/ videos showed a reduction of /ø/ classifications, suggesting they had recalibrated their vowel categories based on the available visual information. These results show that listeners indeed use visual information to recalibrate vowel categories, which is in line with previous work on audiovisual recalibration in consonant categories, and lexically-guided recalibration in both vowels and consonants.
    In addition, a secondary aim of the current study was to explore individual variability in audiovisual recalibration. Phoneme categories vary not only in terms of boundary location, but also in terms of boundary sharpness, or how strictly categories are distinguished. The present study explores whether this sharpness is associated with the amount of audiovisual recalibration. The results tentatively suggest that a fuzzy boundary is associated with stronger recalibration, indicating that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. If listeners with fuzzy boundaries assign more weight to visual cues, then, given that vowel categories have less sharp boundaries than consonant categories, there ought to be audiovisual recalibration for vowels as well. This is exactly what was found in the current study.
  • Goriot, C., Van Hout, R., Broersma, M., Unsworth, S., & McQueen, J. M. (2017). Executive functioning in early bilinguals, second language learners and monolinguals: Does language balance play a role? Talk presented at the 5th Barcelona Summer School on Bilingualism and Multilingualism. Barcelona, Spain. 2017-09-12 - 2017-09-15.
  • Goriot, C., Broersma, M., Van Hout, R., McQueen, J. M., & Unsworth, S. (2017). Perception of English speech sounds among Dutch primary-school pupils: A comparison between early-English and control school pupils. Talk presented at the Conference on Multilingualism (COM 2017). Groningen, The Netherlands. 2017-11-06 - 2017-11-08.
  • Krutwig, J., Sadakata, M., Garcia-Cossio, E., Desain, P., & McQueen, J. M. (2017). Perception and production interactions in non-native speech category learning: Between neural and behavioural signatures. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above the participants’ baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset and lasting until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time-points in the control trials. A cluster-based permutation test showed that the event-related field responses differed between the perturbation and the control condition.
    This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset, and by a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input and hence instead reflect speech production processes. We suggest that the observed ERF responses in sensorimotor cortex are an index of the mismatch between the self-generated forward-model prediction of auditory input and the incoming auditory signal.
  • Goriot, C., Broersma, M., Van Hout, R., McQueen, J. M., & Unsworth, S. (2017). De relatie tussen vroeg vreemdetalenonderwijs en de ontwikkeling van het fonologisch bewustzijn [The relationship between early foreign-language education and the development of phonological awareness]. Talk presented at the Grote Taaldag 2017. Utrecht, The Netherlands. 2017-02-04.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2016). L1-effecten in een Engelse woordenschattaak: De PPVT-4 [L1 effects in an English vocabulary task: The PPVT-4]. Talk presented at the Grote Taaldag. Utrecht, The Netherlands. 2016-02-06.
  • Goriot, C., Van Hout, R., Broersma, M., McQueen, J. M., & Unsworth, S. (2016). Is there an effect of early-English education on the development of pupils’ executive functions? Talk presented at Psycholinguistics in Flanders (PiF 2016). Antwerp, Belgium. 2016-05-26 - 2016-05-27.
  • Goriot, C., Van Hout, R., Broersma, M., McQueen, J. M., & Unsworth, S. (2016). The relationship between early-English education and executive functions: Balance is key. Talk presented at the Conference on Multilingualism (COM 2016). Ghent, Belgium. 2016-09-11 - 2016-09-13.
  • Goriot, C., Van Hout, R., Broersma, M., Unsworth, S., & McQueen, J. M. (2016). The influence of cognates on Dutch pupils’ English vocabulary scores in the Peabody Picture Vocabulary Test. Talk presented at EuroSLA 26. Jyväskylä, Finland. 2016-08-24 - 2016-08-27.
  • Hintz, F., McQueen, J. M., & Scharenborg, O. (2016). Effects of frequency and neighborhood density on spoken-word recognition in noise: Evidence from perceptual identification in Dutch. Talk presented at the 22nd Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • McQueen, J. M., & Meyer, A. S. (2016). Cognitive architectures [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing speech production-perception interactions through individual differences. Talk presented at Psycholinguistics in Flanders. Marche-en-Famenne, Belgium. 2015-05-21 - 2015-05-22.

    Abstract

    This study aims to test recent theoretical frameworks in speech motor control which claim that speech production targets are specified in auditory terms. According to such frameworks, people with better auditory acuity should have more precise speech targets. Participants performed speech perception and production tasks in a counterbalanced order. Speech perception acuity was assessed using an adaptive speech discrimination task, where participants discriminated between stimuli on a /ɪ/-/ɛ/ and a /ɑ/-/ɔ/ continuum. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording of the vowels /ɪ/, /ɛ/, /ɑ/ and /ɔ/ in 288 pseudowords (18 per vowel, each of which was repeated 4 times). We predicted that speech production variability would correlate inversely with discrimination performance. Results confirmed this prediction as better discriminators had more distinctive vowel production targets. In addition, participants with higher auditory acuity produced vowels with smaller within-phoneme variability but spaced farther apart in vowel space. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interactions.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Brain areas involved in acquisition and consolidation of novel words with/without concepts across different age groups. Talk presented at the 22nd Annual Meeting of the Society for the Scientific Study of Reading. Hawaii, USA. 2015-07-15 - 2015-07-18.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Consolidation of novel word representation in young adults and children. Talk presented at the Magic Moments Workshop. Nijmegen, the Netherlands. 2015-03-10.
  • Lai, V. T., Kim, A., & McQueen, J. M. (2013). Sentential context modulates early phases of visual word recognition: Evidence from a training manipulation. Talk presented at the 26th Annual CUNY Conference on Human Sentence Processing [CUNY 2013]. Columbia, SC. 2013-03-21 - 2013-03-23.

    Abstract

    How does sentential context influence visual word recognition? Recent neural models suggest that single words are recognized via a hierarchy of local combination detectors [1]. Low-level features are extracted first by neurons in V1 in the visual cortex; features are then combined and fed into the higher level of letter fragments in V2, then letter shapes in V4, and so on. A recent EEG study examining word recognition in context has shown that contextually-driven anticipation can influence this hierarchy of visual word recognition early on [2]. Specifically, a minor mismatch between the predicted visual word form and the actual input (cake vs. ceke) can elicit brain responses ~130 ms after word onset [2].
  • Poellmann, K., McQueen, J. M., Baayen, R. H., & Mitterer, H. (2013). Adaptation to reductions: Challenges of regional variation. Talk presented at the Tagung experimentell arbeitender Psychologen (Conference of Experimental Psychologists) [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2013). Syntactic predictability facilitates the recognition of words in connected speech. Talk presented at the 18th Meeting of the European Society for Cognitive Psychology (ESCOP). Budapest, Hungary. 2013-08-29 - 2013-09-01.
  • Bakker, I., Takashima, A., van Hell, J., Janzen, G., & McQueen, J. M. (2012). Cross-modal effects on novel word consolidation. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    In line with two-stage models of memory, it has been proposed that memory traces for newly learned words are initially dependent on medial temporal structures and acquire neocortical, more lexical representations during the first night’s sleep after training (Davis & Gaskell, 2009). Only after sleep-dependent consolidation are novel words fully integrated into the lexicon and therefore able to enter into lexical competition with phonologically overlapping existing words. This effect, observable as a slowing down of responses to existing words with a novel competitor, has been demonstrated using various tasks including lexical decision, pause detection, semantic judgement, and word-spotting.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2012). How talker-adaptation helps listeners recognize reduced word-forms. Talk presented at the 164th Meeting of the Acoustical Society of America. Kansas City, Missouri. 2012-10-22 - 2012-10-26.
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2012). Behavioral and electrophysiological evidence for early vowel normalization. Talk presented at the 13th NVP Winter Conference on Cognition, Brain, and Behaviour (Dutch Psychonomic Society). Egmond aan Zee, The Netherlands. 2012-12-16 - 2012-12-17.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2012). Co-occurrence of reduced word forms in spontaneous speech. Talk presented at The 11th edition of the Psycholinguistics in Flanders conference (PiF). Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. Talk presented at 12th Conference on Laboratory Phonology. University of New Mexico in Albuquerque, NM. 2010-07-08 - 2010-07-10.
  • Mitterer, H., McQueen, J. M., Bosker, H. R., & Poellmann, K. (2010). Adapting to phonological reduction: Tracking how learning from talker-specific episodes helps listeners recognize reductions. Talk presented at the 5th annual meeting of the Schwerpunktprogramm (SPP) 1234/2: Phonological and phonetic competence: between grammar, signal processing, and neural activity. München, Germany.
  • Huettig, F., & McQueen, J. M. (2009). AM radio noise changes the dynamics of spoken word recognition. Talk presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009). Barcelona, Spain. 2009-09-09.

    Abstract

    Language processing does not take place in isolation from the sensory environment. Listeners are able to recognize spoken words in many different situations, ranging from carefully articulated and noise-free laboratory speech, through casual conversational speech in a quiet room, to degraded conversational speech in a busy train station. For listeners to be able to recognize speech optimally in each of these listening situations, they must be able to adapt to the constraints of each situation. We investigated this flexibility by comparing the dynamics of the spoken-word recognition process in clear speech and speech disrupted by radio noise. In Experiment 1, Dutch participants listened to clearly articulated spoken Dutch sentences which each included a critical word while their eye movements to four visual objects presented on a computer screen were measured. There were two critical conditions. In the first, the objects included a cohort competitor (e.g., parachute, “parachute”) with the same onset as the critical spoken word (e.g., paraplu, “umbrella”) and three unrelated distractors. In the second condition, a rhyme competitor (e.g., hamer, “hammer”) of the critical word (e.g., kamer, “room”) was present in the display, again with three distractors. To maximize competitor effects, pictures of the critical words themselves were not present in the displays on the experimental trials (e.g., there was no umbrella in the display with the 'paraplu' sentence) and a passive listening task was used (Huettig & McQueen, 2007). Experiment 2 was identical to Experiment 1 except that phonemes in the spoken sentences were replaced with radio-signal noises (as in AM radio listening conditions). In each sentence, two, three or four phonemes were replaced with noises. The sentential position of these replacements was unpredictable, but the adjustments were always made to onset phonemes. The critical words (and the immediately surrounding words) were not changed.
    The question was whether listeners could learn that, under these circumstances, onset information is less reliable. We predicted that participants would look less at the cohort competitors (the initial match to the competitor is less good) and more at the rhyme competitors (the initial mismatch is less bad). We observed a significant experiment by competitor type interaction. In Experiment 1, participants fixated both kinds of competitors more than unrelated distractors, but there were more and earlier looks to cohort competitors than to rhyme competitors (Allopenna et al., 1998). In Experiment 2, participants still fixated cohort competitors more than rhyme competitors, but the early cohort effect was reduced and the rhyme effect was stronger and occurred earlier. These results suggest that AM radio noise changes the dynamics of spoken word recognition. The well-attested finding of stronger reliance on word-onset overlap in speech recognition appears to be due in part to the use of clear speech in most experiments. When onset information becomes less reliable, listeners appear to depend on it less. A core feature of the speech-recognition system thus appears to be its flexibility: listeners are able to adjust the perceptual weight they assign to different parts of incoming spoken language.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate context affects online word segmentation: Evidence from eye-tracking. Talk presented at "Speech perception and production in the brain" Summer Workshop of the Dutch Phonetic Society (NVFW). Leiden, the Netherlands. 2009-06-05.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2008). Speaking rate affects the perception of word boundaries in online speech perception. Talk presented at 14th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2008). Cambridge, UK. 2008-09-04 - 2008-09-06.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2007). Lexical-stress information rapidly modulates spoken-word recognition. Talk presented at Dag van de Fonetiek. Utrecht, The Netherlands. 2007-12-20.
