James McQueen

Presentations

  • Bakker, I., Takashima, A., van Hell, J., Janzen, G., & McQueen, J. M. (2012). Cross-modal effects on novel word consolidation. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    In line with two-stage models of memory, it has been proposed that memory traces for newly learned words are
    initially dependent on medial temporal structures and acquire neocortical, more lexical representations during the first
    night’s sleep after training (Davis & Gaskell, 2009). Only after sleep-dependent consolidation are novel words fully
    integrated into the lexicon and therefore able to enter into lexical competition with phonologically overlapping
    existing words. This effect, observable as a slowing down of responses to existing words with a novel competitor, has
    been demonstrated using various tasks including lexical decision, pause detection, semantic judgement, and word-spotting.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2012). How talker-adaptation helps listeners recognize reduced word-forms. Talk presented at the 164th Meeting of the Acoustical Society of America. Kansas City, Missouri. 2012-10-22 - 2012-10-26.
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2012). Behavioral and electrophysiological evidence for early vowel normalization. Talk presented at the 13th NVP Winter Conference on Cognition, Brain, and Behaviour (Dutch Psychonomic Society). Egmond aan Zee, the Netherlands. 2012-12-16 - 2012-12-17.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2012). Neural networks involved in retrieval of newly learned words and effect of overnight consolidation: An fMRI study. Poster presented at the 42nd annual meeting of the Society for Neuroscience (Neuroscience 2012), New Orleans, LA.

    Abstract

    Declarative memory appears to involve two separate systems, with more episodically oriented memories coded in a hippocampal network, and more non-episodic or semantic memories coded in a neocortical network. Previous work (e.g. Dumay & Gaskell, 2007) has shown a role of sleep in the lexicalization of novel words. In line with the two-stage model of memory proposed by McClelland and colleagues (1995), the memory traces for novel words are initially dependent on hippocampal structures. However, a shift towards neocortical representations occurs during the first night’s sleep after training. This shift, or integration of newly learned words into the lexicon (lexicalization), can be observed behaviourally as lexical competition, where novel words slow down recognition of phonologically overlapping known words. To extend understanding of how newly learned words are incorporated into the semantic system, we conducted an fMRI study to elucidate the neural processes underlying sleep-dependent lexicalization, with the additional aim of investigating multimodal information integration in word learning. As a first step towards studying the acquisition of multimodal word meanings, we familiarized subjects with the phonological form of 40 novel words, of which 20 were associated with pictures of novel objects (“picture-associated words”) and 20 were not (“form-only words”). Immediately after training (Day1) and on the following day (Day2), we recorded the BOLD response to auditorily presented “trained novel words”, “untrained novel words” and “existing words”, and administered a lexical competition task to test the effect of novel words on phonologically overlapping existing words. Behavioural data showed enhanced performance in recognition and recall of novel words after sleep, with a greater benefit for picture-associated words. However, lexical competition on Day2 was greater for the form-only words.
The fMRI data showed more involvement of the hippocampal network for picture-associated words than for form-only words. In contrast, form-only words activated the semantic memory network already on Day1, whereas this was more apparent on Day2 for picture-associated words. This implies that the consolidation/lexicalization process differs depending on the degree of involvement of the two memory systems, with a greater involvement of the hippocampal system for picture-associated words. Stronger episodic memory traces might slow down the overnight shift of the novel picture-associated words to the lexical network relative to the faster integration into this network of the form-only words.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2012). Effects of repetition and temporal distance on vowel reduction in spontaneous speech. Poster presented at the 13th Conference on Laboratory Phonology (LabPhon 2012), Stuttgart, Germany.
  • Viebahn, M. C., Ernestus, M., & McQueen, J. M. (2012). Co-occurrence of reduced word forms in natural speech. Poster presented at INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association, Portland, OR.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2012). Co-occurrence of reduced word forms in spontaneous speech. Talk presented at The 11th edition of the Psycholinguistics in Flanders conference (PiF). Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Hanulikova, A., Davidson, D. J., McQueen, J. M., & Mitterer, H. (2008). Native and non-native segmentation of continuous speech. Poster presented at XXIX International Congress of Psychology [ICP 2008], Berlin.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2008). Speaking rate affects the perception of word boundaries in online speech perception. Talk presented at 14th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2008). Cambridge, UK. 2008-09-04 - 2008-09-06.
  • Sjerps, M. J., & McQueen, J. M. (2008). The role of speech-specific signal characteristics in vowel normalization. Poster presented at 156th Annual Meeting of the Acoustical Society of America, Miami, FL.

    Abstract

    Listeners adjust their vowel perception to the characteristics of a particular speaker. Six experiments investigated whether speech-specific signal characteristics influence the occurrence and amount of such normalization. Previous findings were replicated with first formant (F1) manipulations of naturally recorded speech; target sounds on a [pIt] (low F1) to [pEt] (high F1) continuum were more often labeled as [pIt] after a precursor sentence with a high F1, and more often labeled as [pEt] after one with a low F1 (Exp. 1). Normalization was also observed, though to a lesser extent, when these materials were spectrally rotated, and hence sounded unlike speech (Exp. 2). No normalization occurred when, in addition to spectral rotation, the silent intervals and pitch movement were removed and the syllables were temporally reversed (Exp. 3), despite spectral similarity of these precursors to those in Exp. 2. Reintroducing only pitch movement (Exp. 4), or silent intervals (Exp. 5), or spectrally rotating the stimuli back (Exp. 6), did not result in normalization, so none of these factors alone accounts for the effect's disappearance in Exp. 3. These results show that normalization is not specific to speech, but still depends on more than the overall spectral properties of the preceding acoustic context.
