James McQueen

Publications

  • Ekerdt, C., Menks, W. M., Fernández, G., McQueen, J. M., Takashima, A., & Janzen, G. (2024). White matter connectivity linked to novel word learning in children. Brain Structure & Function, 229, 2461-2477. doi:10.1007/s00429-024-02857-6.

    Abstract

    Children and adults are excellent word learners. Increasing evidence suggests that the neural mechanisms that allow us to learn words change with age. In a recent fMRI study from our group, several brain regions exhibited age-related differences when accessing newly learned words in a second language (L2; Takashima et al. Dev Cogn Neurosci 37, 2019). Namely, while the Teen group (aged 14–16 years) activated more left frontal and parietal regions, the Young group (aged 8–10 years) activated right frontal and parietal regions. In the current study we analyzed the structural connectivity data from the aforementioned study, examining the white matter connectivity of the regions that showed age-related functional activation differences. Age group differences in streamline density as well as correlations with L2 word learning success and their interaction were examined. The Teen group showed stronger connectivity than the Young group in the right arcuate fasciculus (AF). Furthermore, white matter connectivity and memory for L2 words across the two age groups correlated in the left AF and the right anterior thalamic radiation (ATR) such that higher connectivity in the left AF and lower connectivity in the right ATR were related to better memory for L2 words. Additionally, connectivity in the area of the right AF that exhibited age-related differences predicted word learning success. The finding that across the two age groups, stronger connectivity is related to better memory for words lends further support to the hypothesis that the prolonged maturation of the prefrontal cortex, here in the form of structural connectivity, plays an important role in the development of memory.

    Additional information

    supplementary information
  • Hintz, F., McQueen, J. M., & Meyer, A. S. (2024). Using psychometric network analysis to examine the components of spoken word recognition. Journal of Cognition, 7(1): 10. doi:10.5334/joc.340.

    Abstract

    Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.

    Additional information

    network analysis of dataset A and B
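    The "unique variance between pairs of scores, controlling for variance shared with the other scores" that this network analysis maps can be illustrated with partial correlations derived from the inverse of the correlation matrix. The sketch below uses simulated scores; the variable names are illustrative, not the study's actual measures:

```python
# Sketch of the core computation behind a psychometric network:
# partial correlations between test scores via the precision matrix.
# Simulated data only -- not the study's measures or pipeline.
import numpy as np

rng = np.random.default_rng(2)
n = 281  # participants, as in the study

# Simulated scores: processing speed drives word recognition; reasoning is independent
speed = rng.normal(size=n)
word_recognition = 0.6 * speed + rng.normal(scale=0.8, size=n)
reasoning = rng.normal(size=n)

scores = np.column_stack([speed, word_recognition, reasoning])
R = np.corrcoef(scores, rowvar=False)
P = np.linalg.inv(R)  # precision matrix

# Partial correlation between i and j, controlling for all other variables:
# -P[i, j] / sqrt(P[i, i] * P[j, j])
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)
np.fill_diagonal(partial, 1.0)
print(np.round(partial, 2))
```

    In the resulting matrix, only the speed/word-recognition pair retains a sizable edge: the direct link survives partialling, while pairs that merely share variance with other scores shrink toward zero.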
  • Hintz, F., Shkaravska, O., Dijkhuis, M., Van 't Hoff, V., Huijsmans, M., Van Dongen, R. C., Voeteé, L. A., Trilsbeek, P., McQueen, J. M., & Meyer, A. S. (2024). IDLaS-NL – A platform for running customized studies on individual differences in Dutch language skills via the internet. Behavior Research Methods, 56(3), 2422-2436. doi:10.3758/s13428-023-02156-8.

    Abstract

    We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions and to determine their order. Moreover, for standardized administration the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV-file output via email. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.
  • Koning, M. E. E., Wyman, N. K., Menks, W. M., Ekerdt, C., Fernández, G., Kidd, E., Lemhöfer, K., McQueen, J. M., & Janzen, G. (2024). The relationship between brain structure and function during novel grammar learning across development. Cerebral Cortex, 34(12): bhae488. doi:10.1093/cercor/bhae488.

    Abstract

    In this study, we explored the relationship between developmental differences in gray matter structure and grammar learning ability in 159 Dutch-speaking individuals (8 to 25 yr). The data were collected as part of a recent large-scale functional MRI study (Menks WM, Ekerdt C, Lemhöfer K, Kidd E, Fernández G, McQueen JM, Janzen G. Developmental changes in brain activation during novel grammar learning in 8–25-year-olds. Dev Cogn Neurosci. 2024;66:101347. https://doi.org/10.1016/j.dcn.2024.101347) in which participants implicitly learned Icelandic morphosyntactic rules and performed a grammaticality judgment task in the scanner. Behaviorally, Menks et al. (2024) showed that grammaticality judgment task performance increased steadily from 8 to 15.4 yr, after which age had no further effect. We show in the current study that this age-related grammaticality judgment task performance was negatively related to cortical gray matter volume and cortical thickness in many clusters throughout the brain. Hippocampal volume was positively related to age-related grammaticality judgment task performance and L2 (English) vocabulary knowledge. Furthermore, we found that grammaticality judgment task performance, L2 grammar proficiency, and L2 vocabulary knowledge were positively related to gray matter maturation within parietal regions, overlapping with the functional MRI clusters that were reported previously in Menks et al. (2024) and which showed increased brain activation in relation to grammar learning. We propose that this overlap in functional and structural results indicates that brain maturation in parietal regions plays an important role in second language learning.

    Additional information

    supplements
  • Menks, W. M., Ekerdt, C., Lemhöfer, K., Kidd, E., Fernández, G., McQueen, J. M., & Janzen, G. (2024). Developmental changes in brain activation during novel grammar learning in 8-25-year-olds. Developmental Cognitive Neuroscience, 66: 101347. doi:10.1016/j.dcn.2024.101347.

    Abstract

    While it is well established that grammar learning success varies with age, the cause of this developmental change is largely unknown. This study examined functional MRI activation across a broad developmental sample of 165 Dutch-speaking individuals (8-25 years) as they were implicitly learning a new grammatical system. This approach allowed us to assess the direct effects of age on grammar learning ability while exploring its neural correlates. In contrast to the alleged advantage of child language learners over adults, we found that adults outperformed children. Moreover, our behavioral data showed a sharp discontinuity in the relationship between age and grammar learning performance: there was a strong positive linear correlation between 8 and 15.4 years of age, after which age had no further effect. Neurally, our data indicate two important findings: (i) during grammar learning, adults and children activate similar brain regions, suggesting continuity in the neural networks that support initial grammar learning; and (ii) activation level is age-dependent, with children showing less activation than older participants. We suggest that these age-dependent processes may constrain developmental effects in grammar learning. The present study provides new insights into the neural basis of age-related differences in grammar learning in second language acquisition.

    Additional information

    supplement
  • Mickan, A., Slesareva, E., McQueen, J. M., & Lemhöfer, K. (2024). New in, old out: Does learning a new language make you forget previously learned foreign languages? Quarterly Journal of Experimental Psychology, 77(3), 530-550. doi:10.1177/17470218231181380.

    Abstract

    Anecdotal evidence suggests that learning a new foreign language (FL) makes you forget previously learned FLs. To seek empirical evidence for this claim, we tested whether learning words in a previously unknown L3 hampers subsequent retrieval of their L2 translation equivalents. In two experiments, Dutch native speakers with knowledge of English (L2), but not Spanish (L3), first completed an English vocabulary test, based on which 46 participant-specific, known English words were chosen. Half of those were then learned in Spanish. Finally, participants’ memory for all 46 English words was probed again in a picture naming task. In Experiment 1, all tests took place within one session. In Experiment 2, we separated the English pre-test from Spanish learning by a day and manipulated the timing of the English post-test (immediately after learning vs. 1 day later). By separating the post-test from Spanish learning, we asked whether consolidation of the new Spanish words would increase their interference strength. We found significant main effects of interference in naming latencies and accuracy: Participants sped up less and were less accurate at recalling words in English for which they had learned Spanish translations, compared with words for which they had not. Consolidation time did not significantly affect these interference effects. Thus, learning a new language indeed comes at the cost of subsequent retrieval ability in other FLs. Such interference effects set in immediately after learning and do not need time to emerge, even when the other FL has been known for a long time.

    Additional information

    supplementary material
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.

    Abstract

    Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers.
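    The per-talker cue-weighting analysis described above can be sketched with a Linear Discriminant Analysis whose standardized coefficients index how strongly a talker's productions separate the stress classes along each cue. The data below are simulated stand-ins, not the study's acoustic measurements:

```python
# Sketch of per-talker cue weighting via LDA (simulated data; the
# published study used the authors' own acoustic measurements).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 40  # stress-minimal-pair tokens from one hypothetical talker

# Simulated cues per token: mean F0 (Hz), intensity (dB), duration (ms)
X = np.column_stack([
    rng.normal(180, 20, n),   # mean F0
    rng.normal(65, 5, n),     # intensity
    rng.normal(120, 15, n),   # duration
])
y = rng.integers(0, 2, n)     # 0 = initial stress, 1 = final stress

# Give stressed syllables higher values on every cue so LDA has signal
X[y == 1] += [25, 4, 30]

lda = LinearDiscriminantAnalysis().fit(X, y)
# Scale raw coefficients by cue SDs so the weights are comparable:
# larger weight = this talker relies more on that cue
weights = lda.coef_[0] * X.std(axis=0)
for cue, w in zip(["mean F0", "intensity", "duration"], weights):
    print(f"{cue}: {w:.2f}")
```

    Repeating this fit per talker yields one weight profile per individual, which is how between-talker differences in cue use (and classes of cue-weighting tendencies) can be compared.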
  • Severijnen, G. G. A., Gärtner, V. M., Walther, R. F. E., & McQueen, J. M. (2024). Talker-specific perceptual learning about lexical stress: stability over time. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 657-661). doi:10.21437/SpeechProsody.2024-133.

    Abstract

    Talkers vary in how they speak, resulting in acoustic variability in segments and prosody. Previous studies showed that listeners deal with segmental variability through perceptual learning and that these learning effects are stable over time. The present study examined whether this is also true for lexical stress variability. Listeners heard Dutch minimal pairs (e.g., VOORnaam vs. voorNAAM, ‘first name’ vs. ‘respectable’) spoken by two talkers. Half of the participants heard Talker 1 using only F0 to signal lexical stress and Talker 2 using only intensity. The other half heard the reverse. After a learning phase, participants were tested on words spoken by these talkers with conflicting stress cues (‘mixed items’; e.g., Talker 1 saying voornaam with F0 signaling initial stress and intensity signaling final stress). We found that, despite the conflicting cues, listeners perceived these items following what they had learned. For example, participants hearing the example mixed item described above who had learned that Talker 1 used F0 perceived initial stress (VOORnaam) but those who had learned that Talker 1 used intensity perceived final stress (voorNAAM). Crucially, this result was still present in a delayed test phase, showing that talker-specific learning about lexical stress is stable over time.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).

    Abstract

    The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
  • Wagner, M. A., Broersma, M., McQueen, J. M., Van Hout, R., & Lemhöfer, K. (2024). The case for a quantitative approach to the study of nonnative accent features. Language and Speech. Advance online publication. doi:10.1177/00238309241256653.

    Abstract

    Research with nonnative speech spans many different linguistic branches and topics. Most studies include one or a few well-known features of a particular accent. However, due to a lack of empirical studies, little is known about how common these features are among nonnative speakers or how uncommon they are among native speakers. Moreover, it remains to be seen whether findings from such studies generalize to lesser-known features. Here, we demonstrate a quantitative approach to study nonnative accent features using Dutch-accented English as an example. By analyzing the phonetic distances between transcriptions of speech samples, this approach can identify the features that best distinguish nonnative from native speech. In addition, we describe a method to test hypotheses about accent features by checking whether the prevalence of the features overall varies between native and nonnative speakers. Furthermore, we include English speakers from the United States and United Kingdom and native Dutch speakers from Belgium and The Netherlands to address the issue of regional accent variability in both the native and target language. We discuss the results concerning three observed features. Overall, the results provide empirical support for some well-known features of Dutch-accented English, but suggest that others may be infrequent among nonnatives or in fact frequent among natives. In addition, the findings reveal potentially new accent features, and factors that may modulate the expression of known features. Our study demonstrates a fruitful approach to study nonnative accent features that has the potential to expand our understanding of the phenomenon of accent.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2019). Consistency influences altered auditory feedback processing. Quarterly Journal of Experimental Psychology, 72(10), 2371-2379. doi:10.1177/1747021819838939.

    Abstract

    Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that consistency in previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, whereas the others remained unperturbed (the inconsistent condition). The results showed that speakers’ responses were affected by feedback consistency, with stronger speech changes in the consistent condition compared with the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, which are not in line with all current models.
  • Grey, S., Schubel, L. C., McQueen, J. M., & Van Hell, J. G. (2019). Processing foreign-accented speech in a second language: Evidence from ERPs during sentence comprehension in bilinguals. Bilingualism: Language and Cognition, 22(5), 912-929. doi:10.1017/S1366728918000937.

    Abstract

    This study examined electrophysiological correlates of sentence comprehension of native-accented and foreign-accented speech in a second language (L2), for sentences produced in a foreign accent different from that associated with the listeners’ L1. Bilingual speaker-listeners process different accents in their L2 conversations, but the effects on real-time L2 sentence comprehension are unknown. Dutch–English bilinguals listened to native American-English accented sentences and foreign (and for them unfamiliarly-accented) Chinese-English accented sentences while EEG was recorded. Behavioral sentence comprehension was highly accurate for both native-accented and foreign-accented sentences. ERPs showed different patterns for L2 grammar and semantic processing of native- and foreign-accented speech. For grammar, only native-accented speech elicited an Nref. For semantics, both native- and foreign-accented speech elicited an N400 effect, but with a delayed onset across both accent conditions. These findings suggest that the way listeners comprehend native- and foreign-accented sentences in their L2 depends on their familiarity with the accent.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2019). Comparing effects of instruction on word meaning and word form on early literacy abilities in kindergarten. Early Education and Development, 30(3), 375-399. doi:10.1080/10409289.2018.1547563.

    Abstract

    Research Findings: The present study compared effects of explicit instruction on and practice with the phonological form of words (form-focused instruction) versus explicit instruction on and practice with the meaning of words (meaning-focused instruction). Instruction was given via interactive storybook reading in the kindergarten classroom of children learning Dutch. We asked whether the 2 types of instruction had different effects on vocabulary development and 2 precursors of reading ability—phonological awareness and letter knowledge—and we examined effects on these measures of the ability to learn new words with minimal acoustic-phonetic differences. Learners showed similar receptive target-word vocabulary gain after both types of instruction, but learners who received form-focused vocabulary instruction showed more gain in semantic knowledge of target vocabulary, phonological awareness, and letter knowledge than learners who received meaning-focused vocabulary instruction. Level of ability to learn pairs of words with minimal acoustic-phonetic differences predicted gain in semantic knowledge of target vocabulary and in letter knowledge in the form-focused instruction group only. Practice or Policy: A focus on the form of words during instruction appears to have benefits for young children learning vocabulary.
  • McQueen, J. M., & Meyer, A. S. (2019). Key issues and future directions: Towards a comprehensive cognitive architecture for language use. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 85-96). Cambridge, MA: MIT Press.
  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2019). Bridging the gap between second language acquisition research and memory science: The case of foreign language attrition. Frontiers in Human Neuroscience, 13: 397. doi:10.3389/fnhum.2019.00397.

    Abstract

    The field of second language acquisition (SLA) is by nature of its subject a highly interdisciplinary area of research. Learning a (foreign) language, for example, involves encoding new words, consolidating and committing them to long-term memory, and later retrieving them. All of these processes have direct parallels in the domain of human memory and have been thoroughly studied by researchers in that field. Yet, despite these clear links, the two fields have largely developed in parallel and in isolation from one another. The present paper aims to promote more cross-talk between SLA and memory science. We focus on foreign language (FL) attrition as an example of a research topic in SLA where the parallels with memory science are especially apparent. We discuss evidence that suggests that competition between languages is one of the mechanisms of FL attrition, paralleling the interference process thought to underlie forgetting in other domains of human memory. Backed up by concrete suggestions, we advocate the use of paradigms from the memory literature to study these interference effects in the language domain. In doing so, we hope to facilitate future cross-talk between the two fields, and to further our understanding of FL attrition as a memory phenomenon.
  • Schuerman, W. L., McQueen, J. M., & Meyer, A. S. (2019). Speaker statistical averageness modulates word recognition in adverse listening conditions. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1203-1207). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    We tested whether statistical averageness (SA) at the level of the individual speaker could predict a speaker’s intelligibility. 28 female and 21 male speakers of Dutch were recorded producing 336 sentences, each containing two target nouns. Recordings were compared to those of all other same-sex speakers using dynamic time warping (DTW). For each sentence, the DTW distance constituted a metric of phonetic distance from one speaker to all other speakers. SA comprised the average of these distances. Later, the same participants performed a word recognition task on the target nouns in the same sentences, under three degraded listening conditions. In all three conditions, accuracy increased with SA. This held even when participants listened to their own utterances. These findings suggest that listeners process speech with respect to the statistical properties of the language spoken in their community, rather than using their own speech as a reference.
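    The statistical-averageness metric lends itself to a compact sketch: compute pairwise DTW distances between speakers' feature sequences, then average each speaker's distances to all others. The features below are random stand-ins, and the DTW implementation is a minimal textbook version, not the study's pipeline:

```python
# Sketch of "statistical averageness" (SA): each speaker's mean DTW
# distance to all other speakers. Random feature sequences stand in
# for the acoustic representations used in the study.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

rng = np.random.default_rng(1)
# 5 hypothetical speakers, each with one (frames x features) utterance
speakers = [rng.normal(size=(int(rng.integers(20, 30)), 12)) for _ in range(5)]

# SA per speaker: mean DTW distance to every other speaker
# (in the study, lower distance = more "average" = more intelligible)
sa = [
    np.mean([dtw_distance(s, t) for j, t in enumerate(speakers) if j != i])
    for i, s in enumerate(speakers)
]
print(sa)
```

    DTW aligns sequences of unequal length before summing frame costs, which is why it suits comparing the same sentence spoken at different rates by different speakers.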
  • Solberg Økland, H., Todorović, A., Lüttke, C. S., McQueen, J. M., & De Lange, F. P. (2019). Combined predictive effects of sentential and visual constraints in early audiovisual speech processing. Scientific Reports, 9: 7870. doi:10.1038/s41598-019-44311-2.

    Abstract

    In language comprehension, a variety of contextual cues act in unison to render upcoming words more or less predictable. As a sentence unfolds, we use prior context (sentential constraints) to predict what the next words might be. Additionally, in a conversation, we can predict upcoming sounds through observing the mouth movements of a speaker (visual constraints). In electrophysiological studies, effects of visual constraints have typically been observed early in language processing, while effects of sentential constraints have typically been observed later. We hypothesized that the visual and the sentential constraints might feed into the same predictive process such that effects of sentential constraints might also be detectable early in language processing through modulations of the early effects of visual salience. We presented participants with audiovisual speech while recording their brain activity with magnetoencephalography. Participants saw videos of a person saying sentences where the last word was either sententially constrained or not, and began with a salient or non-salient mouth movement. We found that sentential constraints indeed exerted an early (N1) influence on language processing. Sentential modulations of the N1 visual predictability effect were visible in brain areas associated with semantic processing, and were differently expressed in the two hemispheres. In the left hemisphere, visual and sentential constraints jointly suppressed the auditory evoked field, while the right hemisphere was sensitive to visual constraints only in the absence of strong sentential constraints. These results suggest that sentential and visual constraints can jointly influence even very early stages of audiovisual speech comprehension.
  • Takashima, A., Bakker-Marshall, I., Van Hell, J. G., McQueen, J. M., & Janzen, G. (2019). Neural correlates of word learning in children. Developmental Cognitive Neuroscience, 37: 100649. doi:10.1016/j.dcn.2019.100649.

    Abstract

    Memory representations of words are thought to undergo changes with consolidation: Episodic memories of novel words are transformed into lexical representations that interact with other words in the mental dictionary. Behavioral studies have shown that this lexical integration process is enhanced when there is more time for consolidation. Neuroimaging studies have further revealed that novel word representations are initially represented in a hippocampally-centered system, whereas left posterior middle temporal cortex activation increases with lexicalization. In this study, we measured behavioral and brain responses to newly-learned words in children. Two groups of Dutch children, aged between 8-10 and 14-16 years, were trained on 30 novel Japanese words depicting novel concepts. Children were tested on word-forms, word-meanings, and the novel words’ influence on existing word processing immediately after training, and again after a week. In line with the adult findings, hippocampal involvement decreased with time. Lexical integration, however, was not observed immediately or after a week, neither behaviorally nor neurally. It appears that time alone is not always sufficient for lexical integration to occur. We suggest that other factors (e.g., the novelty of the concepts and familiarity with the language the words are derived from) might also influence the integration process.

    Additional information

    Supplementary data
  • Van Goch, M. M., Verhoeven, L., & McQueen, J. M. (2019). Success in learning similar-sounding words predicts vocabulary depth above and beyond vocabulary breadth. Journal of Child Language, 46(1), 184-197. doi:10.1017/S0305000918000338.

    Abstract

    In lexical development, the specificity of phonological representations is important. The ability to build phonologically specific lexical representations predicts the number of words a child knows (vocabulary breadth), but it is not clear if it also fosters how well words are known (vocabulary depth). Sixty-six children were studied in kindergarten (age 5;7) and first grade (age 6;8). The predictive value of the ability to learn phonologically similar new words, phoneme discrimination ability, and phonological awareness on vocabulary breadth and depth were assessed using hierarchical regression. Word learning explained unique variance in kindergarten and first-grade vocabulary depth, over the other phonological factors. It did not explain unique variance in vocabulary breadth. Furthermore, even after controlling for kindergarten vocabulary breadth, kindergarten word learning still explained unique variance in first-grade vocabulary depth. Skill in learning phonologically similar words appears to predict knowledge children have about what words mean.
  • Wagner, M. A., Broersma, M., McQueen, J. M., & Lemhöfer, K. (2019). Imitating speech in an unfamiliar language and an unfamiliar non-native accent in the native language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1362-1366). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This study concerns individual differences in speech imitation ability and the role that lexical representations play in imitation. We examined 1) whether imitation of sounds in an unfamiliar language (L0) is related to imitation of sounds in an unfamiliar non-native accent in the speaker’s native language (L1) and 2) whether it is easier or harder to imitate speech when you know the words to be imitated. Fifty-nine native Dutch speakers imitated words with target vowels in Basque (/a/ and /e/) and Greek-accented Dutch (/i/ and /u/). Spectral and durational analyses of the target vowels revealed no relationship between the success of L0 and L1 imitation and no difference in performance between tasks (i.e., L1 imitation was neither aided nor blocked by lexical knowledge about the correct pronunciation). The results suggest instead that the relationship of the vowels to native phonological categories plays a bigger role in imitation.
  • Cutler, A., McQueen, J. M., Norris, D., & Somejuan, A. (2001). The roll of the silly ball. In E. Dupoux (Ed.), Language, brain and cognitive development: Essays in honor of Jacques Mehler (pp. 181-194). Cambridge, MA: MIT Press.
  • McQueen, J. M., Norris, D., & Cutler, A. (2001). Can lexical knowledge modulate prelexical representations over time? In R. Smits, J. Kingston, T. Nearey, & R. Zondervan (Eds.), Proceedings of the workshop on Speech Recognition as Pattern Classification (SPRAAC) (pp. 145-150). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    The results of a study on perceptual learning are reported. Dutch subjects made lexical decisions on a list of words and nonwords. Embedded in the list were either [f]- or [s]-final words in which the final fricative had been replaced by an ambiguous sound, midway between [f] and [s]. One group of listeners heard ambiguous [f]-final Dutch words like [kara?] (based on karaf, carafe) and unambiguous [s]-final words (e.g., karkas, carcase). A second group heard the reverse (e.g., ambiguous [karka?] and unambiguous karaf). After this training phase, listeners labelled ambiguous fricatives on an [f]-[s] continuum. The subjects who had heard [?] in [f]-final words categorised these fricatives as [f] reliably more often than those who had heard [?] in [s]-final words. These results suggest that speech recognition is dynamic: the system adjusts to the constraints of each particular listening situation. The lexicon can provide this adjustment process with a training signal.
  • McQueen, J. M., & Cutler, A. (Eds.). (2001). Spoken word access processes. Hove, UK: Psychology Press.
  • McQueen, J. M., & Cutler, A. (2001). Spoken word access processes: An introduction. Language and Cognitive Processes, 16, 469-490. doi:10.1080/01690960143000209.

    Abstract

    We introduce the papers in this special issue by summarising the current major issues in spoken word recognition. We argue that a full understanding of the process of lexical access during speech comprehension will depend on resolving several key representational issues: what is the form of the representations used for lexical access; how is phonological information coded in the mental lexicon; and how is the morphological and semantic information about each word stored? We then discuss a number of distinct access processes: competition between lexical hypotheses; the computation of goodness-of-fit between the signal and stored lexical knowledge; segmentation of continuous speech; whether the lexicon influences prelexical processing through feedback; and the relationship of form-based processing to the processes responsible for deriving an interpretation of a complete utterance. We conclude that further progress may well be made by swapping ideas among the different sub-domains of the discipline.
  • McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45, 103-132. doi:10.1006/jmla.2000.2763.

    Abstract

    In two word-spotting experiments, Japanese listeners detected Japanese words faster in vowel contexts (e.g., agura, to sit cross-legged, in oagura) than in consonant contexts (e.g., tagura). In the same experiments, however, listeners spotted words in vowel contexts (e.g., saru, monkey, in sarua) no faster than in moraic nasal contexts (e.g., saruN). In a third word-spotting experiment, words like uni, sea urchin, followed contexts consisting of a consonant-consonant-vowel mora (e.g., gya) plus either a moraic nasal (gyaNuni), a vowel (gyaouni) or a consonant (gyabuni). Listeners spotted words as easily in the first as in the second context (where in each case the target words were aligned with mora boundaries), but found it almost impossible to spot words in the third (where there was a single consonant, such as the [b] in gyabuni, between the beginning of the word and the nearest preceding mora boundary). Three control experiments confirmed that these effects reflected the relative ease of segmentation of the words from their contexts. We argue that the listeners showed sensitivity to the viability of sound sequences as possible Japanese words in the way that they parsed the speech into words. Since single consonants are not possible Japanese words, the listeners avoided lexical parses including single consonants and thus had difficulty recognizing words in the consonant contexts. Even though moraic nasals are also impossible words, they were not difficult segmentation contexts because, as with the vowel contexts, the mora boundaries between the contexts and the target words signaled likely word boundaries. Moraic rhythm appears to provide Japanese listeners with important segmentation cues.
  • Norris, D., McQueen, J. M., Cutler, A., Butterfield, S., & Kearns, R. (2001). Language-universal constraints on speech segmentation. Language and Cognitive Processes, 16, 637-660. doi:10.1080/01690960143000119.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and any likely location of a word boundary, as cued in the speech signal. The experiments examined cases where the residue was either a CVC syllable with a schwa, or a CV syllable with a lax vowel. Although neither of these syllable contexts is a possible lexical word in English, word-spotting in both contexts was easier than in a context consisting of a single consonant. Two control lexical-decision experiments showed that the word-spotting results reflected the relative segmentation difficulty of the words in different contexts. The PWC appears to be language-universal rather than language-specific.
  • Van Alphen, P. M., & McQueen, J. M. (2001). The time-limited influence of sentential context on function word identification. Journal of Experimental Psychology: Human Perception and Performance, 27, 1057-1071. doi:10.1037/0096-1523.27.5.1057.

    Abstract

    Sentential context effects on the identification of the Dutch function words te (to) and de (the) were examined. In Experiment 1, listeners labeled words on a [tə]-[də] continuum more often as te when the context was te biased (Ik probeer [?ə] schieten [I try to/the shoot]) than when it was de biased (Ik probeer [?ə] schoenen [I try to/the shoes]). The effect was weaker in slower responses. In Experiment 2, disambiguation began later, in the second word after [?ə]. There was a weak context effect only in the slower responses. In Experiments 3 and 4, disambiguation occurred on the word before [?ə]: There was no context effect when one set of sentences was used, but there was an effect (larger in the faster responses) when more sentences were used. Syntactic processing affects word identification only within a limited time frame. It appears to do so not by influencing lexical access processes through feedback but, instead, by biasing decision making.