Ekerdt, C., Menks, W. M., Fernández, G., McQueen, J. M., Takashima, A., & Janzen, G. (2024). White matter connectivity linked to novel word learning in children. Brain Structure & Function, 229, 2461-2477. doi:10.1007/s00429-024-02857-6.
Abstract
Children and adults are excellent word learners. Increasing evidence suggests that the neural mechanisms that allow us to learn words change with age. In a recent fMRI study from our group, several brain regions exhibited age-related differences when accessing newly learned words in a second language (L2; Takashima et al. Dev Cogn Neurosci 37, 2019). Namely, while the Teen group (aged 14–16 years) activated more left frontal and parietal regions, the Young group (aged 8–10 years) activated right frontal and parietal regions. In the current study we analyzed the structural connectivity data from the aforementioned study, examining the white matter connectivity of the regions that showed age-related functional activation differences. Age group differences in streamline density, as well as correlations with L2 word learning success and their interaction, were examined. The Teen group showed stronger connectivity than the Young group in the right arcuate fasciculus (AF). Furthermore, white matter connectivity and memory for L2 words across the two age groups correlated in the left AF and the right anterior thalamic radiation (ATR), such that higher connectivity in the left AF and lower connectivity in the right ATR were related to better memory for L2 words. Additionally, connectivity in the area of the right AF that exhibited age-related differences predicted word learning success. The finding that, across the two age groups, stronger connectivity is related to better memory for words lends further support to the hypothesis that the prolonged maturation of the prefrontal cortex, here in the form of structural connectivity, plays an important role in the development of memory.
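To make the analysis logic concrete, here is a minimal, hypothetical sketch of a connectivity × age-group model of the kind described above (group differences, a correlation with learning success, and their interaction). The variable names and simulated data are illustrative assumptions, not the study's data or pipeline.

```python
# Hypothetical sketch: does streamline density (connectivity) predict L2 word memory,
# does this differ between the Young and Teen groups, and is there an interaction?
# All names and data below are illustrative; they are not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50
df = pd.DataFrame({
    "connectivity": rng.normal(size=n),                  # streamline density in one tract
    "age_group": rng.choice(["Young", "Teen"], size=n),  # 8-10 vs. 14-16 years
    "memory": rng.normal(size=n),                        # L2 word memory score
})

# Main effects of connectivity and age group, plus their interaction term.
model = smf.ols("memory ~ connectivity * age_group", data=df).fit()
print(model.summary())
```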
Hintz, F., McQueen, J. M., & Meyer, A. S. (2024). Using psychometric network analysis to examine the components of spoken word recognition. Journal of Cognition, 7(1): 10. doi:10.5334/joc.340.
Abstract
Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.
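As a rough illustration of what a psychometric network is, the sketch below builds a partial-correlation network from simulated scores: each edge is the unique association between two measures after controlling for all others. The published analysis likely used regularized estimation (e.g., EBICglasso); the measure names and data here are assumptions.

```python
# Illustrative partial-correlation network (the core idea behind psychometric
# network analysis). Simulated scores and measure names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
measures = ["word_recognition", "processing_speed", "working_memory", "reasoning"]
scores = rng.normal(size=(281, len(measures)))  # 281 participants x 4 measures

# Partial correlations from the precision (inverse covariance) matrix:
# r_ij|rest = -P_ij / sqrt(P_ii * P_jj)
precision = np.linalg.inv(np.cov(scores, rowvar=False))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Each off-diagonal entry is a network edge: the unique variance shared by a pair
# of measures, controlling for the remaining measures.
print(np.round(partial_corr, 2))
```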
Hintz, F., Shkaravska, O., Dijkhuis, M., Van 't Hoff, V., Huijsmans, M., Van Dongen, R. C., Voeteé, L. A., Trilsbeek, P., McQueen, J. M., & Meyer, A. S. (2024). IDLaS-NL – A platform for running customized studies on individual differences in Dutch language skills via the internet. Behavior Research Methods, 56(3), 2422-2436. doi:10.3758/s13428-023-02156-8.
Abstract
We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions, and to determine their order. Moreover, for standardized administration the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV-file output via email. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators, and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.
Koning, M. E. E., Wyman, N. K., Menks, W. M., Ekerdt, C., Fernández, G., Kidd, E., Lemhöfer, K., McQueen, J. M., & Janzen, G. (2024). The relationship between brain structure and function during novel grammar learning across development. Cerebral Cortex, 34(12): bhae488. doi:10.1093/cercor/bhae488.
Abstract
In this study, we explored the relationship between developmental differences in gray matter structure and grammar learning ability in 159 Dutch-speaking individuals (8 to 25 yr). The data were collected as part of a recent large-scale functional MRI study (Menks WM, Ekerdt C, Lemhöfer K, Kidd E, Fernández G, McQueen JM, Janzen G. Developmental changes in brain activation during novel grammar learning in 8–25-year-olds. Dev Cogn Neurosci. 2024;66:101347. https://doi.org/10.1016/j.dcn.2024.101347) in which participants implicitly learned Icelandic morphosyntactic rules and performed a grammaticality judgment task in the scanner. Behaviorally, Menks et al. (2024) showed that grammaticality judgment task performance increased steadily from 8 to 15.4 yr, after which age had no further effect. We show in the current study that this age-related grammaticality judgment task performance was negatively related to cortical gray matter volume and cortical thickness in many clusters throughout the brain. Hippocampal volume was positively related to age-related grammaticality judgment task performance and L2 (English) vocabulary knowledge. Furthermore, we found that grammaticality judgment task performance, L2 grammar proficiency, and L2 vocabulary knowledge were positively related to gray matter maturation within parietal regions, overlapping with the functional MRI clusters that were reported previously in Menks et al. (2024) and which showed increased brain activation in relation to grammar learning. We propose that this overlap in functional and structural results indicates that brain maturation in parietal regions plays an important role in second language learning.
Menks, W. M., Ekerdt, C., Lemhöfer, K., Kidd, E., Fernández, G., McQueen, J. M., & Janzen, G. (2024). Developmental changes in brain activation during novel grammar learning in 8-25-year-olds. Developmental Cognitive Neuroscience, 66: 101347. doi:10.1016/j.dcn.2024.101347.
Abstract
While it is well established that grammar learning success varies with age, the cause of this developmental change is largely unknown. This study examined functional MRI activation across a broad developmental sample of 165 Dutch-speaking individuals (8-25 years) as they were implicitly learning a new grammatical system. This approach allowed us to assess the direct effects of age on grammar learning ability while exploring its neural correlates. In contrast to the alleged advantage of child language learners over adults, we found that adults outperformed children. Moreover, our behavioral data showed a sharp discontinuity in the relationship between age and grammar learning performance: there was a strong positive linear correlation between 8 and 15.4 years of age, after which age had no further effect. Neurally, our data indicate two important findings: (i) during grammar learning, adults and children activate similar brain regions, suggesting continuity in the neural networks that support initial grammar learning; and (ii) activation level is age-dependent, with children showing less activation than older participants. We suggest that these age-dependent processes may constrain developmental effects in grammar learning. The present study provides new insights into the neural basis of age-related differences in grammar learning in second language acquisition.
Mickan, A., Slesareva, E., McQueen, J. M., & Lemhöfer, K. (2024). New in, old out: Does learning a new language make you forget previously learned foreign languages? Quarterly Journal of Experimental Psychology, 77(3), 530-550. doi:10.1177/17470218231181380.
Abstract
Anecdotal evidence suggests that learning a new foreign language (FL) makes you forget previously learned FLs. To seek empirical evidence for this claim, we tested whether learning words in a previously unknown L3 hampers subsequent retrieval of their L2 translation equivalents. In two experiments, Dutch native speakers with knowledge of English (L2), but not Spanish (L3), first completed an English vocabulary test, based on which 46 participant-specific, known English words were chosen. Half of those were then learned in Spanish. Finally, participants’ memory for all 46 English words was probed again in a picture naming task. In Experiment 1, all tests took place within one session. In Experiment 2, we separated the English pre-test from Spanish learning by a day and manipulated the timing of the English post-test (immediately after learning vs. 1 day later). By separating the post-test from Spanish learning, we asked whether consolidation of the new Spanish words would increase their interference strength. We found significant main effects of interference in naming latencies and accuracy: Participants speeded up less and were less accurate at recalling words in English for which they had learned Spanish translations, compared with words for which they had not. Consolidation time did not significantly affect these interference effects. Thus, learning a new language indeed comes at the cost of subsequent retrieval ability in other FLs. Such interference effects set in immediately after learning and do not need time to emerge, even when the other FL has been known for a long time.
Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.
Abstract
Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers. -
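A minimal sketch of the per-talker cue-weighting analysis described above: fit a Linear Discriminant Analysis for one talker, predicting stress position from acoustic cues, and read off the coefficients as that talker's cue weights. The cue names and simulated measurements are assumptions for illustration, not the study's data.

```python
# Hypothetical per-talker LDA: which acoustic cues separate initial- from
# final-stress productions for this talker? Simulated data, illustrative only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

def cue_weights_for_talker(cues, stress):
    """cues: (n_tokens, n_cues) z-scored measurements of the first syllable
    (e.g., mean F0, intensity, duration); stress: 0 = initial, 1 = final."""
    lda = LinearDiscriminantAnalysis().fit(cues, stress)
    return lda.coef_[0]  # one coefficient per cue: its weight for this talker

# One hypothetical talker: 40 tokens, 3 cues.
cues = rng.normal(size=(40, 3))
stress = np.repeat([0, 1], 20)  # 20 initial-stress and 20 final-stress tokens
print(cue_weights_for_talker(cues, stress))
```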
Severijnen, G. G. A., Gärtner, V. M., Walther, R. F. E., & McQueen, J. M. (2024). Talker-specific perceptual learning about lexical stress: Stability over time. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 657-661). doi:10.21437/SpeechProsody.2024-133.
Abstract
Talkers vary in how they speak, resulting in acoustic variability in segments and prosody. Previous studies showed that listeners deal with segmental variability through perceptual learning and that these learning effects are stable over time. The present study examined whether this is also true for lexical stress variability. Listeners heard Dutch minimal pairs (e.g., VOORnaam vs. voorNAAM, ‘first name’ vs. ‘respectable’) spoken by two talkers. Half of the participants heard Talker 1 using only F0 to signal lexical stress and Talker 2 using only intensity. The other half heard the reverse. After a learning phase, participants were tested on words spoken by these talkers with conflicting stress cues (‘mixed items’; e.g., Talker 1 saying voornaam with F0 signaling initial stress and intensity signaling final stress). We found that, despite the conflicting cues, listeners perceived these items following what they had learned. For example, participants hearing the example mixed item described above who had learned that Talker 1 used F0 perceived initial stress (VOORnaam) but those who had learned that Talker 1 used intensity perceived final stress (voorNAAM). Crucially, this result was still present in a delayed test phase, showing that talker-specific learning about lexical stress is stable over time. -
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).
Abstract
The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
Wagner, M. A., Broersma, M., McQueen, J. M., Van Hout, R., & Lemhöfer, K. (2024). The case for a quantitative approach to the study of nonnative accent features. Language and Speech. Advance online publication. doi:10.1177/00238309241256653.
Abstract
Research with nonnative speech spans many different linguistic branches and topics. Most studies include one or a few well-known features of a particular accent. However, due to a lack of empirical studies, little is known about how common these features are among nonnative speakers or how uncommon they are among native speakers. Moreover, it remains to be seen whether findings from such studies generalize to lesser-known features. Here, we demonstrate a quantitative approach to study nonnative accent features using Dutch-accented English as an example. By analyzing the phonetic distances between transcriptions of speech samples, this approach can identify the features that best distinguish nonnative from native speech. In addition, we describe a method to test hypotheses about accent features by checking whether the prevalence of the features overall varies between native and nonnative speakers. Furthermore, we include English speakers from the United States and United Kingdom and native Dutch speakers from Belgium and The Netherlands to address the issue of regional accent variability in both the native and target language. We discuss the results concerning three observed features. Overall, the results provide empirical support for some well-known features of Dutch-accented English, but suggest that others may be infrequent among nonnatives or in fact frequent among natives. In addition, the findings reveal potentially new accent features, and factors that may modulate the expression of known features. Our study demonstrates a fruitful approach to study nonnative accent features that has the potential to expand our understanding of the phenomenon of accent. -
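As a rough illustration of transcription-based distance, the sketch below computes a plain Levenshtein (edit) distance between two phone-level transcriptions. The study's actual alignment and weighting are likely more sophisticated; the example transcriptions are invented.

```python
# Minimal edit-distance sketch between phone-level transcriptions.
# The example "native" vs. "Dutch-accented" transcriptions are made up.
def levenshtein(a, b):
    """Edit distance between two sequences of phone symbols."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, start=1):
        curr = [i]
        for j, pb in enumerate(b, start=1):
            cost = 0 if pa == pb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

native = ["ð", "ə", "k", "æ", "t"]      # "the cat", native-like phones
nonnative = ["d", "ə", "k", "ɛ", "t"]   # hypothetical Dutch-accented rendering
print(levenshtein(native, nonnative))   # -> 2
```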
Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.
Abstract
Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191– 243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, “morning,” was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake—possibly arising from nyakusake—than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible. -
McQueen, J. M. (2009). Al sprekende leert men [Inaugural lecture]. Arnhem: Drukkerij Roos en Roos.
Abstract
Lecture delivered on the occasion of accepting the position of Professor of Learning and Plasticity (Leren en plasticiteit) at the Faculty of Social Sciences of Radboud University Nijmegen, on Thursday 1 October 2009.
McQueen, J. M., Jesse, A., & Norris, D. (2009). No lexical–prelexical feedback during speech perception or: Is it time to stop playing those Christmas tapes? Journal of Memory and Language, 61, 1-18. doi:10.1016/j.jml.2009.03.002.
Abstract
The strongest support for feedback in speech perception comes from evidence of apparent lexical influence on prelexical fricative-stop compensation for coarticulation. Lexical knowledge (e.g., that the ambiguous final fricative of Christma? should be [s]) apparently influences perception of following stops. We argue that all such previous demonstrations can be explained without invoking lexical feedback. In particular, we show that one demonstration [Magnuson, J. S., McMurray, B., Tanenhaus, M. K., & Aslin, R. N. (2003). Lexical effects on compensation for coarticulation: The ghost of Christmash past. Cognitive Science, 27, 285–298] involved experimentally-induced biases (from 16 practice trials) rather than feedback. We found that the direction of the compensation effect depended on whether practice stimuli were words or nonwords. When both were used, there was no lexically-mediated compensation. Across experiments, however, there were lexical effects on fricative identification. This dissociation (lexical involvement in the fricative decisions but not in the following stop decisions made on the same trials) challenges interactive models in which feedback should cause both effects. We conclude that the prelexical level is sensitive to experimentally-induced phoneme-sequence biases, but that there is no feedback during speech perception. -
Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles harm foreign speech perception. PLoS ONE, 4(11), e7785. doi:10.1371/journal.pone.0007785.
Abstract
Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken. -
Mitterer, H., & McQueen, J. M. (2009). Processing reduced word-forms in speech perception using probabilistic knowledge about speech production. Journal of Experimental Psychology: Human Perception and Performance, 35(1), 244-263. doi:10.1037/a0012730.
Abstract
Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with a geometrical shape. Minimal pairs (e.g., tas/tast) were either both above (boven) or both next to (naast) different shapes. Spoken instructions (e.g., “Klik op het woordje tas boven de ster,” [Click on the word bag above the star]) thus became unambiguous only on their final words. Prior to disambiguation, listeners' fixations were drawn to /t/-final words more when boven than when naast followed the ambiguous sequences. This behavior reflects Dutch speech-production data: /t/ is reduced more before /b/ than before /n/. We thus argue that probabilistic knowledge about the effect of following context in speech production is used prelexically in perception to help resolve lexical ambiguities caused by continuous-speech processes. -
Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302-315. doi:10.3758/MC.37.3.302.
Abstract
Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We will present an analysis of false alarms to the nonsense-sign combinations—that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition. -
Cho, T., & McQueen, J. M. (2008). Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics, 36, 239-249. doi:10.1016/j.wocn.2007.06.001.
Abstract
This study tests whether potential differences in the perceptual robustness of speech sounds influence continuous-speech processes. Two phoneme-monitoring experiments examined place assimilation in Korean. In Experiment 1, Koreans monitored for targets which were either labials (/p,m/) or alveolars (/t,n/), and which were either unassimilated or assimilated to a following /k/ in two-word utterances. Listeners detected unaltered (unassimilated) labials faster and more accurately than assimilated labials; there was no such advantage for unaltered alveolars. In Experiment 2, labial–velar differences were tested using conditions in which /k/ and /p/ were illegally assimilated to a following /t/. Unassimilated sounds were detected faster than illegally assimilated sounds, but this difference tended to be larger for /k/ than for /p/. These place-dependent asymmetries suggest that differences in the perceptual robustness of segments play a role in shaping phonological patterns. -
Cutler, A., McQueen, J. M., Butterfield, S., & Norris, D. (2008). Prelexically-driven perceptual retuning of phoneme boundaries. In Proceedings of Interspeech 2008 (pp. 2056-2056).
Abstract
Listeners heard an ambiguous /f-s/ in nonword contexts where only one of /f/ or /s/ was legal (e.g., frul/*srul or *fnud/snud). In later categorisation of a phonetic continuum from /f/ to /s/, their category boundaries had shifted; hearing -rul led to expanded /f/ categories, -nud expanded /s/. Thus phonotactic sequence information alone induces perceptual retuning of phoneme category boundaries; lexical access is not required. -
Norris, D., & McQueen, J. M. (2008). Shortlist B: A Bayesian model of continuous speech recognition. Psychological Review, 115(2), 357-395. doi:10.1037/0033-295X.115.2.357.
Abstract
A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
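A highly simplified sketch of the Bayesian computation at the heart of the model: word posteriors are obtained from phoneme-probability input as P(word | input) ∝ P(input | word) × P(word). The toy lexicon, probabilities, and position-by-position independence are assumptions; the full model additionally handles segmentation and multiple time slices per segment.

```python
# Toy Bayesian word recognition over phoneme-probability input (not the full model).
lexicon = {  # hypothetical words with prior (frequency-based) probabilities
    "cat": (["k", "æ", "t"], 0.6),
    "cap": (["k", "æ", "p"], 0.4),
}

# Hypothetical input: one phoneme probability distribution per position.
input_probs = [
    {"k": 0.9, "g": 0.1},
    {"æ": 0.8, "ɛ": 0.2},
    {"t": 0.55, "p": 0.45},
]

def word_posteriors(lexicon, input_probs):
    unnormalized = {}
    for word, (phones, prior) in lexicon.items():
        likelihood = 1.0
        for position, phone in enumerate(phones):
            likelihood *= input_probs[position].get(phone, 0.0)
        unnormalized[word] = likelihood * prior  # P(input | word) * P(word)
    total = sum(unnormalized.values())
    return {word: p / total for word, p in unnormalized.items()}

print(word_posteriors(lexicon, input_probs))  # roughly {'cat': 0.65, 'cap': 0.35}
```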
Reinisch, E., Jesse, A., & McQueen, J. M. (2008). The strength of stress-related lexical competition depends on the presence of first-syllable stress. In Proceedings of Interspeech 2008 (pp. 1954-1954).
Abstract
Dutch listeners' looks to printed words were tracked while they listened to instructions to click with their mouse on one of them. When presented with targets from word pairs where the first two syllables were segmentally identical but differed in stress location, listeners used stress information to recognize the target before segmental information disambiguated the words. Furthermore, the amount of lexical competition was influenced by the presence or absence of word-initial stress. -
Reinisch, E., Jesse, A., & McQueen, J. M. (2008). Lexical stress information modulates the time-course of spoken-word recognition. In Proceedings of Acoustics' 08 (pp. 3183-3188).
Abstract
Segmental as well as suprasegmental information is used by Dutch listeners to recognize words. The time-course of the effect of suprasegmental stress information on spoken-word recognition was investigated in a previous study, in which we tracked Dutch listeners' looks to arrays of four printed words as they listened to spoken sentences. Each target was displayed along with a competitor that did not differ segmentally in its first two syllables but differed in stress placement (e.g., 'CENtimeter' and 'sentiMENT'). The listeners' eye-movements showed that stress information is used to recognize the target before distinct segmental information is available. Here, we examine the role of durational information in this effect. Two experiments showed that initial-syllable duration, as a cue to lexical stress, is not interpreted dependent on the speaking rate of the preceding carrier sentence. This still held when other stress cues like pitch and amplitude were removed. Rather, the speaking rate of the preceding carrier affected the speed of word recognition globally, even though the rate of the target itself was not altered. Stress information modulated lexical competition, but did so independently of the rate of the preceding carrier, even if duration was the only stress cue present.