Bruggeman, L., & Cutler, A. (2019). The dynamics of lexical activation and competition in bilinguals’ first versus second language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1342-1346). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract
Speech input causes listeners to activate multiple candidate words which then compete with one another. These include onset competitors, which share a beginning (bumper, butter), but also, counterintuitively, rhyme competitors, sharing an ending (bumper, jumper). In L1, competition is typically stronger for onset than for rhyme. In L2, onset competition has been attested but rhyme competition has heretofore remained largely unexamined. We assessed L1 (Dutch) and L2 (English) word recognition by the same late-bilingual individuals. In each language, eye gaze was recorded as listeners heard sentences and viewed sets of drawings: three unrelated, one depicting an onset or rhyme competitor of a word in the input. Activation patterns revealed substantial onset competition but no significant rhyme competition in either L1 or L2. Rhyme competition may thus be a “luxury” feature of maximally efficient listening, to be abandoned when resources are scarcer, as in listening by late bilinguals, in either language. -
Cutler, A., Burchfield, A., & Antoniou, M. (2019). A criterial interlocutor tally for successful talker adaptation? In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1485-1489). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract
Part of the remarkable efficiency of listening is accommodation to unfamiliar talkers’ specific pronunciations by retuning of phonemic intercategory boundaries. Such retuning occurs in a second language (L2) as well as in the first language (L1); however, recent research with émigrés revealed successful adaptation in the environmental L2 but, unprecedentedly, not in L1, despite continuing L1 use. A possible explanation involving relative exposure to novel talkers is here tested in heritage language users with Mandarin as family L1 and English as environmental language. In English, exposure to an ambiguous sound in disambiguating word contexts prompted the expected adjustment of phonemic boundaries in subsequent categorisation. However, no adjustment occurred in Mandarin, again despite regular use. Participants reported highly asymmetric interlocutor counts in the two languages. We conclude that successful retuning ability requires regular exposure to novel talkers in the language in question, a criterion not met for the émigrés’ or for these heritage users’ L1. -
Joo, H., Jang, J., Kim, S., Cho, T., & Cutler, A. (2019). Prosodic structural effects on coarticulatory vowel nasalization in Australian English in comparison to American English. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 835-839). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract
This study investigates effects of prosodic factors (prominence, boundary) on coarticulatory V-nasalization in Australian English (AusE) in CVN and NVC, in comparison to those in American English (AmE). As in AmE, prominence was found to lengthen N but to reduce V-nasalization, enhancing N’s nasality and V’s orality, respectively (paradigmatic contrast enhancement). But the prominence effect in CVN was more robust than that in AmE. Again similar to findings in AmE, boundary induced a reduction of N-duration and V-nasalization phrase-initially (syntagmatic contrast enhancement), and increased the nasality of both C and V phrase-finally. But AusE showed some differences in terms of the magnitude of V-nasalization and N-duration. The results suggest that linguistic contrast enhancements underlie prosodic-structure modulation of coarticulatory V-nasalization in comparable ways across dialects, while the fine phonetic detail indicates that the phonetics-prosody interplay is internalized in the individual dialect’s phonetic grammar. -
Nazzi, T., & Cutler, A. (2019). How consonants and vowels shape spoken-language recognition. Annual Review of Linguistics, 5, 25-47. doi:10.1146/annurev-linguistics-011718-011919.
Abstract
All languages instantiate a consonant/vowel contrast. This contrast has processing consequences at different levels of spoken-language recognition throughout the lifespan. In adulthood, lexical processing is more strongly associated with consonant than with vowel processing; this has been demonstrated across 13 languages from seven language families and in a variety of auditory lexical-level tasks (deciding whether a spoken input is a word, spotting a real word embedded in a minimal context, reconstructing a word minimally altered into a pseudoword, learning new words or the “words” of a made-up language), as well as in written-word tasks involving phonological processing. In infancy, a consonant advantage in word learning and recognition is found to emerge during development in some languages, though possibly not in others, revealing that the stronger lexicon–consonant association found in adulthood is learned. Current research is evaluating the relative contribution of the early acquisition of the acoustic/phonetic and lexical properties of the native language in the emergence of this association. -
Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.
Abstract
Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption. -
Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.
Abstract
In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed however that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2. -
Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).
Abstract
Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: a combination of duration cues, mean and maximum F0, F0 range, and longer pre-target interval before the focused word onset; only mean F0 cues; only pre-target interval; and only duration cues. Results revealed that listeners can entrain in almost every condition except where duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing. -
Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).
Abstract
Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result. -
Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.
Abstract
Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions. -
Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.
Abstract
Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing. -
Kidd, E., Junge, C., Spokes, T., Morrison, L., & Cutler, A. (2018). Individual differences in infant speech segmentation: Achieving the lexical shift. Infancy, 23(6), 770-794. doi:10.1111/infa.12256.
Abstract
We report a large‐scale electrophysiological study of infant speech segmentation, in which over 100 English‐acquiring 9‐month‐olds were exposed to unfamiliar bisyllabic words embedded in sentences (e.g., He saw a wild eagle up there), after which their brain responses to either the just‐familiarized word (eagle) or a control word (coral) were recorded. When initial exposure occurs in continuous speech, as here, past studies have reported that even somewhat older infants do not reliably recognize target words, but that successful segmentation varies across children. Here, we both confirm and further uncover the nature of this variation. The segmentation response systematically varied across individuals and was related to their vocabulary development. About one‐third of the group showed a left‐frontally located relative negativity in response to familiar versus control targets, which has previously been described as a mature response. Another third showed a similarly located positive‐going reaction (a previously described immature response), and the remaining third formed an intermediate grouping that was primarily characterized by an initial response delay. A fine‐grained group‐level analysis suggested that a developmental shift to a lexical mode of processing occurs toward the end of the first year, with variation across individual infants in the exact timing of this shift.
Additional information: supporting information -
Norris, D., McQueen, J. M., & Cutler, A. (2018). Commentary on “Interaction in spoken word recognition models". Frontiers in Psychology, 9: 1568. doi:10.3389/fpsyg.2018.01568.
-
Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Language and Speech, 45(3), 207-228.
Abstract
Four cross-modal priming experiments and two forced-choice identification experiments investigated the use of suprasegmental cues to stress in the recognition of spoken English words, by native (English-speaking) and non-native (Dutch) listeners. Previous results had indicated that suprasegmental information was exploited in lexical access by Dutch but not by English listeners. For both listener groups, recognition of visually presented target words was faster, in comparison to a control condition, after stress-matching spoken primes, either monosyllabic (mus- from MUsic/muSEum) or bisyllabic (admi- from ADmiral/admiRAtion). For native listeners, the effect of stress-mismatching bisyllabic primes was not different from that of control primes, but mismatching monosyllabic primes produced partial facilitation. For non-native listeners, both bisyllabic and monosyllabic stress-mismatching primes produced partial facilitation. Native English listeners thus can exploit suprasegmental information in spoken-word recognition, but information from two syllables is used more effectively than information from one syllable. Dutch listeners are less proficient at using suprasegmental information in English than in their native language, but, as in their native language, use mono- and bisyllabic information to an equal extent. In forced-choice identification, Dutch listeners outperformed native listeners at correctly assigning a monosyllabic fragment (e.g., mus-) to one of two words differing in stress. -
Cutler, A. (2002). Phonological processing: Comments on Pierrehumbert, Moates et al., Kubozono, Peperkamp & Dupoux, and Bradlow. In C. Gussenhoven, & N. Warner (Eds.), Papers in Laboratory Phonology VII (pp. 275-296). Berlin: Mouton de Gruyter. -
Cutler, A., & Otake, T. (2002). Rhythmic categories in spoken-word recognition. Journal of Memory and Language, 46(2), 296-322. doi:10.1006/jmla.2001.2814.
Abstract
Rhythmic categories such as morae in Japanese or stress units in English play a role in the perception of spoken language. We examined this role in Japanese, since recent evidence suggests that morae may intervene as structural units in word recognition. First, we found that traditional puns more often substituted part of a mora than a whole mora. Second, when listeners reconstructed distorted words, e.g. panorama from panozema, responses were faster and more accurate when only a phoneme was distorted (panozama, panorema) than when a whole CV mora was distorted (panozema). Third, lexical decisions on the same nonwords were better predicted by duration and number of phonemes from nonword uniqueness point to word end than by number of morae. Our results indicate no role for morae in early spoken-word processing; we propose that rhythmic categories constrain not initial lexical activation but subsequent processes of speech segmentation and selection among word candidates.
Additional information: https://www.mpi.nl/world/persons/private/anne/wrdrec&goroawase.html -
Cutler, A., & Norris, D. (2002). The role of strong syllables in segmentation for lexical access. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 157-177). London: Routledge. -
Cutler, A., Mehler, J., Norris, D., & Segui, J. (2002). The syllable's differing role in the segmentation of French and English. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 115-135). London: Routledge.
Abstract
Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure. -
Cutler, A., McQueen, J. M., Jansonius, M., & Bayerl, S. (2002). The lexical statistics of competitor activation in spoken-word recognition. In C. Bow (Ed.), Proceedings of the 9th Australian International Conference on Speech Science and Technology (pp. 40-45). Canberra: Australian Speech Science and Technology Association (ASSTA).
Abstract
The Possible Word Constraint is a proposed mechanism whereby listeners avoid recognising words spuriously embedded in other words. It applies to words leaving a vowelless residue between their edge and the nearest known word or syllable boundary. The present study tests the usefulness of this constraint via lexical statistics of both English and Dutch. The analyses demonstrate that the constraint removes a clear majority of embedded words in speech, and thus can contribute significantly to the efficiency of human speech recognition. -
Cutler, A., Demuth, K., & McQueen, J. M. (2002). Universality versus language-specificity in listening to running speech. Psychological Science, 13(3), 258-262. doi:10.1111/1467-9280.00447.
Abstract
Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded word candidates is not sensitive to language-specific word structure, but is universal.
Additional information: https://www.mpi.nl/world/persons/private/anne/target.html -
Cutler, A. (2002). Lexical access. In L. Nadel (Ed.), Encyclopedia of cognitive science (pp. 858-864). London: Nature Publishing Group. -
Cutler, A., McQueen, J. M., Norris, D., & Somejuan, A. (2002). Le rôle de la syllabe [The role of the syllable]. In E. Dupoux (Ed.), Les langages du cerveau: Textes en l’honneur de Jacques Mehler (pp. 185-197). Paris: Odile Jacob. -
Cutler, A. (2002). Native listeners. European Review, 10(1), 27-41. doi:10.1017/S1062798702000030.
Abstract
Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the native language, and they can locate familiar word-forms in novel continuous-speech contexts. The skills acquired at this early stage form a necessary part of adult listening. However, the same native listening skills also underlie problems in listening to a late-acquired non-native language, accounting for why in such a case listening (an innate ability) is sometimes paradoxically more difficult than, for instance, reading (a learned ability). -
Kearns, R. K., Norris, D., & Cutler, A. (2002). Syllable processing in English. In Proceedings of the 7th International Conference on Spoken Language Processing [ICSLP 2002] (pp. 1657-1660).
Abstract
We describe a reaction time study in which listeners detected word or nonword syllable targets (e.g. zoo, trel) in sequences consisting of the target plus a consonant or syllable residue (trelsh, trelshek). The pattern of responses differed from an earlier word-spotting study with the same material, in which words were always harder to find if only a consonant residue remained. The earlier results should thus not be viewed in terms of syllabic parsing, but in terms of a universal role for syllables in speech perception; words which are accidentally present in spoken input (e.g. sell in self) can be rejected when they leave a residue of the input which could not itself be a word. -
Kuijpers, C., Van Donselaar, W., & Cutler, A. (2002). Perceptual effects of assimilation-induced violation of final devoicing in Dutch. In J. H. L. Hansen, & B. Pellom (Eds.), The 7th International Conference on Spoken Language Processing (pp. 1661-1664). Denver: ISCA.
Abstract
Voice assimilation in Dutch is an optional phonological rule which changes the surface forms of words and in doing so may violate the otherwise obligatory phonological rule of syllable-final devoicing. We report two experiments examining the influence of voice assimilation on phoneme processing, in lexical compound words and in noun-verb phrases. Processing was not impaired in appropriate assimilation contexts across morpheme boundaries, but was impaired when devoicing was violated (a) in an inappropriate (non-assimilatory) context, or (b) across a syntactic boundary. -
Norris, D., McQueen, J. M., & Cutler, A. (2002). Bias effects in facilitatory phonological priming. Memory & Cognition, 30(3), 399-411.
Abstract
In four experiments, we examined the facilitation that occurs when spoken-word targets rhyme with preceding spoken primes. In Experiment 1, listeners’ lexical decisions were faster to words following rhyming words (e.g., ramp–LAMP) than to words following unrelated primes (e.g., pink–LAMP). No facilitation was observed for nonword targets. Targets that almost rhymed with their primes (foils; e.g., bulk–SULSH) were included in Experiment 2; facilitation for rhyming targets was severely attenuated. Experiments 3 and 4 were single-word shadowing variants of the earlier experiments. There was facilitation for both rhyming words and nonwords; the presence of foils had no significant influence on the priming effect. A major component of the facilitation in lexical decision appears to be strategic: Listeners are biased to say “yes” to targets that rhyme with their primes, unless foils discourage this strategy. The nonstrategic component of phonological facilitation may reflect speech perception processes that operate prior to lexical access. -
Spinelli, E., Cutler, A., & McQueen, J. M. (2002). Resolution of liaison for lexical access in French. Revue Française de Linguistique Appliquée, 7, 83-96.
Abstract
Spoken word recognition involves automatic activation of lexical candidates compatible with the perceived input. In running speech, words abut one another without intervening gaps, and syllable boundaries can mismatch with word boundaries. For instance, liaison in ’petit agneau’ creates a syllable beginning with a consonant although ’agneau’ begins with a vowel. In two cross-modal priming experiments we investigate how French listeners recognise words in liaison environments. The results suggest that the resolution of liaison in part depends on acoustic cues which distinguish liaison from non-liaison consonants, and in part on the availability of lexical support for a liaison interpretation.