Adank, P., & McQueen, J. M. (2007). The effect of an unfamiliar regional accent on spoken-word comprehension. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1925-1928). Dudweiler: Pirrot.
Abstract
This study aimed first to determine whether there is a delay associated with processing words in an unfamiliar regional accent compared to words in a familiar regional accent, and second to establish whether short-term exposure to an unfamiliar accent affects the speed and accuracy of comprehension of words spoken in that accent. Listeners performed an animacy decision task for words spoken in their own and in an unfamiliar accent. Next, they were exposed to approximately 20 minutes of speech in one of these two accents. After exposure, they repeated the animacy decision task. Results showed a considerable delay in word processing for the unfamiliar accent, but no effect of short-term exposure.
Andics, A., McQueen, J. M., & Van Turennout, M. (2007). Phonetic content influences voice discriminability. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1829-1832). Dudweiler: Pirrot.
Abstract
We present results from an experiment which shows that voice perception is influenced by the phonetic content of speech. Dutch listeners were presented with thirteen speakers pronouncing CVC words with systematically varying segmental content, and they had to discriminate the speakers’ voices. Results show that certain segments help listeners discriminate voices more than other segments do. Voice information can be extracted from every segmental position of a monosyllabic word and is processed rapidly. We also show that although relative discriminability within a closed set of voices appears to be a stable property of a voice, it is also influenced by segmental cues – that is, perceived uniqueness of a voice depends on what that voice says.
Cho, T., McQueen, J. M., & Cox, E. A. (2007). Prosodically driven phonetic detail in speech processing: The case of domain-initial strengthening in English. Journal of Phonetics, 35(2), 210-243. doi:10.1016/j.wocn.2006.03.003.
Abstract
We explore the role of the acoustic consequences of domain-initial strengthening in spoken-word recognition. In two cross-modal identity-priming experiments, listeners heard sentences and made lexical decisions to visual targets, presented at the onset of the second word in two-word sequences containing lexical ambiguities (e.g., bus tickets, with the competitor bust). These sequences contained Intonational Phrase (IP) or Prosodic Word (Wd) boundaries, and the second word's initial Consonant and Vowel (CV, e.g., [tI]) was spliced from another token of the sequence in IP- or Wd-initial position. Acoustic analyses showed that IP-initial consonants were articulated more strongly than Wd-initial consonants. In Experiment 1, related targets were post-boundary words (e.g., tickets). No strengthening effect was observed (i.e., identity priming effects did not vary across splicing conditions). In Experiment 2, related targets were pre-boundary words (e.g., bus). There was a strengthening effect (stronger priming when the post-boundary CVs were spliced from IP-initial than from Wd-initial position), but only in Wd-boundary contexts. These were the conditions where phonetic detail associated with domain-initial strengthening could assist listeners most in lexical disambiguation. We discuss how speakers may strengthen domain-initial segments during production and how listeners may use the resulting acoustic correlates of prosodic strengthening during word recognition.
Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic and shape information in language-mediated visual search. Journal of Memory and Language, 57(4), 460-482. doi:10.1016/j.jml.2007.02.001.
Abstract
Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with beker, 'beaker', for example, the display contained phonological (a beaver, bever), shape (a bobbin, klos), and semantic (a fork, vork) competitors. When the display appeared at sentence onset, fixations to phonological competitors preceded fixations to shape and semantic competitors. When display onset was 200 ms before (e.g.) beker, fixations were directed to shape and then semantic competitors, but not phonological competitors. In Experiments 3 and 4, displays contained the printed names of the previously-pictured entities; only phonological competitors were fixated preferentially. These findings suggest that retrieval of phonological, shape and semantic knowledge in the spoken-word and picture-recognition systems is cascaded, and that visual attention shifts are co-determined by the time-course of retrieval of all three knowledge types and by the nature of the information in the visual environment.
Jesse, A., & McQueen, J. M. (2007). Prelexical adjustments to speaker idiosyncracies: Are they position-specific? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1597-1600). Adelaide: Causal Productions.
Abstract
Listeners use lexical knowledge to adjust their prelexical representations of speech sounds in response to the idiosyncratic pronunciations of particular speakers. We used an exposure-test paradigm to investigate whether this type of perceptual learning transfers across syllabic positions. No significant learning effect was found in Experiment 1, where exposure sounds were onsets and test sounds were codas. Experiments 2-4 showed that there was no learning even when both exposure and test sounds were onsets. But a trend was found when exposure sounds were codas and test sounds were onsets (Experiment 5). This trend was smaller than the robust effect previously found for the coda-to-coda case. These findings suggest that knowledge about idiosyncratic pronunciations may be position specific: Knowledge about how a speaker produces sounds in one position, if it can be acquired at all, influences perception of sounds in that position more strongly than of sounds in another position.
Jesse, A., McQueen, J. M., & Page, M. (2007). The locus of talker-specific effects in spoken-word recognition. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1921-1924). Dudweiler: Pirrot.
Abstract
Words repeated in the same voice are better recognized than words repeated in a different voice. Such findings have been taken as evidence for the storage of talker-specific lexical episodes. But results on perceptual learning suggest that talker-specific adjustments concern sublexical representations. This study thus investigates whether voice-specific repetition effects in auditory lexical decision are lexical or sublexical. The same critical set of items in Block 2 was, depending on materials in Block 1, either same-voice or different-voice word repetitions, new words comprising re-orderings of phonemes used in the same voice in Block 1, or new words with previously unused phonemes. Results show a benefit for words repeated by the same talker, and a smaller benefit for words consisting of phonemes repeated by the same talker. Talker-specific information thus appears to influence word recognition at multiple representational levels.
Jesse, A., & McQueen, J. M. (2007). Visual lexical stress information in audiovisual spoken-word recognition. In J. Vroomen, M. Swerts, & E. Krahmer (Eds.), Proceedings of the International Conference on Auditory-Visual Speech Processing 2007 (pp. 162-166). Tilburg: University of Tilburg.
Abstract
Listeners use suprasegmental auditory lexical stress information to resolve the competition words engage in during spoken-word recognition. The present study investigated whether (a) visual speech provides lexical stress information, and, more importantly, (b) whether this visual lexical stress information is used to resolve lexical competition. Dutch word pairs that differ in the lexical stress realization of their first two syllables, but not segmentally (e.g., 'OCtopus' and 'okTOber'; capitals marking primary stress) served as auditory-only, visual-only, and audiovisual speech primes. These primes either matched (e.g., 'OCto-'), mismatched (e.g., 'okTO-'), or were unrelated to (e.g., 'maCHI-') a subsequent printed target (octopus), to which participants had to make a lexical decision. To the degree that visual speech contains lexical stress information, lexical decisions to printed targets should be modulated through the addition of visual speech. Results show, however, no evidence for a role of visual lexical stress information in audiovisual spoken-word recognition.
McQueen, J. M., & Viebahn, M. C. (2007). Tracking recognition of spoken words by tracking looks to printed words. Quarterly Journal of Experimental Psychology, 60(5), 661-671. doi:10.1080/17470210601183890.
Abstract
Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., "Klik op het woord buffel": Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.
McQueen, J. M. (2007). Eight questions about spoken-word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 37-53). Oxford: Oxford University Press.
Abstract
This chapter is a review of the literature in experimental psycholinguistics on spoken word recognition. It is organized around eight questions. 1. Why are psycholinguists interested in spoken word recognition? 2. What information in the speech signal is used in word recognition? 3. Where are the words in the continuous speech stream? 4. Which words did the speaker intend? 5. When, as the speech signal unfolds over time, are the phonological forms of words recognized? 6. How are words recognized? 7. Whither spoken word recognition? 8. Who are the researchers in the field?
Mitterer, H., & McQueen, J. M. (2007). Tracking perception of pronunciation variation by tracking looks to printed words: The case of word-final /t/. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1929-1932). Dudweiler: Pirrot.
Abstract
We investigated perception of words with reduced word-final /t/ using an adapted eyetracking paradigm. Dutch listeners followed spoken instructions to click on printed words which were accompanied on a computer screen by simple shapes (e.g., a circle). Targets were either above or next to their shapes, and the shapes uniquely identified the targets when the spoken forms were ambiguous between words with or without final /t/ (e.g., bult, bump, vs. bul, diploma). Analysis of listeners’ eye-movements revealed, in contrast to earlier results, that listeners use the following segmental context when compensating for /t/-reduction. Reflecting that /t/-reduction is more likely to occur before bilabials, listeners were more likely to look at the /t/-final words if the next word’s first segment was bilabial. This result supports models of speech perception in which prelexical phonological processes use segmental context to modulate word recognition.
Stevens, M. A., McQueen, J. M., & Hartsuiker, R. J. (2007). No lexically-driven perceptual adjustments of the [x]-[h] boundary. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1897-1900). Dudweiler: Pirrot.
Abstract
Listeners can make perceptual adjustments to phoneme categories in response to a talker who consistently produces a specific phoneme ambiguously. We investigate here whether this type of perceptual learning is also used to adapt to regional accent differences. Listeners were exposed to words produced by a Flemish talker whose realization of [x] or [h] was ambiguous (producing [x] like [h] is a property of the West-Flanders regional accent). Before and after exposure they categorized a [x]-[h] continuum. For both Dutch and Flemish listeners there was no shift of the categorization boundary after exposure to ambiguous sounds in [x]- or [h]-biasing contexts. The absence of a lexically-driven learning effect for this contrast may be because [h] is strongly influenced by coarticulation. As [h] is not stable across contexts, it may be futile to adapt its representation when new realizations are heard.