Acheson, D. J., Ganushchak, L. Y., Christoffels, I. K., & Hagoort, P. (2012). Conflict monitoring in speech production: Physiological evidence from bilingual picture naming. Brain and Language, 123, 131-136. doi:10.1016/j.bandl.2012.08.008.
Abstract
Self-monitoring in production is critical to correct performance, and recent accounts suggest that such monitoring may occur via the detection of response conflict. The error-related negativity (ERN) is a response-locked event-related potential (ERP) that is sensitive to response conflict. The present study examines whether response conflict is detected in production by exploring a situation where multiple outputs are activated: the bilingual naming of form-related equivalents (i.e. cognates). ERPs were recorded while German-Dutch bilinguals named pictures in their first and second languages. Although cognates were named faster than non-cognates, response conflict was evident in the form of a larger ERN-like response for cognates and adaptation effects on naming, as the magnitude of cognate facilitation was smaller following the naming of cognates. Given that signals of response conflict are present during correct naming, the present results suggest that such conflict may serve as a reliable signal for monitoring in speech production. -
Benders, T., Escudero, P., & Sjerps, M. J. (2012). The interrelation between acoustic context effects and available response categories in speech sound categorization. Journal of the Acoustical Society of America, 131, 3079-3087. doi:10.1121/1.3688512.
Abstract
In an investigation of contextual influences on sound categorization, 64 Peruvian Spanish listeners categorized vowels on an /i/ to /e/ continuum. First, to measure the influence of the stimulus range (broad acoustic context) and the preceding stimuli (local acoustic context), listeners were presented with different subsets of the Spanish /i/-/e/ continuum in separate blocks. Second, the influence of the number of response categories was measured by presenting half of the participants with /i/ and /e/ as responses, and the other half with /i/, /e/, /a/, /o/, and /u/. The results showed that the perceptual category boundary between /i/ and /e/ shifted depending on the stimulus range and that the formant values of locally preceding items had a contrastive influence. Categorization was less susceptible to broad and local acoustic context effects, however, when listeners were presented with five rather than two response options. Vowel categorization depends not only on the acoustic properties of the target stimulus, but also on its broad and local acoustic context. The influence of such context is in turn affected by the number of internal referents that are available to the listener in a task. -
Brouwer, S., Mitterer, H., & Huettig, F. (2012). Speech reductions change the dynamics of competition during spoken word recognition. Language and Cognitive Processes, 27(4), 539-571. doi:10.1080/01690965.2011.555268.
Abstract
Three eye-tracking experiments investigated how phonological reductions (e.g., "puter" for "computer") modulate phonological competition. Participants listened to sentences extracted from a spontaneous speech corpus and saw four printed words: a target (e.g., "computer"), a competitor similar to the canonical form (e.g., "companion"), one similar to the reduced form (e.g., "pupil"), and an unrelated distractor. In Experiment 1, we presented canonical and reduced forms in a syllabic and in a sentence context. Listeners directed their attention to a similar degree to both competitors independent of the target's spoken form. In Experiment 2, we excluded reduced forms and presented canonical forms only. In such a listening situation, participants showed a clear preference for the "canonical form" competitor. In Experiment 3, we presented canonical forms intermixed with reduced forms in a sentence context and replicated the competition pattern of Experiment 1. These data suggest that listeners penalize acoustic mismatches less strongly when listening to reduced speech than when listening to fully articulated speech. We conclude that flexibility to adjust to speech-intrinsic factors is a key feature of the spoken word recognition system. -
Brouwer, S., Mitterer, H., & Huettig, F. (2012). Can hearing puter activate pupil? Phonological competition and the processing of reduced spoken words in spontaneous conversations. Quarterly Journal of Experimental Psychology, 65, 2193-2220. doi:10.1080/17470218.2012.693109.
Abstract
In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a “real” onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts. -
Ganushchak, L. Y., Krott, A., & Meyer, A. S. (2012). From gr8 to great: Lexical access to SMS shortcuts. Frontiers in Psychology, 3, 150. doi:10.3389/fpsyg.2012.00150.
Abstract
Many contemporary texts include shortcuts, such as cu or phones4u. The aim of this study was to investigate how the meanings of shortcuts are retrieved. A primed lexical decision paradigm was used with shortcuts and the corresponding words as primes. The target word was associatively related to the meaning of the whole prime (cu/see you – goodbye), to a component of the prime (cu/see you – look), or unrelated to the prime. In Experiment 1, primes were presented for 57 ms. For both word and shortcut primes, responses were faster to targets preceded by whole-related than by unrelated primes. No priming from component-related primes was found. In Experiment 2, the prime duration was 1000 ms. The priming effect seen in Experiment 1 was replicated. Additionally, there was priming from component-related word primes, but not from component-related shortcut primes. These results indicate that the meanings of shortcuts can be retrieved without translating them first into corresponding words. -
Haderlein, T., Moers, C., Möbius, B., & Nöth, E. (2012). Automatic rating of hoarseness by text-based cepstral and prosodic evaluation. In P. Sojka, A. Horák, I. Kopecek, & K. Pala (Eds.), Proceedings of the 15th International Conference on Text, Speech and Dialogue (TSD 2012) (pp. 573-580). Heidelberg: Springer.
Abstract
The standard for the analysis of distorted voices is perceptual rating of read-out texts or spontaneous speech. Automatic voice evaluation, however, is usually done on stable sections of sustained vowels. In this paper, text-based and established vowel-based analyses are compared with respect to their ability to measure hoarseness and its subclasses. 73 hoarse patients (48.3 ± 16.8 years) uttered the vowel /e/ and read the German version of the text "The North Wind and the Sun". Five speech therapists and physicians rated roughness, breathiness, and hoarseness according to the German RBH evaluation scheme. The best human-machine correlations were obtained for measures based on the Cepstral Peak Prominence (CPP; up to |r| = 0.73). Support Vector Regression (SVR) on CPP-based measures and prosodic features improved the results further to r ≈ 0.8 and confirmed that automatic voice evaluation should be performed on a text recording. -
Hanulikova, A., Dediu, D., Fang, Z., Basnakova, J., & Huettig, F. (2012). Individual differences in the acquisition of a complex L2 phonology: A training study. Language Learning, 62(Supplement S2), 79-109. doi:10.1111/j.1467-9922.2012.00707.x.
Abstract
Many learners of a foreign language (L2) struggle to correctly pronounce newly-learned speech sounds, yet many others achieve this with apparent ease. Here we explored how a training study of learning complex consonant clusters at the very onset of the L2 acquisition can inform us about L2 learning in general and individual differences in particular. To this end, adult Dutch native speakers were trained on Slovak words with complex consonant clusters (e.g., pstruh /pstrux/‘trout’, štvrť /ʃtvrc/ ‘quarter’) using auditory and orthographic input. In the same session following training, participants were tested on a battery of L2 perception and production tasks. The battery of L2 tests was repeated twice more with one week between each session. In the first session, an additional battery of control tests was used to test participants’ native language (L1) skills. Overall, in line with some previous research, participants showed only weak learning effects across the L2 perception tasks. However, there were considerable individual differences across all L2 tasks, which remained stable across sessions. Only two participants showed overall high L2 production performance that fell within 2 standard deviations of the mean ratings obtained for an L1 speaker. The mispronunciation detection task was the only perception task which significantly predicted production performance in the final session. We conclude by discussing several recommendations for future L2 learning studies. -
Huettig, F., Mishra, R. K., & Olivers, C. N. (2012). Mechanisms and representations of language-mediated visual attention. Frontiers in Psychology, 2, 394. doi:10.3389/fpsyg.2011.00394.
Abstract
The experimental investigation of language-mediated visual attention is a promising way to study the interaction of the cognitive systems involved in language, vision, attention, and memory. Here we highlight four challenges for a mechanistic account of this oculomotor behavior: the levels of representation at which language-derived and vision-derived representations are integrated; attentional mechanisms; types of memory; and the degree of individual and group differences. Central points in our discussion are (a) the possibility that local microcircuitries involving feedforward and feedback loops instantiate a common representational substrate of linguistic and non-linguistic information and attention; and (b) that an explicit working memory may be central to explaining interactions between language and visual attention. We conclude that a synthesis of further experimental evidence from a variety of fields of inquiry and the testing of distinct, non-student participant populations will prove to be critical. -
Janse, E. (2012). A non-auditory measure of interference predicts distraction by competing speech in older adults. Aging, Neuropsychology and Cognition, 19, 741-758. doi:10.1080/13825585.2011.652590.
Abstract
In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to what extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech. -
Janse, E., & Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Quarterly Journal of Experimental Psychology, 65, 1563-1585. doi:10.1080/17470218.2012.658822.
Abstract
We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests revealed furthermore that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning. -
Jesse, A., & Janse, E. (2012). Audiovisual benefit for recognition of speech presented with single-talker noise in older listeners. Language and Cognitive Processes, 27(7/8), 1167-1191. doi:10.1080/01690965.2011.620335.
Abstract
Older listeners are more affected than younger listeners in their recognition of speech in adverse conditions, such as when they also hear a single competing speaker. In the present study, we investigated with a speeded response task whether older listeners with various degrees of hearing loss benefit under such conditions from also seeing the speaker they intend to listen to. We also tested, at the same time, whether older adults need postperceptual processing to obtain an audiovisual benefit. When tested in a phoneme-monitoring task with single-talker noise present, older (and younger) listeners detected target phonemes more reliably and more rapidly in meaningful sentences uttered by the target speaker when they also saw the target speaker. This suggests that older adults processed audiovisual speech rapidly and efficiently enough to benefit already during spoken sentence processing. Audiovisual benefits for older adults were similar in size to those observed for younger adults in terms of response latencies, but smaller for detection accuracy. Older adults with more hearing loss showed larger audiovisual benefits. Attentional abilities predicted the size of audiovisual response time benefits in both age groups. Audiovisual benefits were found in both age groups when monitoring for the visually highly distinct phoneme /p/ and when monitoring for the visually less distinct phoneme /k/. Visual speech thus provides segmental information about the target phoneme, but also provides more global contextual information that helps both older and younger adults in this adverse listening situation. -
Konopka, A. E. (2012). Planning ahead: How recent experience with structures and words changes the scope of linguistic planning. Journal of Memory and Language, 66, 143-162. doi:10.1016/j.jml.2011.08.003.
Abstract
The scope of linguistic planning, i.e., the amount of linguistic information that speakers prepare in advance for an utterance they are about to produce, is highly variable. Distinguishing between possible sources of this variability provides a way to discriminate between production accounts that assume structurally incremental and lexically incremental sentence planning. Two picture-naming experiments evaluated changes in speakers’ planning scope as a function of experience with message structure, sentence structure, and lexical items. On target trials participants produced sentences beginning with two semantically related or unrelated objects in the same complex noun phrase. To manipulate familiarity with sentence structure, target displays were preceded by prime displays that elicited the same or different sentence structures. To manipulate ease of lexical retrieval, target sentences began either with the higher-frequency or lower-frequency member of each semantic pair. The results show that repetition of sentence structure can extend speakers’ scope of planning from one to two words in a complex noun phrase, as indexed by the presence of semantic interference in structurally primed sentences beginning with easily retrievable words. Changes in planning scope tied to experience with phrasal structures favor production accounts assuming structural planning in early sentence formulation. -
Lesage, E., Morgan, B. E., Olson, A. C., Meyer, A. S., & Miall, R. C. (2012). Cerebellar rTMS disrupts predictive language processing. Current Biology, 22, R794-R795. doi:10.1016/j.cub.2012.07.006.
Abstract
The human cerebellum plays an important role in language, amongst other cognitive and motor functions [1], but a unifying theoretical framework about cerebellar language function is lacking. In an established model of motor control, the cerebellum is seen as a predictive machine, making short-term estimations about the outcome of motor commands. This allows for flexible control, on-line correction, and coordination of movements [2]. The homogeneous cytoarchitecture of the cerebellar cortex suggests that similar computations occur throughout the structure, operating on different input signals and with different output targets [3]. Several authors have therefore argued that this 'motor' model may extend to cerebellar nonmotor functions [3], [4] and [5], and that the cerebellum may support prediction in language processing [6]. However, this hypothesis has never been directly tested. Here, we used the 'Visual World' paradigm [7], where on-line processing of spoken sentence content can be assessed by recording the latencies of listeners' eye movements towards objects mentioned. Repetitive transcranial magnetic stimulation (rTMS) was used to disrupt function in the right cerebellum, a region implicated in language [8]. After cerebellar rTMS, listeners showed delayed eye fixations to target objects predicted by sentence content, while there was no effect on eye fixations in sentences without predictable content. The prediction deficit was absent in two control groups. Our findings support the hypothesis that computational operations performed by the cerebellum may support prediction during both motor control and language processing.
Additional information: Lesage_Suppl_Information.pdf -
Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake - but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 843-847. doi:10.1037/a0029284.
Abstract
Are there individual differences in children’s prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff et al., 1987), we found that, upon hearing a sentence like “The boy eats a big cake”, two-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb, eats, and prior to hearing the word, cake. Importantly, children’s prediction skills were significantly correlated with their productive vocabulary size – Skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input while low producers did not. Furthermore, we found that children’s prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers. -
McQueen, J. M., & Huettig, F. (2012). Changing only the probability that spoken words will be distorted changes how they are recognized. Journal of the Acoustical Society of America, 131(1), 509-517. doi:10.1121/1.3664087.
Abstract
An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly-tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes. -
Meyer, A. S., Wheeldon, L. R., Van der Meulen, F., & Konopka, A. E. (2012). Effects of speech rate and practice on the allocation of visual attention in multiple object naming. Frontiers in Psychology, 3, 39. doi:10.3389/fpsyg.2012.00039.
Abstract
Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast. -
Mishra, R. K., Singh, N., Pandey, A., & Huettig, F. (2012). Spoken language-mediated anticipatory eye movements are modulated by reading ability: Evidence from Indian low and high literates. Journal of Eye Movement Research, 5(1): 3, pp. 1-10. doi:10.16910/jemr.5.1.3.
Abstract
We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low literacy group this shift of eye gaze occurred only when the target noun (i.e. "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms of how reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates. -
Roberts, L., & Meyer, A. S. (Eds.). (2012). Individual differences in second language acquisition [Special Issue]. Language Learning, 62(Supplement S2). -
Roberts, L., & Meyer, A. S. (2012). Individual differences in second language learning: Introduction. Language Learning, 62(Supplement S2), 1-4. doi:10.1111/j.1467-9922.2012.00703.x.
Abstract
First paragraph: The topic of the workshop from which this volume comes, “Individual Differences in Second Language Learning,” is timely and important for both practical and theoretical reasons. The practical reasons are obvious: While many people have some knowledge of a second or further language, there is enormous variability in how well they know these languages. Much of this variability is, of course, likely to be due to differences in the time spent studying or being immersed in the language, but even in similar learning environments learners differ greatly in how quickly they pick up a language and in their ultimate level of proficiency. -
Shao, Z., Roelofs, A., & Meyer, A. S. (2012). Sources of individual differences in the speed of naming objects and actions: The contribution of executive control. Quarterly Journal of Experimental Psychology, 65, 1927-1944. doi:10.1080/17470218.2012.670252.
Abstract
We examined the contribution of executive control to individual differences in response time (RT) for naming objects and actions. Following Miyake, Friedman, Emerson, Witzki, Howerter, and Wager (2000), executive control was assumed to include updating, shifting, and inhibiting abilities, which were assessed using operation-span, task switching, and stop-signal tasks, respectively. Study 1 showed that updating ability was significantly correlated with the mean RT of action naming, but not of object naming. This finding was replicated in Study 2 using a larger stimulus set. Inhibiting ability was significantly correlated with the mean RT of both action and object naming, whereas shifting ability was not correlated with the mean naming RTs. Ex-Gaussian analyses of the RT distributions revealed that updating ability was correlated with the distribution tail of both action and object naming, whereas inhibiting ability was correlated with the leading edge of the distribution for action naming and the tail for object naming. Shifting ability provided no independent contribution. These results indicate that the executive control abilities of updating and inhibiting contribute to the speed of naming objects and actions, although there are differences in the way and the extent to which these abilities are involved. -
Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2012). Hemispheric differences in the effects of context on vowel perception. Brain and Language, 120, 401-405. doi:10.1016/j.bandl.2011.12.012.
Abstract
Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners’ right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on laterality of target presentation, and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominately by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts. -
Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2012). Extrinsic normalization for vocal tracts depends on the signal, not on attention. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 394-397).
Abstract
When perceiving vowels, listeners adjust to speaker-specific vocal-tract characteristics (such as F1) through "extrinsic vowel normalization". This effect is observed as a shift in the location of categorization boundaries of vowel continua. Similar effects have been found with non-speech. Non-speech materials, however, have consistently led to smaller effect-sizes, perhaps because of a lack of attention to non-speech. The present study investigated this possibility. Non-speech materials that had previously been shown to elicit reduced normalization effects were tested again, with the addition of an attention manipulation. The results show that increased attention does not lead to increased normalization effects, suggesting that vowel normalization is mainly determined by bottom-up signal characteristics. -
Adank, P., & Janse, E. (2010). Comprehension of a novel accent by young and older listeners. Psychology and Aging, 25(3), 736-740. doi:10.1037/a0020054.
Abstract
The authors investigated perceptual learning of a novel accent in young and older listeners through measuring speech reception thresholds (SRTs) using speech materials spoken in a novel, unfamiliar accent. Younger and older listeners adapted to this accent, but older listeners showed poorer comprehension of the accent. Furthermore, perceptual learning differed across groups: the older listeners stopped learning after the first block, whereas younger listeners showed further improvement with longer exposure. Among the older participants, hearing acuity predicted the SRT as well as the effect of the novel accent on SRT. Finally, a measure of executive function predicted the impact of accent on SRT.
Additional information: http://supp.apa.org/psycarticles/supplemental/a0020054/a0020054_supp.html -
Berends, S., Veenstra, A., & Van Hout, A. (2010). 'Nee, ze heeft er twee': Acquisition of the Dutch quantitative 'er'. Groninger Arbeiten zur Germanistischen Linguistik, 51, 1-7. Retrieved from http://irs.ub.rug.nl/dbi/4ef4a0b3eafcb.
Abstract
We present the first study on the acquisition of the Dutch quantitative pronoun er in sentences such as de vrouw draagt er drie ‘the woman is carrying three.’ There is a large literature on Dutch children’s interpretation of pronouns and a few recent production studies, all specifically looking at 3rd person singular pronouns and the so-called Delay of Principle B effect (Coopmans & Philip, 1996; Koster, 1993; Spenader, Smits and Hendriks, 2009). However, no one has studied children’s use of quantitative er. Dutch is the only Germanic language with such a pronoun. -
Brouwer, S., Mitterer, H., & Huettig, F. (2010). Shadowing reduced speech and alignment. Journal of the Acoustical Society of America, 128(1), EL32-EL37. doi:10.1121/1.3448022.
Abstract
This study examined whether listeners align to reduced speech. Participants were asked to shadow sentences from a casual speech corpus containing canonical and reduced targets. Participants' productions showed alignment: durations of canonical targets were longer than durations of reduced targets; and participants often imitated the segment types (canonical versus reduced) in both targets. The effect sizes were similar to previous work on alignment. In addition, shadowed productions were overall longer in duration than the original stimuli and this effect was larger for reduced than canonical targets. A possible explanation for this finding is that listeners reconstruct canonical forms from reduced forms. -
Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2010). Do language-specific categories shape conceptual processing? Mandarin classifier distinctions influence eye gaze behavior, but only during linguistic processing. Journal of Cognition and Culture, 10(1/2), 39-58. doi:10.1163/156853710X497167.
Abstract
In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers - a grammatical category in the language - on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category - even when the classifier is not explicitly present (cf. Huettig & Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifier-match objects (e.g., chair) than to distractor objects. But when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing. -
Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 3, 347-374. doi:10.1080/01690960903046926.
Abstract
Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioral consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception. -
Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.
Abstract
Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion. -
Malpass, D., & Meyer, A. S. (2010). The time course of name retrieval during multiple-object naming: Evidence from extrafoveal-on-foveal effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 523-537. doi:10.1037/a0018522.
Abstract
The goal of the study was to examine whether speakers naming pairs of objects would retrieve the names of the objects in parallel or in sequence. To this end, we recorded the speakers’ eye movements and determined whether the difficulty of retrieving the name of the 2nd object affected the duration of the gazes to the 1st object. Two experiments, which differed in the spatial arrangement of the objects, showed that the speakers looked longer at the 1st object when the name of the 2nd object was easy than when it was more difficult to retrieve. Thus, the easy 2nd-object names interfered more with the processing of the 1st object than the more difficult 2nd-object names. In the 3rd experiment, the processing of the 1st object was rendered more difficult by presenting it upside down. No effect of 2nd-object difficulty on the gaze duration for the 1st object was found. These results suggest that speakers can retrieve the names of a foveated and an extrafoveal object in parallel, provided that the processing of the foveated object is not too demanding. -
Sjerps, M. J., & McQueen, J. M. (2010). The bounds on flexibility in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 36, 195-211. doi:10.1037/a0016803.
-
Telling, A. L., Kumar, S., Meyer, A. S., & Humphreys, G. W. (2010). Electrophysiological evidence of semantic interference in visual search. Journal of Cognitive Neuroscience, 22(10), 2212-2225. doi:10.1162/jocn.2009.21348.
Abstract
Visual evoked responses were monitored while participants searched for a target (e.g., bird) in a four-object display that could include a semantically related distractor (e.g., fish). The occurrence of both the target and the semantically related distractor modulated the N2pc response to the search display: The N2pc amplitude was more pronounced when the target and the distractor appeared in the same visual field, and it was less pronounced when the target and the distractor were in opposite fields, relative to when the distractor was absent. Earlier components (P1, N1) did not show any differences in activity across the different distractor conditions. The data suggest that semantic distractors influence early stages of selecting stimuli in multielement displays. -
Telling, A. L., Meyer, A. S., & Humphreys, G. W. (2010). Distracted by relatives: Effects of frontal lobe damage on semantic distraction. Brain and Cognition, 73, 203-214. doi:10.1016/j.bandc.2010.05.004.
Abstract
When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see Belke et al., 2008; Moores et al., 2003). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection. -
Veenstra, A., Berends, S., & Van Hout, A. (2010). Acquisition of object and quantitative pronouns in Dutch: Kinderen wassen 'hem' voordat ze 'er' twee meenemen. Groninger Arbeiten zur Germanistischen Linguistik, 51, 9-25.
Abstract
Despite a large literature on Dutch children’s pronoun interpretation, relatively little is known about their production. In this study we elicited pronouns in two syntactic environments: object pronouns and quantitative er (Q-er). The goal was to see how different types of pronouns develop, in particular, whether acquisition depends on their different syntactic properties. Our Dutch data add another type of language to the acquisition literature on object clitics in the Romance languages. Moreover, we present another angle on this discussion by comparing object pronouns and Q-er.