He, J. (2023). Coordination of spoken language production and comprehension: How speech production is affected by irrelevant background speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Araujo, S., Narang, V., Misra, D., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., Hervais-Adelman, A., & Huettig, F. (2023). A literacy-related color-specific deficit in rapid automatized naming: Evidence from neurotypical completely illiterate and literate adults. Journal of Experimental Psychology: General, 152(8), 2403-2409. doi:10.1037/xge0001376.
Abstract
There is a robust positive relationship between reading skills and the time to name aloud an array of letters, digits, objects, or colors as quickly as possible. A convincing and complete explanation for the direction and locus of this association remains, however, elusive. In this study we investigated rapid automatized naming (RAN) of everyday objects and basic color patches in neurotypical illiterate and literate adults. Literacy acquisition and education enhanced RAN performance for both conceptual categories, but this advantage was much larger for (abstract) colors than everyday objects. This result suggests that (i) literacy/education may be causal for serial rapid naming ability of non-alphanumeric items, and (ii) differences in the lexical quality of conceptual representations can underlie the reading-related differential RAN performance.
Additional information
supplementary text -
Bartolozzi, F. (2023). Repetita Iuvant? Studies on the role of repetition priming as a supportive mechanism during conversation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Wu, M., Bosker, H. R., & Riecke, L. (2023). Sentential contextual facilitation of auditory word processing builds up during sentence tracking. Journal of Cognitive Neuroscience, 35(8), 1262-1278. doi:10.1162/jocn_a_02007.
Abstract
While listening to meaningful speech, auditory input is processed more rapidly near the end (vs. beginning) of sentences. Although several studies have shown such word-to-word changes in auditory input processing, it is still unclear from which processing level these word-to-word dynamics originate. We investigated whether predictions derived from sentential context can result in auditory word-processing dynamics during sentence tracking. We presented healthy human participants with auditory stimuli consisting of word sequences, arranged into either predictable (coherent sentences) or less predictable (unstructured, random word sequences) 42-Hz amplitude-modulated speech, and a continuous 25-Hz amplitude-modulated distractor tone. We recorded RTs and frequency-tagged neuroelectric responses (auditory steady-state responses) to individual words at multiple temporal positions within the sentences, and quantified sentential context effects at each position while controlling for individual word characteristics (i.e., phonetics, frequency, and familiarity). We found that sentential context increasingly facilitates auditory word processing as evidenced by accelerated RTs and increased auditory steady-state responses to later-occurring words within sentences. These purely top–down contextually driven auditory word-processing dynamics occurred only when listeners focused their attention on the speech and did not transfer to the auditory processing of the concurrent distractor tone. These findings indicate that auditory word-processing dynamics during sentence tracking can originate from sentential predictions. The predictions depend on the listeners' attention to the speech, and affect only the processing of the parsed speech, not that of concurrently presented auditory streams. -
Coopmans, C. W., Mai, A., Slaats, S., Weissbart, H., & Martin, A. E. (2023). What oscillations can do for syntax depends on your theory of structure building. Nature Reviews Neuroscience, 24, 723. doi:10.1038/s41583-023-00734-5.
-
Corps, R. E., Liao, M., & Pickering, M. J. (2023). Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study. Bilingualism: Language and Cognition, 26(1), 231-243. doi:10.1017/S1366728922000499.
Abstract
Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages. -
Corps, R. E. (2023). What do we know about the mechanisms of response planning in dialog? In Psychology of Learning and Motivation (pp. 41-81). doi:10.1016/bs.plm.2023.02.002.
Abstract
During dialog, interlocutors take turns at speaking with little gap or overlap between their contributions. But language production in monolog is comparatively slow. Theories of dialog tend to agree that interlocutors manage these timing demands by planning a response early, before the current speaker reaches the end of their turn. In the first half of this chapter, I review experimental research supporting these theories. But this research also suggests that planning a response early, while simultaneously comprehending, is difficult. Does response planning need to be this difficult during dialog? In other words, is early-planning always necessary? In the second half of this chapter, I discuss research that suggests the answer to this question is no. In particular, corpora of natural conversation demonstrate that speakers do not directly respond to the immediately preceding utterance of their partner—instead, they continue an utterance they produced earlier. This parallel talk likely occurs because speakers are highly incremental and plan only part of their utterance before speaking, leading to pauses, hesitations, and disfluencies. As a result, speakers do not need to engage in extensive advance planning. Thus, laboratory studies do not provide a full picture of language production in dialog, and further research using naturalistic tasks is needed. -
Corps, R. E., & Meyer, A. S. (2023). Word frequency has similar effects in picture naming and gender decision: A failure to replicate Jescheniak and Levelt (1994). Acta Psychologica, 241: 104073. doi:10.1016/j.actpsy.2023.104073.
Abstract
Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms. We report two attempts to replicate this pattern, one with new materials, and one with Jescheniak and Levelt's original pictures. In both studies we found a strong WFE when the pictures were shown for the first time, but much weaker effects on their second and third presentation. Importantly, these patterns were seen in both the picture naming and the gender decision tasks, suggesting that either word frequency does not exclusively affect word form retrieval, or that the gender decision task does not exclusively tap lemma access.
Additional information
raw data and analysis scripts -
Corps, R. E., Yang, F., & Pickering, M. (2023). Evidence against egocentric prediction during language comprehension. Royal Society Open Science, 10(12): 231252. doi:10.1098/rsos.231252.
Abstract
Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear…) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to. -
Creemers, A. (2023). Morphological processing in spoken-word recognition. In D. Crepaldi (Ed.), Linguistic morphology in the mind and brain (pp. 50-64). New York: Routledge.
Abstract
Most psycholinguistic studies on morphological processing have examined the role of morphological structure in the visual modality. This chapter discusses morphological processing in the auditory modality, which is an area of research that has only recently received more attention. It first discusses why results in the visual modality cannot straightforwardly be applied to the processing of spoken words, stressing the importance of acknowledging potential modality effects. It then gives a brief overview of the existing research on the role of morphology in the auditory modality, for which an increasing number of studies report that listeners show sensitivity to morphological structure. Finally, the chapter highlights insights gained by looking at morphological processing not only in reading, but also in listening, and it discusses directions for future research. -
Ferreira, F., & Huettig, F. (2023). Fast and slow language processing: A window into dual-process models of cognition. [Open Peer commentary on De Neys]. Behavioral and Brain Sciences, 46: e121. doi:10.1017/S0140525X22003041.
Abstract
Our understanding of dual-process models of cognition may benefit from a consideration of language processing, as language comprehension involves fast and slow processes analogous to those used for reasoning. More specifically, De Neys's criticisms of the exclusivity assumption and the fast-to-slow switch mechanism are consistent with findings from the literature on the construction and revision of linguistic interpretations.
-
Garrido Rodriguez, G., Norcliffe, E., Brown, P., Huettig, F., & Levinson, S. C. (2023). Anticipatory processing in a verb-initial Mayan language: Eye-tracking evidence during sentence comprehension in Tseltal. Cognitive Science, 47(1): e13292. doi:10.1111/cogs.13219.
Abstract
We present a visual world eye-tracking study on Tseltal (a Mayan language) and investigate whether verbal information can be used to anticipate an upcoming referent. Basic word order in transitive sentences in Tseltal is Verb-Object-Subject (VOS). The verb is usually encountered first, making argument structure and syntactic information available at the outset, which should facilitate anticipation of the post-verbal arguments. Tseltal speakers listened to verb-initial sentences with either an object-predictive verb (e.g., ‘eat’) or a general verb (e.g., ‘look for’) (e.g., “Ya slo’/sle ta stukel on te kereme”, Is eating/is looking (for) by himself the avocado the boy/ “The boy is eating/is looking (for) an avocado by himself”) while seeing a visual display showing one potential referent (e.g., avocado) and three distractors (e.g., bag, toy car, coffee grinder). We manipulated verb type (predictive vs. general) and recorded participants' eye-movements while they listened and inspected the visual scene. Participants’ fixations to the target referent were analysed using multilevel logistic regression models. Shortly after hearing the predictive verb, participants fixated the target object before it was mentioned. In contrast, when the verb was general, fixations to the target only started to increase once the object was heard. Our results suggest that Tseltal hearers pre-activate semantic features of the grammatical object prior to its linguistic expression. This provides evidence from a verb-initial language for online incremental semantic interpretation and anticipatory processing during language comprehension. These processes are comparable to the ones identified in subject-initial languages, which is consistent with the notion that different languages follow similar universal processing principles. -
Hintz, F., Khoe, Y. H., Strauß, A., Psomakas, A. J. A., & Holler, J. (2023). Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cognitive, Affective and Behavioral Neuroscience, 23, 340-353. doi:10.3758/s13415-023-01074-8.
Abstract
In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded the electroencephalogram (EEG) from 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke slightly preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing where listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words. -
Hintz, F., Voeten, C. C., & Scharenborg, O. (2023). Recognizing non-native spoken words in background noise increases interference from the native language. Psychonomic Bulletin & Review, 30, 1549-1563. doi:10.3758/s13423-022-02233-7.
Abstract
Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition—especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in both the non-native and the native language, reflecting the degree to which both languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays containing four objects. On target-present trials, the visual referent depicting the target word was present, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor, overlapping with the English target in phonological onset (e.g., window), a Dutch competitor, overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences were masked by speech-shaped noise; the other half was presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. For target-absent trials, we observed that the likelihood for fixation biases towards the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and in the task-irrelevant native (Dutch) language. The latter reflects stronger interference of one’s native language during non-native spoken-word recognition under adverse conditions.
Additional information
table 2 target-absent items -
Huettig, F., Voeten, C. C., Pascual, E., Liang, J., & Hintz, F. (2023). Do autistic children differ in language-mediated prediction? Cognition, 239: 105571. doi:10.1016/j.cognition.2023.105571.
Abstract
Prediction appears to be an important characteristic of the human mind. It has also been suggested that prediction is a core difference of autistic children. Past research exploring language-mediated anticipatory eye movements in autistic children, however, has been somewhat contradictory, with some studies finding normal anticipatory processing in autistic children with low levels of autistic traits but others observing weaker prediction effects in autistic children with less receptive language skills. Here we investigated language-mediated anticipatory eye movements in young children who differed in the severity of their level of autistic traits and were in professional institutional care in Hangzhou, China. We chose the same spoken sentences (translated into Mandarin Chinese) and visual stimuli as a previous study which observed robust prediction effects in young children (Mani & Huettig, 2012) and included a control group of typically-developing children. Typically developing but not autistic children showed robust prediction effects. Most interestingly, autistic children with lower communication, motor, and (adaptive) behavior scores exhibited both less predictive and non-predictive visual attention behavior. Our results raise the possibility that differences in language-mediated anticipatory eye movements in autistic children with higher levels of autistic traits may be differences in visual attention in disguise, a hypothesis that needs further investigation.
Additional information
Raw data and analysis code can be found on OSF -
Huettig, F., & Ferreira, F. (2023). The myth of normal reading. Perspectives on Psychological Science, 18(4), 863-870. doi:10.1177/17456916221127226.
Abstract
We argue that the educational and psychological sciences must embrace the diversity of reading rather than chase the phantom of normal reading behavior. We critically discuss the research practice of asking participants in experiments to read “normally”. We then draw attention to the large cross-cultural and linguistic diversity around the world and consider the enormous diversity of reading situations and goals. Finally, we observe that people bring a huge diversity of brains and experiences to the reading task. This leads to certain implications. First, there are important lessons for how to conduct psycholinguistic experiments. Second, we need to move beyond Anglo-centric reading research and produce models of reading that reflect the large cross-cultural diversity of languages and types of writing systems. Third, we must acknowledge that there are multiple ways of reading and reasons for reading, and none of them is normal or better or a “gold standard”. Finally, we must stop stigmatizing individuals who read differently and for different reasons, and there should be increased focus on teaching the ability to extract information relevant to the person’s goals. What is important is not how well people decode written language and how fast people read but what people comprehend given their own stated goals. -
Hustá, C., Nieuwland, M. S., & Meyer, A. S. (2023). Effects of picture naming and categorization on concurrent comprehension: Evidence from the N400. Collabra: Psychology, 9(1): 88129. doi:10.1525/collabra.88129.
Abstract
In conversations, interlocutors concurrently perform two related processes: speech comprehension and speech planning. We investigated effects of speech planning on comprehension using EEG. Dutch speakers listened to sentences that ended with expected or unexpected target words. In addition, a picture was presented two seconds after target onset (Experiment 1) or 50 ms before target onset (Experiment 2). Participants’ task was to name the picture or to stay quiet depending on the picture category. In Experiment 1, we found a strong N400 effect in response to unexpected compared to expected target words. Importantly, this N400 effect was reduced in Experiment 2 compared to Experiment 1. Unexpectedly, the N400 effect was not smaller in the naming compared to categorization condition. This indicates that conceptual preparation or the decision whether to speak (taking place in both task conditions of Experiment 2) rather than processes specific to word planning interfere with comprehension.
Additional information
EEG data, experimental scripts, and analysis scripts -
Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.
Abstract
Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking. -
Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.
Abstract
While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition. -
Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.
Abstract
Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects. -
Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2023). Syllable rate drives rate normalization, but is not the only factor. In R. Skarnitzl, & J. Volín (Eds.), Proceedings of the 20th International Congress of the Phonetic Sciences (ICPhS 2023) (pp. 56-60). Prague: Guarant International.
Abstract
Speech is perceived relative to the speech rate in the context. It is unclear, however, what information listeners use to compute speech rate. The present study examines whether listeners use the number of syllables per unit time (i.e., syllable rate) as a measure of speech rate, as indexed by subsequent vowel perception. We ran two rate-normalization experiments in which participants heard duration-matched word lists that contained either monosyllabic vs. bisyllabic words (Experiment 1), or monosyllabic vs. trisyllabic pseudowords (Experiment 2). The participants’ task was to categorize an /ɑ-aː/ continuum that followed the word lists. The monosyllabic condition was perceived as slower (i.e., fewer /aː/ responses) than the bisyllabic and trisyllabic conditions. However, no difference was observed between bisyllabic and trisyllabic contexts. Therefore, while syllable rate is used in perceiving speech rate, other factors, such as fast speech processes, mean F0, and intensity, must also influence rate normalization. -
Severijnen, G. G. A., Di Dona, G., Bosker, H. R., & McQueen, J. M. (2023). Tracking talker-specific cues to lexical stress: Evidence from perceptual learning. Journal of Experimental Psychology: Human Perception and Performance, 49(4), 549-565. doi:10.1037/xhp0001105.
Abstract
When recognizing spoken words, listeners are confronted by variability in the speech signal caused by talker differences. Previous research has focused on segmental talker variability; less is known about how suprasegmental variability is handled. Here we investigated the use of perceptual learning to deal with between-talker differences in lexical stress. Two groups of participants heard Dutch minimal stress pairs (e.g., VOORnaam vs. voorNAAM, “first name” vs. “respectable”) spoken by two male talkers. Group 1 heard Talker 1 use only F0 to signal stress (intensity and duration values were ambiguous), while Talker 2 used only intensity (F0 and duration were ambiguous). Group 2 heard the reverse talker-cue mappings. After training, participants were tested on words from both talkers containing conflicting stress cues (“mixed items”; e.g., one spoken by Talker 1 with F0 signaling initial stress and intensity signaling final stress). We found that listeners used previously learned information about which talker used which cue to interpret the mixed items. For example, the mixed item described above tended to be interpreted as having initial stress by Group 1 but as having final stress by Group 2. This demonstrates that listeners learn how individual talkers signal stress and use that knowledge in spoken-word recognition.
Additional information
XHP-2022-2184_Supplemental_materials_xhp0001105.docx -
Slaats, S., Weissbart, H., Schoffelen, J.-M., Meyer, A. S., & Martin, A. E. (2023). Delta-band neural responses to individual words are modulated by sentence processing. The Journal of Neuroscience, 43(26), 4867-4883. doi:10.1523/JNEUROSCI.0964-22.2023.
Abstract
To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step towards understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency), from responses to sensory- and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition around 100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study speak to how the neural representation of words is affected by structural context, and as such provide insight into how the brain instantiates compositionality in language. -
Tkalcec, A., Bierlein, M., Seeger‐Schneider, G., Walitza, S., Jenny, B., Menks, W. M., Fehlbaum, L. V., Borbas, R., Cole, D. M., Raschle, N., Herbrecht, E., Stadler, C., & Cubillo, A. (2023). Empathy deficits, callous‐unemotional traits and structural underpinnings in autism spectrum disorder and conduct disorder youth. Autism Research, 16(10), 1946-1962. doi:10.1002/aur.2993.
Abstract
Distinct empathy deficits are often described in patients with conduct disorder (CD) and autism spectrum disorder (ASD) yet their neural underpinnings and the influence of comorbid Callous-Unemotional (CU) traits are unclear. This study compares the cognitive (CE) and affective empathy (AE) abilities of youth with CD and ASD, their potential neuroanatomical correlates, and the influence of CU traits on empathy. Adolescents and parents/caregivers completed empathy questionnaires (N = 148 adolescents, mean age = 15.16 years) and T1 weighted images were obtained from a subsample (N = 130). Group differences in empathy and the influence of CU traits were investigated using Bayesian analyses and Voxel-Based Morphometry with Threshold-Free Cluster Enhancement focusing on regions involved in AE (insula, amygdala, inferior frontal gyrus and cingulate cortex) and CE processes (ventromedial prefrontal cortex, temporoparietal junction, superior temporal gyrus, and precuneus). The ASD group showed lower parent-reported AE and CE scores and lower self-reported CE scores while the CD group showed lower parent-reported CE scores than controls. When accounting for the influence of CU traits no AE deficits in ASD and CE deficits in CD were found, but CE deficits in ASD remained. Across all participants, CU traits were negatively associated with gray matter volumes in anterior cingulate which extends into the mid cingulate, ventromedial prefrontal cortex, and precuneus. Thus, although co-occurring CU traits have been linked to global empathy deficits in reports and underlying brain structures, its influence on empathy aspects might be disorder-specific. Investigating the subdimensions of empathy may therefore help to identify disorder-specific empathy deficits. -
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). No evidence for convergence to sub-phonemic F2 shifts in shadowing. In R. Skarnitzl, & J. Volín (Eds.), Proceedings of the 20th International Congress of the Phonetic Sciences (ICPhS 2023) (pp. 96-100). Prague: Guarant International.
Abstract
Over the course of a conversation, interlocutors sound more and more like each other in a process called convergence. However, the automaticity and grain size of convergence are not well established. This study therefore examined whether female native Dutch speakers converge to large yet sub-phonemic shifts in the F2 of the vowel /e/. Participants first performed a short reading task to establish baseline F2s for the vowel /e/, then shadowed 120 target words (alongside 360 fillers) which contained one instance of a manipulated vowel /e/ where the F2 had been shifted down to that of the vowel /ø/. Consistent exposure to large (sub-phonemic) downward shifts in F2 did not result in convergence. The results raise issues for theories which view convergence as a product of automatic integration between perception and production. -
van der Burght, C. L., Numssen, O., Schlaak, B., Goucha, T., & Hartwigsen, G. (2023). Differential contributions of inferior frontal gyrus subregions to sentence processing guided by intonation. Human Brain Mapping, 44(2), 585-598. doi:10.1002/hbm.26086.
Abstract
Auditory sentence comprehension involves processing content (semantics), grammar (syntax), and intonation (prosody). The left inferior frontal gyrus (IFG) is involved in sentence comprehension guided by these different cues, with neuroimaging studies preferentially locating syntactic and semantic processing in separate IFG subregions. However, this regional specialisation and its functional relevance have yet to be confirmed. This study probed the role of the posterior IFG (pIFG) for syntactic processing and the anterior IFG (aIFG) for semantic processing with repetitive transcranial magnetic stimulation (rTMS) in a task that required the interpretation of the sentence’s prosodic realisation. Healthy participants performed a sentence completion task with syntactic and semantic decisions, while receiving 10 Hz rTMS over either left aIFG, pIFG, or vertex (control). Initial behavioural analyses showed an inhibitory effect on accuracy without task-specificity. However, electrical field simulations revealed differential effects for both subregions. In the aIFG, stronger stimulation led to slower semantic processing, with no effect of pIFG stimulation. In contrast, we found a facilitatory effect on syntactic processing in both aIFG and pIFG, where higher stimulation strength was related to faster responses. Our results provide first evidence for the functional relevance of left aIFG in semantic processing guided by intonation. The stimulation effect on syntactic responses emphasises the importance of the IFG for syntax processing, without supporting the hypothesis of a pIFG-specific involvement. Together, the results support the notion of functionally specialised IFG subregions for diverse but fundamental cues for language processing.
Additional information
supplementary information -
van der Burght, C. L., Friederici, A. D., Maran, M., Papitto, G., Pyatigorskaya, E., Schroen, J., Trettenbrein, P., & Zaccarella, E. (2023). Cleaning up the brickyard: How theory and methodology shape experiments in cognitive neuroscience of language. Journal of Cognitive Neuroscience, 35(12), 2067-2088. doi:10.1162/jocn_a_02058.
Abstract
The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining “language” in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement amongst cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how the conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which their validity rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modelling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work. -
Zormpa, E., Meyer, A. S., & Brehm, L. (2023). In conversation, answers are remembered better than the questions themselves. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(12), 1971-1988. doi:10.1037/xlm0001292.
Abstract
Language is used in communicative contexts to identify and successfully transmit new information that should be later remembered. In three studies, we used question–answer pairs, a naturalistic device for focusing information, to examine how properties of conversations inform later item memory. In Experiment 1, participants viewed three pictures while listening to a recorded question–answer exchange between two people about the locations of two of the displayed pictures. In a memory recognition test conducted online a day later, participants recognized the names of pictures that served as answers more accurately than the names of pictures that appeared in questions. This suggests that this type of focus indeed boosts memory. In Experiment 2, participants listened to the same items embedded in declarative sentences. There was a reduced memory benefit for the second item, confirming the role of linguistic focus on later memory beyond a simple serial-position effect. In Experiment 3, two participants asked and answered the same questions about objects in a dialogue. Here, answers continued to receive a memory benefit, and this focus effect was accentuated by language production such that information-seekers remembered the answers to their questions better than information-givers remembered the questions they had been asked. Combined, these studies show how people’s memory for conversation is modulated by the referential status of the items mentioned and by the speaker roles of the conversation participants. -
Acheson, D. J., Ganushchak, L. Y., Christoffels, I. K., & Hagoort, P. (2012). Conflict monitoring in speech production: Physiological evidence from bilingual picture naming. Brain and Language, 123, 131 -136. doi:10.1016/j.bandl.2012.08.008.
Abstract
Self-monitoring in production is critical to correct performance, and recent accounts suggest that such monitoring may occur via the detection of response conflict. The error-related negativity (ERN) is a response-locked event-related potential (ERP) that is sensitive to response conflict. The present study examines whether response conflict is detected in production by exploring a situation where multiple outputs are activated: the bilingual naming of form-related equivalents (i.e. cognates). ERPs were recorded while German-Dutch bilinguals named pictures in their first and second languages. Although cognates were named faster than non-cognates, response conflict was evident in the form of a larger ERN-like response for cognates and adaptation effects on naming, as the magnitude of cognate facilitation was smaller following the naming of cognates. Given that signals of response conflict are present during correct naming, the present results suggest that such conflict may serve as a reliable signal for monitoring in speech production. -
Benders, T., Escudero, P., & Sjerps, M. J. (2012). The interrelation between acoustic context effects and available response categories in speech sound categorization. Journal of the Acoustical Society of America, 131, 3079-3087. doi:10.1121/1.3688512.
Abstract
In an investigation of contextual influences on sound categorization, 64 Peruvian Spanish listeners categorized vowels on an /i/ to /e/ continuum. First, to measure the influence of the stimulus range (broad acoustic context) and the preceding stimuli (local acoustic context), listeners were presented with different subsets of the Spanish /i/-/e/ continuum in separate blocks. Second, the influence of the number of response categories was measured by presenting half of the participants with /i/ and /e/ as responses, and the other half with /i/, /e/, /a/, /o/, and /u/. The results showed that the perceptual category boundary between /i/ and /e/ shifted depending on the stimulus range and that the formant values of locally preceding items had a contrastive influence. Categorization was less susceptible to broad and local acoustic context effects, however, when listeners were presented with five rather than two response options. Vowel categorization depends not only on the acoustic properties of the target stimulus, but also on its broad and local acoustic context. The influence of such context is in turn affected by the number of internal referents that are available to the listener in a task. -
Brouwer, S., Mitterer, H., & Huettig, F. (2012). Speech reductions change the dynamics of competition during spoken word recognition. Language and Cognitive Processes, 27(4), 539-571. doi:10.1080/01690965.2011.555268.
Abstract
Three eye-tracking experiments investigated how phonological reductions (e.g., ‘‘puter’’ for ‘‘computer’’) modulate phonological competition. Participants listened to sentences extracted from a spontaneous speech corpus and saw four printed words: a target (e.g., ‘‘computer’’), a competitor similar to the canonical form (e.g., ‘‘companion’’), one similar to the reduced form (e.g., ‘‘pupil’’), and an unrelated distractor. In Experiment 1, we presented canonical and reduced forms in a syllabic and in a sentence context. Listeners directed their attention to a similar degree to both competitors independent of the target’s spoken form. In Experiment 2, we excluded reduced forms and presented canonical forms only. In such a listening situation, participants showed a clear preference for the ‘‘canonical form’’ competitor. In Experiment 3, we presented canonical forms intermixed with reduced forms in a sentence context and replicated the competition pattern of Experiment 1. These data suggest that listeners penalize acoustic mismatches less strongly when listening to reduced speech than when listening to fully articulated speech. We conclude that flexibility to adjust to speech-intrinsic factors is a key feature of the spoken word recognition system. -
Brouwer, S., Mitterer, H., & Huettig, F. (2012). Can hearing puter activate pupil? Phonological competition and the processing of reduced spoken words in spontaneous conversations. Quarterly Journal of Experimental Psychology, 65, 2193-2220. doi:10.1080/17470218.2012.693109.
Abstract
In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a “real” onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts. -
Ganushchak, L. Y., Krott, A., & Meyer, A. S. (2012). From gr8 to great: Lexical access to SMS shortcuts. Frontiers in Psychology, 3, 150. doi:10.3389/fpsyg.2012.00150.
Abstract
Many contemporary texts include shortcuts, such as cu or phones4u. The aim of this study was to investigate how the meanings of shortcuts are retrieved. A primed lexical decision paradigm was used with shortcuts and the corresponding words as primes. The target word was associatively related to the meaning of the whole prime (cu/see you – goodbye), to a component of the prime (cu/see you – look), or unrelated to the prime. In Experiment 1, primes were presented for 57 ms. For both word and shortcut primes, responses were faster to targets preceded by whole-related than by unrelated primes. No priming from component-related primes was found. In Experiment 2, the prime duration was 1000 ms. The priming effect seen in Experiment 1 was replicated. Additionally, there was priming from component-related word primes, but not from component-related shortcut primes. These results indicate that the meanings of shortcuts can be retrieved without translating them first into corresponding words. -
Haderlein, T., Moers, C., Möbius, B., & Nöth, E. (2012). Automatic rating of hoarseness by text-based cepstral and prosodic evaluation. In P. Sojka, A. Horák, I. Kopecek, & K. Pala (Eds.), Proceedings of the 15th International Conference on Text, Speech and Dialogue (TSD 2012) (pp. 573-580). Heidelberg: Springer.
Abstract
The standard for the analysis of distorted voices is perceptual rating of read-out texts or spontaneous speech. Automatic voice evaluation, however, is usually done on stable sections of sustained vowels. In this paper, text-based and established vowel-based analysis are compared with respect to their ability to measure hoarseness and its subclasses. 73 hoarse patients (48.3±16.8 years) uttered the vowel /e/ and read the German version of the text “The North Wind and the Sun”. Five speech therapists and physicians rated roughness, breathiness, and hoarseness according to the German RBH evaluation scheme. The best human-machine correlations were obtained for measures based on the Cepstral Peak Prominence (CPP; up to |r | = 0.73). Support Vector Regression (SVR) on CPP-based measures and prosodic features improved the results further to r ≈0.8 and confirmed that automatic voice evaluation should be performed on a text recording. -
Hanulikova, A., Dediu, D., Fang, Z., Basnakova, J., & Huettig, F. (2012). Individual differences in the acquisition of a complex L2 phonology: A training study. Language Learning, 62(Supplement S2), 79-109. doi:10.1111/j.1467-9922.2012.00707.x.
Abstract
Many learners of a foreign language (L2) struggle to correctly pronounce newly-learned speech sounds, yet many others achieve this with apparent ease. Here we explored how a training study of learning complex consonant clusters at the very onset of the L2 acquisition can inform us about L2 learning in general and individual differences in particular. To this end, adult Dutch native speakers were trained on Slovak words with complex consonant clusters (e.g., pstruh /pstrux/‘trout’, štvrť /ʃtvrc/ ‘quarter’) using auditory and orthographic input. In the same session following training, participants were tested on a battery of L2 perception and production tasks. The battery of L2 tests was repeated twice more with one week between each session. In the first session, an additional battery of control tests was used to test participants’ native language (L1) skills. Overall, in line with some previous research, participants showed only weak learning effects across the L2 perception tasks. However, there were considerable individual differences across all L2 tasks, which remained stable across sessions. Only two participants showed overall high L2 production performance that fell within 2 standard deviations of the mean ratings obtained for an L1 speaker. The mispronunciation detection task was the only perception task which significantly predicted production performance in the final session. We conclude by discussing several recommendations for future L2 learning studies. -
Huettig, F., Mishra, R. K., & Olivers, C. N. (2012). Mechanisms and representations of language-mediated visual attention. Frontiers in Psychology, 2, 394. doi:10.3389/fpsyg.2011.00394.
Abstract
The experimental investigation of language-mediated visual attention is a promising way to study the interaction of the cognitive systems involved in language, vision, attention, and memory. Here we highlight four challenges for a mechanistic account of this oculomotor behavior: the levels of representation at which language-derived and vision-derived representations are integrated; attentional mechanisms; types of memory; and the degree of individual and group differences. Central points in our discussion are (a) the possibility that local microcircuitries involving feedforward and feedback loops instantiate a common representational substrate of linguistic and non-linguistic information and attention; and (b) that an explicit working memory may be central to explaining interactions between language and visual attention. We conclude that a synthesis of further experimental evidence from a variety of fields of inquiry and the testing of distinct, non-student, participant populations will prove to be critical. -
Janse, E. (2012). A non-auditory measure of interference predicts distraction by competing speech in older adults. Aging, Neuropsychology and Cognition, 19, 741-758. doi:10.1080/13825585.2011.652590.
Abstract
In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to which extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech. -
Janse, E., & Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Quarterly Journal of Experimental Psychology, 65, 1563-1585. doi:10.1080/17470218.2012.658822.
Abstract
We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests revealed furthermore that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning. -
Jesse, A., & Janse, E. (2012). Audiovisual benefit for recognition of speech presented with single-talker noise in older listeners. Language and Cognitive Processes, 27(7/8), 1167-1191. doi:10.1080/01690965.2011.620335.
Abstract
Older listeners are more affected than younger listeners in their recognition of speech in adverse conditions, such as when they also hear a single-competing speaker. In the present study, we investigated with a speeded response task whether older listeners with various degrees of hearing loss benefit under such conditions from also seeing the speaker they intend to listen to. We also tested, at the same time, whether older adults need postperceptual processing to obtain an audiovisual benefit. When tested in a phoneme-monitoring task with single-talker noise present, older (and younger) listeners detected target phonemes more reliably and more rapidly in meaningful sentences uttered by the target speaker when they also saw the target speaker. This suggests that older adults processed audiovisual speech rapidly and efficiently enough to benefit already during spoken sentence processing. Audiovisual benefits for older adults were similar in size to those observed for younger adults in terms of response latencies, but smaller for detection accuracy. Older adults with more hearing loss showed larger audiovisual benefits. Attentional abilities predicted the size of audiovisual response time benefits in both age groups. Audiovisual benefits were found in both age groups when monitoring for the visually highly distinct phoneme /p/ and when monitoring for the visually less distinct phoneme /k/. Visual speech thus provides segmental information about the target phoneme, but also provides more global contextual information that helps both older and younger adults in this adverse listening situation. -
Konopka, A. E. (2012). Planning ahead: How recent experience with structures and words changes the scope of linguistic planning. Journal of Memory and Language, 66, 143-162. doi:10.1016/j.jml.2011.08.003.
Abstract
The scope of linguistic planning, i.e., the amount of linguistic information that speakers prepare in advance for an utterance they are about to produce, is highly variable. Distinguishing between possible sources of this variability provides a way to discriminate between production accounts that assume structurally incremental and lexically incremental sentence planning. Two picture-naming experiments evaluated changes in speakers’ planning scope as a function of experience with message structure, sentence structure, and lexical items. On target trials participants produced sentences beginning with two semantically related or unrelated objects in the same complex noun phrase. To manipulate familiarity with sentence structure, target displays were preceded by prime displays that elicited the same or different sentence structures. To manipulate ease of lexical retrieval, target sentences began either with the higher-frequency or lower-frequency member of each semantic pair. The results show that repetition of sentence structure can extend speakers’ scope of planning from one to two words in a complex noun phrase, as indexed by the presence of semantic interference in structurally primed sentences beginning with easily retrievable words. Changes in planning scope tied to experience with phrasal structures favor production accounts assuming structural planning in early sentence formulation. -
Lesage, E., Morgan, B. E., Olson, A. C., Meyer, A. S., & Miall, R. C. (2012). Cerebellar rTMS disrupts predictive language processing. Current Biology, 22, R794-R795. doi:10.1016/j.cub.2012.07.006.
Abstract
The human cerebellum plays an important role in language, amongst other cognitive and motor functions [1], but a unifying theoretical framework about cerebellar language function is lacking. In an established model of motor control, the cerebellum is seen as a predictive machine, making short-term estimations about the outcome of motor commands. This allows for flexible control, on-line correction, and coordination of movements [2]. The homogeneous cytoarchitecture of the cerebellar cortex suggests that similar computations occur throughout the structure, operating on different input signals and with different output targets [3]. Several authors have therefore argued that this ‘motor’ model may extend to cerebellar nonmotor functions [3], [4] and [5], and that the cerebellum may support prediction in language processing [6]. However, this hypothesis has never been directly tested. Here, we used the ‘Visual World’ paradigm [7], where on-line processing of spoken sentence content can be assessed by recording the latencies of listeners' eye movements towards objects mentioned. Repetitive transcranial magnetic stimulation (rTMS) was used to disrupt function in the right cerebellum, a region implicated in language [8]. After cerebellar rTMS, listeners showed delayed eye fixations to target objects predicted by sentence content, while there was no effect on eye fixations in sentences without predictable content. The prediction deficit was absent in two control groups. Our findings support the hypothesis that computational operations performed by the cerebellum may support prediction during both motor control and language processing.
Additional information
Lesage_Suppl_Information.pdf -
Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake - but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 843-847. doi:10.1037/a0029284.
Abstract
Are there individual differences in children’s prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff et al., 1987), we found that, upon hearing a sentence like “The boy eats a big cake”, two-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb, eats, and prior to hearing the word, cake. Importantly, children’s prediction skills were significantly correlated with their productive vocabulary size – Skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input while low producers did not. Furthermore, we found that children’s prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers. -
McQueen, J. M., & Huettig, F. (2012). Changing only the probability that spoken words will be distorted changes how they are recognized. Journal of the Acoustical Society of America, 131(1), 509-517. doi:10.1121/1.3664087.
Abstract
An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly-tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes. -
Meyer, A. S., Wheeldon, L. R., Van der Meulen, F., & Konopka, A. E. (2012). Effects of speech rate and practice on the allocation of visual attention in multiple object naming. Frontiers in Psychology, 3, 39. doi:10.3389/fpsyg.2012.00039.
Abstract
Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast. -
Mishra, R. K., Singh, N., Pandey, A., & Huettig, F. (2012). Spoken language-mediated anticipatory eye movements are modulated by reading ability: Evidence from Indian low and high literates. Journal of Eye Movement Research, 5(1): 3, pp. 1-10. doi:10.16910/jemr.5.1.3.
Abstract
We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low literacy group this shift of eye gaze occurred only when the target noun (i.e. "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms of how reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates. -
Roberts, L., & Meyer, A. S. (Eds.). (2012). Individual differences in second language acquisition [Special Issue]. Language Learning, 62(Supplement S2). -
Roberts, L., & Meyer, A. S. (2012). Individual differences in second language learning: Introduction. Language Learning, 62(Supplement S2), 1-4. doi:10.1111/j.1467-9922.2012.00703.x.
Abstract
First paragraph: The topic of the workshop from which this volume comes, “Individual Differences in Second Language Learning,” is timely and important for both practical and theoretical reasons. The practical reasons are obvious: While many people have some knowledge of a second or further language, there is enormous variability in how well they know these languages. Much of this variability is, of course, likely to be due to differences in the time spent studying or being immersed in the language, but even in similar learning environments learners differ greatly in how quickly they pick up a language and in their ultimate level of proficiency. -
Shao, Z., Roelofs, A., & Meyer, A. S. (2012). Sources of individual differences in the speed of naming objects and actions: The contribution of executive control. Quarterly Journal of Experimental Psychology, 65, 1927-1944. doi:10.1080/17470218.2012.670252.
Abstract
We examined the contribution of executive control to individual differences in response time (RT) for naming objects and actions. Following Miyake, Friedman, Emerson, Witzki, Howerter, and Wager (2000), executive control was assumed to include updating, shifting, and inhibiting abilities, which were assessed using operation-span, task-switching, and stop-signal tasks, respectively. Study 1 showed that updating ability was significantly correlated with the mean RT of action naming, but not of object naming. This finding was replicated in Study 2 using a larger stimulus set. Inhibiting ability was significantly correlated with the mean RT of both action and object naming, whereas shifting ability was not correlated with the mean naming RTs. Ex-Gaussian analyses of the RT distributions revealed that updating ability was correlated with the distribution tail of both action and object naming, whereas inhibiting ability was correlated with the leading edge of the distribution for action naming and the tail for object naming. Shifting ability provided no independent contribution. These results indicate that the executive control abilities of updating and inhibiting contribute to the speed of naming objects and actions, although there are differences in the way and the extent to which these abilities are involved. -
Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2012). Hemispheric differences in the effects of context on vowel perception. Brain and Language, 120, 401-405. doi:10.1016/j.bandl.2011.12.012.
Abstract
Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners’ right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on laterality of target presentation, and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominately by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts. -
Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2012). Extrinsic normalization for vocal tracts depends on the signal, not on attention. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 394-397).
Abstract
When perceiving vowels, listeners adjust to speaker-specific vocal-tract characteristics (such as F1) through "extrinsic vowel normalization". This effect is observed as a shift in the location of categorization boundaries of vowel continua. Similar effects have been found with non-speech. Non-speech materials, however, have consistently led to smaller effect-sizes, perhaps because of a lack of attention to non-speech. The present study investigated this possibility. Non-speech materials that had previously been shown to elicit reduced normalization effects were tested again, with the addition of an attention manipulation. The results show that increased attention does not lead to increased normalization effects, suggesting that vowel normalization is mainly determined by bottom-up signal characteristics.