Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.
Abstract
The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can have influences at a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension. -
Hagoort, P. (2013). MUC (Memory, Unification, Control) and beyond. Frontiers in Psychology, 4: 416. doi:10.3389/fpsyg.2013.00416.
Abstract
A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension of the model beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content. It is shown that this requires the dynamic interaction between multiple brain regions. -
Hagoort, P., & Poeppel, D. (2013). The infrastructure of the language-ready brain. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 233-255). Cambridge, MA: MIT Press.
Abstract
This chapter sketches in very general terms the cognitive architecture of both language comprehension and production, as well as the neurobiological infrastructure that makes the human brain ready for language. Focus is on spoken language, since that compares most directly to processing music. It is worth bearing in mind that humans can also interface with language as a cognitive system using sign and text (visual) as well as Braille (tactile); that is to say, the system can connect with input/output processes in any sensory modality. Language processing consists of a complex and nested set of subroutines to get from sound to meaning (in comprehension) or meaning to sound (in production), with remarkable speed and accuracy. The first section outlines a selection of the major constituent operations, from fractionating the input into manageable units to combining and unifying information in the construction of meaning. The next section addresses the neurobiological infrastructure hypothesized to form the basis for language processing. Principal insights are summarized by building on the notion of “brain networks” for speech–sound processing, syntactic processing, and the construction of meaning, bearing in mind that such a neat three-way subdivision overlooks important overlap and shared mechanisms in the neural architecture subserving language processing. Finally, in keeping with the spirit of the volume, some possible relations are highlighted between language and music that arise from the infrastructure developed here. Our characterization of language and its neurobiological foundations is necessarily selective and brief. Our aim is to identify for the reader critical questions that require an answer to have a plausible cognitive neuroscience of language processing. -
Hagoort, P., & Meyer, A. S. (2013). What belongs together goes together: the speaker-hearer perspective. A commentary on MacDonald's PDC account. Frontiers in Psychology, 4: 228. doi:10.3389/fpsyg.2013.00228.
Abstract
First paragraph:
MacDonald (2013) proposes that distributional properties of language and processing biases in language comprehension can to a large extent be attributed to consequences of the language production process. In essence, the account is derived from the principle of least effort that was formulated by Zipf, among others (Zipf, 1949; Levelt, 2013). However, in Zipf's view the outcome of the least effort principle was a compromise between least effort for the speaker and least effort for the listener, whereas MacDonald puts most of the burden on the production process. -
Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.
Abstract
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message. -
Kooijman, V., Junge, C., Johnson, E. K., Hagoort, P., & Cutler, A. (2013). Predictive brain signals of linguistic development. Frontiers in Psychology, 4: 25. doi:10.3389/fpsyg.2013.00025.
Abstract
The ability to extract word forms from continuous speech is a prerequisite for constructing a vocabulary and emerges in the first year of life. Electrophysiological (ERP) studies of speech segmentation by 9- to 12-month-old listeners in several languages have found a left-localized negativity linked to word onset as a marker of word detection. We report an ERP study showing significant evidence of speech segmentation in Dutch-learning 7-month-olds. In contrast to the left-localized negative effect reported with older infants, the observed overall mean effect had a positive polarity. Inspection of individual results revealed two participant sub-groups: a majority showing a positive-going response, and a minority showing the left negativity observed in older age groups. We retested participants at age three, on vocabulary comprehension and word and sentence production. On every test, children who at 7 months had shown the negativity associated with segmentation of words from speech outperformed those who had produced positive-going brain responses to the same input. The earlier that infants show the left-localized brain responses typically indicating detection of words in speech, the better their early childhood language skills. -
Kristensen, L. B., Wang, L., Petersson, K. M., & Hagoort, P. (2013). The interface between language and attention: Prosodic focus marking recruits a general attention network in spoken language comprehension. Cerebral Cortex, 23, 1836-1848. doi:10.1093/cercor/bhs164.
Abstract
In spoken language, pitch accent can mark certain information as focus, whereby more attentional resources are allocated to the focused information. Using functional magnetic resonance imaging, this study examined whether pitch accent, used for marking focus, recruited general attention networks during sentence comprehension. In a language task, we independently manipulated the prosody and semantic/pragmatic congruence of sentences. We found that semantic/pragmatic processing affected bilateral inferior and middle frontal gyrus. The prosody manipulation showed bilateral involvement of the superior/inferior parietal cortex, superior and middle temporal cortex, as well as inferior, middle, and posterior parts of the frontal cortex. We compared these regions with attention networks localized in an auditory spatial attention task. Both tasks activated bilateral superior/inferior parietal cortex, superior temporal cortex, and left precentral cortex. Furthermore, an interaction between prosody and congruence was observed in bilateral inferior parietal regions: for incongruent sentences, but not for congruent ones, there was a larger activation if the incongruent word carried a pitch accent, than if it did not. The common activations between the language task and the spatial attention task demonstrate that pitch accent activates a domain general attention network, which is sensitive to semantic/pragmatic aspects of language. Therefore, attention and language comprehension are highly interactive.
Additional information
Kirstensen_Cer_Cor_Suppl_Mat.doc -
Meyer, A. S., & Hagoort, P. (2013). What does it mean to predict one's own utterances? [Commentary on Pickering & Garrod]. Behavioral and Brain Sciences, 36, 367-368. doi:10.1017/S0140525X12002786.
Abstract
Many authors have recently highlighted the importance of prediction for language comprehension. Pickering & Garrod (P&G) are the first to propose a central role for prediction in language production. This is an intriguing idea, but it is not clear what it means for speakers to predict their own utterances, and how prediction during production can be empirically distinguished from production proper. -
Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.
Abstract
In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
Additional information
http://mindmodeling.org/cogsci2013/papers/0219/index.html -
Segaert, K., Kempen, G., Petersson, K. M., & Hagoort, P. (2013). Syntactic priming and the lexical boost effect during sentence production and sentence comprehension: An fMRI study. Brain and Language, 124, 174-183. doi:10.1016/j.bandl.2012.12.003.
Abstract
Behavioral syntactic priming effects during sentence comprehension are typically observed only if both the syntactic structure and lexical head are repeated. In contrast, during production syntactic priming occurs with structure repetition alone, but the effect is boosted by repetition of the lexical head. We used fMRI to investigate the neuronal correlates of syntactic priming and lexical boost effects during sentence production and comprehension. The critical measure was the magnitude of fMRI adaptation to repetition of sentences in active or passive voice, with or without verb repetition. In conditions with repeated verbs, we observed adaptation to structure repetition in the left IFG and MTG, for active and passive voice. However, in the absence of repeated verbs, adaptation occurred only for passive sentences. None of the fMRI adaptation effects yielded differential effects for production versus comprehension, suggesting that sentence comprehension and production are subserved by the same neuronal infrastructure for syntactic processing.
Additional information
Segaert_Supplementary_data_2013.docx -
Segaert, K., Weber, K., De Lange, F., Petersson, K. M., & Hagoort, P. (2013). The suppression of repetition enhancement: A review of fMRI studies. Neuropsychologia, 51, 59-66. doi:10.1016/j.neuropsychologia.2012.11.006.
Abstract
Repetition suppression in fMRI studies is generally thought to underlie behavioural facilitation effects (i.e., priming) and it is often used to identify the neuronal representations associated with a stimulus. However, this pays little heed to the large number of repetition enhancement effects observed under similar conditions. In this review, we identify several cognitive variables biasing repetition effects in the BOLD response towards enhancement instead of suppression. These variables are stimulus recognition, learning, attention, expectation and explicit memory. We also evaluate which models can account for these repetition effects and come to the conclusion that there is no one single model that is able to embrace all repetition enhancement effects. Accumulation, novel network formation as well as predictive coding models can all explain subsets of repetition enhancement effects. -
Stolk, A., Verhagen, L., Schoffelen, J.-M., Oostenveld, R., Blokpoel, M., Hagoort, P., van Rooij, I., & Toni, I. (2013). Neural mechanisms of communicative innovation. Proceedings of the National Academy of Sciences of the United States of America, 110(36), 14574-14579. doi:10.1073/pnas.1303170110.
Abstract
Human referential communication is often thought of as coding and decoding a set of symbols, neglecting that establishing shared meanings requires a computational mechanism powerful enough to mutually negotiate them. Sharing the meaning of a novel symbol might rely on similar conceptual inferences across communicators or on statistical similarities in their sensorimotor behaviors. Using magnetoencephalography, we assess spectral, temporal, and spatial characteristics of neural activity evoked when people generate and understand novel shared symbols during live communicative interactions. Solving those communicative problems induced comparable changes in the spectral profile of neural activity of both communicators and addressees. This shared neuronal up-regulation was spatially localized to the right temporal lobe and the ventromedial prefrontal cortex and emerged already before the occurrence of a specific communicative problem. Communicative innovation relies on neuronal computations that are shared across generating and understanding novel shared symbols, operating over temporal scales independent from transient sensorimotor behavior.
Additional information
http://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1303170110/-/DCSupplemental -
Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.
Abstract
A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains. -
Van Leeuwen, T. M., Hagoort, P., & Händel, B. F. (2013). Real color captures attention and overrides spatial cues in grapheme-color synesthetes but not in controls. Neuropsychologia, 51(10), 1802-1813. doi:10.1016/j.neuropsychologia.2013.06.024.
Abstract
Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might allocate attention we also included an attentional cue in our paradigm which could direct covert attention. In controls the attentional cue always caused a lateralization of alpha power with a contralateral decrease and ipsilateral alpha increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. 
We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of synesthesia. -
Wagensveld, B., Van Alphen, P. M., Segers, E., Hagoort, P., & Verhoeven, L. (2013). The neural correlates of rhyme awareness in preliterate and literate children. Clinical Neurophysiology, 124, 1336-1345. doi:10.1016/j.clinph.2013.01.022.
Abstract
Objective: Most rhyme awareness assessments do not encompass measures of the global similarity effect (i.e., children who are able to perform simple rhyme judgments get confused when presented with globally similar non-rhyming pairs). The present study examines the neural nature of this effect by studying the N450 rhyme effect.
Methods: Behavioral and electrophysiological responses of Dutch pre-literate kindergartners and literate second graders were recorded while they made rhyme judgments of word pairs in three conditions: phonologically rhyming (e.g., wijn-pijn), overlapping non-rhyming (e.g., pen-pijn), and unrelated non-rhyming pairs (e.g., boom-pijn).
Results: Behaviorally, both groups had difficulty judging overlapping, but not rhyming and unrelated, pairs. The neural data of second graders showed that overlapping pairs were processed in a similar fashion as unrelated pairs; both showed a more negative deflection of the N450 component than rhyming items. Kindergartners did not show a typical N450 rhyme effect. However, some other interesting ERP differences were observed, indicating preliterates are sensitive to rhyme at a certain level.
Significance: Rhyme judgments of globally similar items rely on the same process as rhyme judgments of rhyming and unrelated items. Therefore, incorporating a globally similar condition in rhyme assessments may lead to a more in-depth measure of early phonological awareness skills.
Highlights: Behavioral and electrophysiological responses were recorded while (pre)literate children made rhyme judgments of rhyming, overlapping, and unrelated words. Behaviorally, both groups had difficulty judging overlapping pairs as non-rhyming, while overlapping and unrelated neural patterns were similar in literates. Preliterates show a different pattern, indicating a developing phonological system. -
Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2013). ERP evidence on the interaction between information structure and emotional salience of words. Cognitive, Affective and Behavioral Neuroscience, 13, 297-310. doi:10.3758/s13415-012-0146-2.
Abstract
Both emotional words and words focused by information structure can capture attention. This study examined the interplay between emotional salience and information structure in modulating attentional resources in the service of integrating emotional words into sentence context. Event-related potentials (ERPs) to affectively negative, neutral, and positive words, which were either focused or nonfocused in question–answer pairs, were evaluated during sentence comprehension. The results revealed an early negative effect (90–200 ms), a P2 effect, as well as an effect in the N400 time window, for both emotional salience and information structure. Moreover, an interaction between emotional salience and information structure occurred within the N400 time window over right posterior electrodes, showing that information structure influences the semantic integration only for neutral words, but not for emotional words. This might reflect the fact that the linguistic salience of emotional words can override the effect of information structure on the integration of words into context. The interaction provides evidence for attention–emotion interactions at a later stage of processing. In addition, the absence of interaction in the early time window suggests that the processing of emotional information is highly automatic and independent of context. The results suggest independent attention capture systems of emotional salience and information structure at the early stage but an interaction between them at a later stage, during the semantic integration of words. -
Wang, L., Zhu, Z., Bastiaansen, M. C. M., Hagoort, P., & Yang, Y. (2013). Recognizing the emotional valence of names: An ERP study. Brain and Language, 125, 118-127. doi:10.1016/j.bandl.2013.01.006.
Abstract
Unlike common nouns, person names refer to unique entities and generally have a referring function. We used event-related potentials to investigate the time course of identifying the emotional meaning of nouns and names. The emotional valence of names and nouns were manipulated separately. The results show early N1 effects in response to emotional valence only for nouns. This might reflect automatic attention directed towards emotional stimuli. The absence of such an effect for names supports the notion that the emotional meaning carried by names is accessed after word recognition and person identification. In addition, both names with negative valence and emotional nouns elicited late positive effects, which have been associated with evaluation of emotional significance. This positive effect started earlier for nouns than for names, but with similar durations. Our results suggest that distinct neural systems are involved in the retrieval of names’ and nouns’ emotional meaning. -
Baggio, G., & Hagoort, P. (2011). The balance between memory and unification in semantics: A dynamic account of the N400. Language and Cognitive Processes, 26, 1338-1367. doi:10.1080/01690965.2010.542671.
Abstract
At least three cognitive brain components are necessary in order for us to be able to produce and comprehend language: a Memory repository for the lexicon, a Unification buffer where lexical information is combined into novel structures, and a Control apparatus presiding over executive function in language. Here we describe the brain networks that support Memory and Unification in semantics. A dynamic account of their interactions is presented, in which a balance between the two components is sought at each word-processing step. We use the theory to provide an explanation of the N400 effect. -
Davids, N., Segers, E., Van den Brink, D., Mitterer, H., van Balkom, H., Hagoort, P., & Verhoeven, L. (2011). The nature of auditory discrimination problems in children with specific language impairment: An MMN study. Neuropsychologia, 49, 19-28. doi:10.1016/j.neuropsychologia.2010.11.001.
Abstract
Many children with Specific Language Impairment (SLI) show impairments in discriminating auditorily presented stimuli. The present study investigates whether these discrimination problems are speech specific or of a general auditory nature. This was studied by using a linguistic and nonlinguistic contrast that were matched for acoustic complexity in an active behavioral task and a passive ERP paradigm, known to elicit the mismatch negativity (MMN). In addition, attention skills and a variety of language skills were measured. Participants were 25 five-year-old Dutch children with SLI having receptive as well as productive language problems and 25 control children with typical speech and language development. At the behavioral level, the SLI group was impaired in discriminating the linguistic contrast as compared to the control group, while both groups were unable to distinguish the non-linguistic contrast. Moreover, the SLI group tended to have impaired attention skills, which correlated with performance on most of the language tests. At the neural level, the SLI group, in contrast to the control group, did not show an MMN in response to either the linguistic or nonlinguistic contrast. The MMN data are consistent with an account that relates the symptoms in children with SLI to non-speech processing difficulties. -
Folia, V., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2011). Implicit artificial syntax processing: Genes, preference, and bounded recursion. Biolinguistics, 5(1/2), 105-132.
Abstract
The first objective of this study was to compare the brain network engaged by preference classification and the standard grammaticality classification after implicit artificial syntax acquisition by re-analyzing previously reported event-related fMRI data. The results show that preference and grammaticality classification engage virtually identical brain networks, including Broca’s region, consistent with previous behavioral findings. Moreover, the results showed that the effects related to artificial syntax in Broca’s region were essentially the same when masked with variability related to natural syntax processing in the same participants. The second objective was to explore CNTNAP2-related effects in implicit artificial syntax learning by analyzing behavioral and event-related fMRI data from a subsample. The CNTNAP2 gene has been linked to specific language impairment and is controlled by the FOXP2 transcription factor. CNTNAP2 is expressed in language related brain networks in the developing human brain and the FOXP2–CNTNAP2 pathway provides a mechanistic link between clinically distinct syndromes involving disrupted language. Finally, we discuss the implication of taking natural language to be a neurobiological system in terms of bounded recursion and suggest that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner. -
Habets, B., Kita, S., Shao, Z., Ozyurek, A., & Hagoort, P. (2011). The role of synchrony and ambiguity in speech–gesture integration during comprehension. Journal of Cognitive Neuroscience, 23, 1845-1854. doi:10.1162/jocn.2010.21462.
Abstract
During face-to-face communication, one does not only hear speech but also see a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication where the two modalities influence each other's interpretation. A gesture typically temporally overlaps with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony in the speech and gesture onsets are optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, gesture and speech were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time locked to speech onset showed a significant difference between semantically congruent versus incongruent gesture–speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the differences in onsets do not exceed a certain time span because of the fact that iconic gestures need speech to be disambiguated in a way relevant to the speech context. -
Hagoort, P. (2011). The binding problem for language, and its consequences for the neurocognition of comprehension. In E. A. Gibson, & N. J. Pearlmutter (Eds.), The processing and acquisition of reference (pp. 403-436). Cambridge, MA: MIT Press. -
Hagoort, P. (2011). The neuronal infrastructure for unification at multiple levels. In G. Gaskell, & P. Zwitserlood (Eds.), Lexical representation: A multidisciplinary approach (pp. 231-242). Berlin: De Gruyter Mouton. -
Lai, V. T., Hagoort, P., & Casasanto, D. (2011). Affective and non-affective meaning in words and pictures. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 390-395). Austin, TX: Cognitive Science Society. -
Menenti, L., Gierhan, S., Segaert, K., & Hagoort, P. (2011). Shared language: Overlap and segregation of the neuronal infrastructure for speaking and listening revealed by functional MRI. Psychological Science, 22, 1173-1182. doi:10.1177/0956797611418347.
Abstract
Whether the brain’s speech-production system is also involved in speech comprehension is a topic of much debate. Research has focused on whether motor areas are involved in listening, but overlap between speaking and listening might occur not only at primary sensory and motor levels, but also at linguistic levels (where semantic, lexical, and syntactic processes occur). Using functional MRI adaptation during speech comprehension and production, we found that the brain areas involved in semantic, lexical, and syntactic processing are mostly the same for speaking and for listening. Effects of primary processing load (indicative of sensory and motor processes) overlapped in auditory cortex and left inferior frontal cortex, but not in motor cortex, where processing load affected activity only in speaking. These results indicate that the linguistic parts of the language system are used for both speaking and listening, but that the motor system does not seem to provide a crucial contribution to listening.
Additional information
Menenti_Suppl_Info_DS_10.1177_0956797611418347.zip -
Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2011). Reasoning with exceptions: An event-related brain potentials study. Journal of Cognitive Neuroscience, 23, 471-480. doi:10.1162/jocn.2009.21360.
Abstract
Defeasible inferences are inferences that can be revised in the light of new information. Although defeasible inferences are pervasive in everyday communication, little is known about how and when they are processed by the brain. This study examined the electrophysiological signature of defeasible reasoning using a modified version of the suppression task. Participants were presented with conditional inferences (of the type “if p, then q; p, therefore q”) that were preceded by a congruent or a disabling context. The disabling context contained a possible exception or precondition that prevented people from drawing the conclusion. Acceptability of the conclusion was indeed lower in the disabling condition compared to the congruent condition. Further, we found a large sustained negativity at the conclusion of the disabling condition relative to the congruent condition, which started around 250 msec and was persistent throughout the entire epoch. Possible accounts for the observed effect are discussed. -
Scheeringa, R., Fries, P., Petersson, K. M., Oostenveld, R., Grothe, I., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2011). Neuronal dynamics underlying high- and low- frequency EEG oscillations contribute independently to the human BOLD signal. Neuron, 69, 572-583. doi:10.1016/j.neuron.2010.11.044.
Abstract
Work on animals indicates that BOLD is preferentially sensitive to local field potentials, and that it correlates most strongly with gamma band neuronal synchronization. Here we investigate how the BOLD signal in humans performing a cognitive task is related to neuronal synchronization across different frequency bands. We simultaneously recorded EEG and BOLD while subjects engaged in a visual attention task known to induce sustained changes in neuronal synchronization across a wide range of frequencies. Trial-by-trial BOLD fluctuations correlated positively with trial-by-trial fluctuations in high-EEG gamma power (60–80 Hz) and negatively with alpha and beta power. Gamma power on the one hand, and alpha and beta power on the other hand, independently contributed to explaining BOLD variance. These results indicate that the BOLD-gamma coupling observed in animals can be extrapolated to humans performing a task and that neuronal dynamics underlying high- and low-frequency synchronization contribute independently to the BOLD signal. -
Segaert, K., Menenti, L., Weber, K., & Hagoort, P. (2011). A paradox of syntactic priming: Why response tendencies show priming for passives, and response latencies show priming for actives. PLoS One, 6(10), e24209. doi:10.1371/journal.pone.0024209.
Abstract
Speakers tend to repeat syntactic structures across sentences, a phenomenon called syntactic priming. Although it has been suggested that repeating syntactic structures should result in speeded responses, previous research has focused on effects in response tendencies. We investigated syntactic priming effects simultaneously in response tendencies and response latencies for active and passive transitive sentences in a picture description task. In Experiment 1, there were priming effects in response tendencies for passives and in response latencies for actives. However, when participants' pre-existing preference for actives was altered in Experiment 2, syntactic priming occurred for both actives and passives in response tendencies as well as in response latencies. This is the first investigation of the effects of structure frequency on both response tendencies and latencies in syntactic priming. We discuss the implications of these data for current theories of syntactic processing. -
Small, S. L., Hickok, G., Nusbaum, H. C., Blumstein, S., Coslett, H. B., Dell, G., Hagoort, P., Kutas, M., Marantz, A., Pylkkanen, L., Thompson-Schill, S., Watkins, K., & Wise, R. J. (2011). The neurobiology of language: Two years later [Editorial]. Brain and Language, 116(3), 103-104. doi:10.1016/j.bandl.2011.02.004.
-
Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Teunisse, J.-P., & Hagoort, P. (2011). Neural correlates of language comprehension in autism spectrum disorders: When language conflicts with world knowledge. Neuropsychologia, 49, 1095-1104. doi:10.1016/j.neuropsychologia.2011.01.018.
Abstract
In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it is unclear at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group. -
Van Leeuwen, T. M., Den Ouden, H. E. M., & Hagoort, P. (2011). Effective connectivity determines the nature of subjective experience in grapheme-color synesthesia. Journal of Neuroscience, 31, 9879-9884. doi:10.1523/JNEUROSCI.0569-11.2011.
Abstract
Synesthesia provides an elegant model to investigate neural mechanisms underlying individual differences in subjective experience in humans. In grapheme–color synesthesia, written letters induce color sensations, accompanied by activation of color area V4. Competing hypotheses suggest that enhanced V4 activity during synesthesia is either induced by direct bottom-up cross-activation from grapheme processing areas within the fusiform gyrus, or indirectly via higher-order parietal areas. Synesthetes differ in the way synesthetic color is perceived: “projector” synesthetes experience color externally colocalized with a presented grapheme, whereas “associators” report an internally evoked association. Using dynamic causal modeling for fMRI, we show that V4 cross-activation during synesthesia was induced via a bottom-up pathway (within fusiform gyrus) in projector synesthetes, but via a top-down pathway (via parietal lobe) in associators. These findings show how altered coupling within the same network of active regions leads to differences in subjective experience. Our findings reconcile the two most influential cross-activation accounts of synesthesia. -
Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2011). The influence of information structure on the depth of semantic processing: How focus and pitch accent determine the size of the N400 effect. Neuropsychologia, 49, 813-820. doi:10.1016/j.neuropsychologia.2010.12.035.
Abstract
To highlight relevant information in dialogues, both wh-question context and pitch accent in answers can be used, such that focused information gains more attention and is processed more elaborately. To evaluate the relative influence of context and pitch accent on the depth of semantic processing, we measured Event-Related Potentials (ERPs) to auditorily presented wh-question-answer pairs. A semantically incongruent word in the answer occurred either in focus or non-focus position as determined by the context, and this word was either accented or unaccented. Semantic incongruency elicited different N400 effects in different conditions. The largest N400 effect was found when the question-marked focus was accented, while the other three conditions elicited smaller N400 effects. The results suggest that context and accentuation interact. Thus accented focused words were processed more deeply compared to conditions where focus and accentuation mismatched, or when the new information had no marking. In addition, there seem to be sex differences in the depth of semantic processing. For males, a significant N400 effect was observed only when the question-marked focus was accented; reduced N400 effects were found in the other dialogues. In contrast, females produced similar N400 effects in all the conditions. These results suggest that regardless of external cues, females tend to engage in more elaborate semantic processing compared to males. -
Willems, R. M., Clevis, K., & Hagoort, P. (2011). Add a picture for suspense: Neural correlates of the interaction between language and visual information in the perception of fear. Social, Cognitive and Affective Neuroscience, 6, 404-416. doi:10.1093/scan/nsq050.
Abstract
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. -
Willems, R. M., Benn, Y., Hagoort, P., Tonia, I., & Varley, R. (2011). Communicating without a functioning language system: Implications for the role of language in mentalizing. Neuropsychologia, 49, 3130-3135. doi:10.1016/j.neuropsychologia.2011.07.023.
Abstract
A debated issue in the relationship between language and thought is how our linguistic abilities are involved in understanding the intentions of others (‘mentalizing’). The results of both theoretical and empirical work have been used to argue that linguistic, and more specifically, grammatical, abilities are crucial in representing the mental states of others. Here we contribute to this debate by investigating how damage to the language system influences the generation and understanding of intentional communicative behaviors. Four patients with pervasive language difficulties (severe global or agrammatic aphasia) engaged in an experimentally controlled non-verbal communication paradigm, which required signaling and understanding a communicative message. Despite their profound language problems they were able to engage in recipient design as well as intention recognition, showing similar indicators of mentalizing as have been observed in the neurologically healthy population. Our results show that aspects of the ability to communicate remain present even when core capacities of the language system are dysfunctional.