Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.
Abstract
The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can have influences at a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension.
Hagoort, P. (2013). MUC (Memory, Unification, Control) and beyond. Frontiers in Psychology, 4: 416. doi:10.3389/fpsyg.2013.00416.
Abstract
A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension of the model beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content. It is shown that this requires the dynamic interaction between multiple brain regions.
Hagoort, P., & Poeppel, D. (2013). The infrastructure of the language-ready brain. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 233-255). Cambridge, MA: MIT Press.
Abstract
This chapter sketches in very general terms the cognitive architecture of both language comprehension and production, as well as the neurobiological infrastructure that makes the human brain ready for language. Focus is on spoken language, since that compares most directly to processing music. It is worth bearing in mind that humans can also interface with language as a cognitive system using sign and text (visual) as well as Braille (tactile); that is to say, the system can connect with input/output processes in any sensory modality. Language processing consists of a complex and nested set of subroutines to get from sound to meaning (in comprehension) or meaning to sound (in production), with remarkable speed and accuracy. The first section outlines a selection of the major constituent operations, from fractionating the input into manageable units to combining and unifying information in the construction of meaning. The next section addresses the neurobiological infrastructure hypothesized to form the basis for language processing. Principal insights are summarized by building on the notion of “brain networks” for speech–sound processing, syntactic processing, and the construction of meaning, bearing in mind that such a neat three-way subdivision overlooks important overlap and shared mechanisms in the neural architecture subserving language processing. Finally, in keeping with the spirit of the volume, some possible relations are highlighted between language and music that arise from the infrastructure developed here. Our characterization of language and its neurobiological foundations is necessarily selective and brief. Our aim is to identify for the reader critical questions that require an answer to have a plausible cognitive neuroscience of language processing.
Hagoort, P., & Meyer, A. S. (2013). What belongs together goes together: the speaker-hearer perspective. A commentary on MacDonald's PDC account. Frontiers in Psychology, 4: 228. doi:10.3389/fpsyg.2013.00228.
Abstract
First paragraph:
MacDonald (2013) proposes that distributional properties of language and processing biases in language comprehension can to a large extent be attributed to consequences of the language production process. In essence, the account is derived from the principle of least effort that was formulated by Zipf, among others (Zipf, 1949; Levelt, 2013). However, in Zipf's view the outcome of the least effort principle was a compromise between least effort for the speaker and least effort for the listener, whereas MacDonald puts most of the burden on the production process.
Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.
Abstract
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals, such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message.
Kooijman, V., Junge, C., Johnson, E. K., Hagoort, P., & Cutler, A. (2013). Predictive brain signals of linguistic development. Frontiers in Psychology, 4: 25. doi:10.3389/fpsyg.2013.00025.
Abstract
The ability to extract word forms from continuous speech is a prerequisite for constructing a vocabulary and emerges in the first year of life. Electrophysiological (ERP) studies of speech segmentation by 9- to 12-month-old listeners in several languages have found a left-localized negativity linked to word onset as a marker of word detection. We report an ERP study showing significant evidence of speech segmentation in Dutch-learning 7-month-olds. In contrast to the left-localized negative effect reported with older infants, the observed overall mean effect had a positive polarity. Inspection of individual results revealed two participant sub-groups: a majority showing a positive-going response, and a minority showing the left negativity observed in older age groups. We retested participants at age three, on vocabulary comprehension and word and sentence production. On every test, children who at 7 months had shown the negativity associated with segmentation of words from speech outperformed those who had produced positive-going brain responses to the same input. The earlier that infants show the left-localized brain responses typically indicating detection of words in speech, the better their early childhood language skills.
Kristensen, L. B., Wang, L., Petersson, K. M., & Hagoort, P. (2013). The interface between language and attention: Prosodic focus marking recruits a general attention network in spoken language comprehension. Cerebral Cortex, 23, 1836-1848. doi:10.1093/cercor/bhs164.
Abstract
In spoken language, pitch accent can mark certain information as focus, whereby more attentional resources are allocated to the focused information. Using functional magnetic resonance imaging, this study examined whether pitch accent, used for marking focus, recruited general attention networks during sentence comprehension. In a language task, we independently manipulated the prosody and semantic/pragmatic congruence of sentences. We found that semantic/pragmatic processing affected bilateral inferior and middle frontal gyrus. The prosody manipulation showed bilateral involvement of the superior/inferior parietal cortex, superior and middle temporal cortex, as well as inferior, middle, and posterior parts of the frontal cortex. We compared these regions with attention networks localized in an auditory spatial attention task. Both tasks activated bilateral superior/inferior parietal cortex, superior temporal cortex, and left precentral cortex. Furthermore, an interaction between prosody and congruence was observed in bilateral inferior parietal regions: for incongruent sentences, but not for congruent ones, there was a larger activation if the incongruent word carried a pitch accent, than if it did not. The common activations between the language task and the spatial attention task demonstrate that pitch accent activates a domain general attention network, which is sensitive to semantic/pragmatic aspects of language. Therefore, attention and language comprehension are highly interactive.
Additional information: Kirstensen_Cer_Cor_Suppl_Mat.doc
Meyer, A. S., & Hagoort, P. (2013). What does it mean to predict one's own utterances? [Commentary on Pickering & Garrod]. Behavioral and Brain Sciences, 36, 367-368. doi:10.1017/S0140525X12002786.
Abstract
Many authors have recently highlighted the importance of prediction for language comprehension. Pickering & Garrod (P&G) are the first to propose a central role for prediction in language production. This is an intriguing idea, but it is not clear what it means for speakers to predict their own utterances, and how prediction during production can be empirically distinguished from production proper.
Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.
Abstract
In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
Additional information: http://mindmodeling.org/cogsci2013/papers/0219/index.html
Segaert, K., Kempen, G., Petersson, K. M., & Hagoort, P. (2013). Syntactic priming and the lexical boost effect during sentence production and sentence comprehension: An fMRI study. Brain and Language, 124, 174-183. doi:10.1016/j.bandl.2012.12.003.
Abstract
Behavioral syntactic priming effects during sentence comprehension are typically observed only if both the syntactic structure and lexical head are repeated. In contrast, during production syntactic priming occurs with structure repetition alone, but the effect is boosted by repetition of the lexical head. We used fMRI to investigate the neuronal correlates of syntactic priming and lexical boost effects during sentence production and comprehension. The critical measure was the magnitude of fMRI adaptation to repetition of sentences in active or passive voice, with or without verb repetition. In conditions with repeated verbs, we observed adaptation to structure repetition in the left IFG and MTG, for active and passive voice. However, in the absence of repeated verbs, adaptation occurred only for passive sentences. None of the fMRI adaptation effects yielded differential effects for production versus comprehension, suggesting that sentence comprehension and production are subserved by the same neuronal infrastructure for syntactic processing.
Additional information: Segaert_Supplementary_data_2013.docx
Segaert, K., Weber, K., De Lange, F., Petersson, K. M., & Hagoort, P. (2013). The suppression of repetition enhancement: A review of fMRI studies. Neuropsychologia, 51, 59-66. doi:10.1016/j.neuropsychologia.2012.11.006.
Abstract
Repetition suppression in fMRI studies is generally thought to underlie behavioural facilitation effects (i.e., priming) and it is often used to identify the neuronal representations associated with a stimulus. However, this pays little heed to the large number of repetition enhancement effects observed under similar conditions. In this review, we identify several cognitive variables biasing repetition effects in the BOLD response towards enhancement instead of suppression. These variables are stimulus recognition, learning, attention, expectation and explicit memory. We also evaluate which models can account for these repetition effects and come to the conclusion that there is no one single model that is able to embrace all repetition enhancement effects. Accumulation, novel network formation as well as predictive coding models can all explain subsets of repetition enhancement effects.
Stolk, A., Verhagen, L., Schoffelen, J.-M., Oostenveld, R., Blokpoel, M., Hagoort, P., van Rooij, I., & Toni, I. (2013). Neural mechanisms of communicative innovation. Proceedings of the National Academy of Sciences of the United States of America, 110(36), 14574-14579. doi:10.1073/pnas.1303170110.
Abstract
Human referential communication is often thought of as coding-decoding a set of symbols, neglecting that establishing shared meanings requires a computational mechanism powerful enough to mutually negotiate them. Sharing the meaning of a novel symbol might rely on similar conceptual inferences across communicators or on statistical similarities in their sensorimotor behaviors. Using magnetoencephalography, we assess spectral, temporal, and spatial characteristics of neural activity evoked when people generate and understand novel shared symbols during live communicative interactions. Solving those communicative problems induced comparable changes in the spectral profile of neural activity of both communicators and addressees. This shared neuronal up-regulation was spatially localized to the right temporal lobe and the ventromedial prefrontal cortex and emerged already before the occurrence of a specific communicative problem. Communicative innovation relies on neuronal computations that are shared across generating and understanding novel shared symbols, operating over temporal scales independent from transient sensorimotor behavior.
Additional information: http://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1303170110/-/DCSupplemental
Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.
Abstract
A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains.
Van Leeuwen, T. M., Hagoort, P., & Händel, B. F. (2013). Real color captures attention and overrides spatial cues in grapheme-color synesthetes but not in controls. Neuropsychologia, 51(10), 1802-1813. doi:10.1016/j.neuropsychologia.2013.06.024.
Abstract
Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might allocate attention we also included an attentional cue in our paradigm which could direct covert attention. In controls the attentional cue always caused a lateralization of alpha power with a contralateral decrease and ipsilateral alpha increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of synesthesia.
Wagensveld, B., Van Alphen, P. M., Segers, E., Hagoort, P., & Verhoeven, L. (2013). The neural correlates of rhyme awareness in preliterate and literate children. Clinical Neurophysiology, 124, 1336-1345. doi:10.1016/j.clinph.2013.01.022.
Abstract
Objective: Most rhyme awareness assessments do not encompass measures of the global similarity effect (i.e., children who are able to perform simple rhyme judgments get confused when presented with globally similar non-rhyming pairs). The present study examines the neural nature of this effect by studying the N450 rhyme effect.
Methods: Behavioral and electrophysiological responses of Dutch pre-literate kindergartners and literate second graders were recorded while they made rhyme judgments of word pairs in three conditions: phonologically rhyming (e.g., wijn-pijn), overlapping non-rhyming (e.g., pen-pijn) and unrelated non-rhyming pairs (e.g., boom-pijn).
Results: Behaviorally, both groups had difficulty judging overlapping, but not rhyming and unrelated, pairs. The neural data of second graders showed overlapping pairs were processed in a similar fashion as unrelated pairs; both showed a more negative deflection of the N450 component than rhyming items. Kindergartners did not show a typical N450 rhyme effect. However, some other interesting ERP differences were observed, indicating preliterates are sensitive to rhyme at a certain level.
Significance: Rhyme judgments of globally similar items rely on the same process as rhyme judgments of rhyming and unrelated items. Therefore, incorporating a globally similar condition in rhyme assessments may lead to a more in-depth measure of early phonological awareness skills.
Highlights: Behavioral and electrophysiological responses were recorded while (pre)literate children made rhyme judgments of rhyming, overlapping and unrelated words. Behaviorally both groups had difficulty judging overlapping pairs as non-rhyming while overlapping and unrelated neural patterns were similar in literates. Preliterates show a different pattern indicating a developing phonological system.
Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2013). ERP evidence on the interaction between information structure and emotional salience of words. Cognitive, Affective and Behavioral Neuroscience, 13, 297-310. doi:10.3758/s13415-012-0146-2.
Abstract
Both emotional words and words focused by information structure can capture attention. This study examined the interplay between emotional salience and information structure in modulating attentional resources in the service of integrating emotional words into sentence context. Event-related potentials (ERPs) to affectively negative, neutral, and positive words, which were either focused or nonfocused in question–answer pairs, were evaluated during sentence comprehension. The results revealed an early negative effect (90–200 ms), a P2 effect, as well as an effect in the N400 time window, for both emotional salience and information structure. Moreover, an interaction between emotional salience and information structure occurred within the N400 time window over right posterior electrodes, showing that information structure influences the semantic integration only for neutral words, but not for emotional words. This might reflect the fact that the linguistic salience of emotional words can override the effect of information structure on the integration of words into context. The interaction provides evidence for attention–emotion interactions at a later stage of processing. In addition, the absence of interaction in the early time window suggests that the processing of emotional information is highly automatic and independent of context. The results suggest independent attention capture systems of emotional salience and information structure at the early stage but an interaction between them at a later stage, during the semantic integration of words.
Wang, L., Zhu, Z., Bastiaansen, M. C. M., Hagoort, P., & Yang, Y. (2013). Recognizing the emotional valence of names: An ERP study. Brain and Language, 125, 118-127. doi:10.1016/j.bandl.2013.01.006.
Abstract
Unlike common nouns, person names refer to unique entities and generally have a referring function. We used event-related potentials to investigate the time course of identifying the emotional meaning of nouns and names. The emotional valence of names and nouns were manipulated separately. The results show early N1 effects in response to emotional valence only for nouns. This might reflect automatic attention directed towards emotional stimuli. The absence of such an effect for names supports the notion that the emotional meaning carried by names is accessed after word recognition and person identification. In addition, both names with negative valence and emotional nouns elicited late positive effects, which have been associated with evaluation of emotional significance. This positive effect started earlier for nouns than for names, but with similar durations. Our results suggest that distinct neural systems are involved in the retrieval of names’ and nouns’ emotional meaning.
Acheson, D. J., Ganushchak, L. Y., Christoffels, I. K., & Hagoort, P. (2012). Conflict monitoring in speech production: Physiological evidence from bilingual picture naming. Brain and Language, 123, 131-136. doi:10.1016/j.bandl.2012.08.008.
Abstract
Self-monitoring in production is critical to correct performance, and recent accounts suggest that such monitoring may occur via the detection of response conflict. The error-related negativity (ERN) is a response-locked event-related potential (ERP) that is sensitive to response conflict. The present study examines whether response conflict is detected in production by exploring a situation where multiple outputs are activated: the bilingual naming of form-related equivalents (i.e. cognates). ERPs were recorded while German-Dutch bilinguals named pictures in their first and second languages. Although cognates were named faster than non-cognates, response conflict was evident in the form of a larger ERN-like response for cognates and adaptation effects on naming, as the magnitude of cognate facilitation was smaller following the naming of cognates. Given that signals of response conflict are present during correct naming, the present results suggest that such conflict may serve as a reliable signal for monitoring in speech production.
Adank, P., Noordzij, M. L., & Hagoort, P. (2012). The role of planum temporale in processing accent variation in spoken language comprehension. Human Brain Mapping, 33, 360-372. doi:10.1002/hbm.21218.
Abstract
A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a novel accent of Dutch. Each speaker produced sentences in both accents. Participants listened to two sentences presented in quick succession while their haemodynamic responses were recorded in an MR scanner. The first sentence was spoken in Standard Dutch; the second was spoken by the same or a different speaker and produced in Standard Dutch or in the artificial accent. This design made it possible to identify neural responses to a switch in speaker and accent independently. A switch in accent was associated with activations in predominantly left-lateralized posterior temporal regions, including superior temporal gyrus, planum temporale (PT), and supramarginal gyrus, as well as in frontal regions, including left pars opercularis of the inferior frontal gyrus (IFG). A switch in speaker recruited a predominantly right-lateralized network, including middle frontal gyrus and precuneus. It is concluded that posterior temporal areas, including PT, and frontal areas, including IFG, are involved in processing accent variation in spoken sentence comprehension.
Adank, P., Davis, M. H., & Hagoort, P. (2012). Neural dissociation in processing noise and accent in spoken language comprehension. Neuropsychologia, 50, 77-84. doi:10.1016/j.neuropsychologia.2011.10.024.
Abstract
We investigated how two distortions of the speech signal (added background noise and speech in an unfamiliar accent) affect comprehension of speech using functional Magnetic Resonance Imaging (fMRI). Listeners performed a speeded sentence verification task for speech in quiet in Standard Dutch, in Standard Dutch with added background noise, and for speech in an unfamiliar accent of Dutch. The behavioural results showed slower responses for both types of distortion compared to clear speech, and no difference between the two distortions. The neuroimaging results showed that, compared to clear speech, processing noise resulted in more activity bilaterally in Inferior Frontal Gyrus and Frontal Operculum, while processing accented speech recruited an area in left Superior Temporal Gyrus/Sulcus. It is concluded that the neural bases for processing different distortions of the speech signal dissociate. It is suggested that current models of the cortical organisation of speech are updated to specifically associate bilateral inferior frontal areas with processing external distortions (e.g., background noise) and left temporal areas with speaker-related distortions (e.g., accents).
Additional information: Adank_2012_Suppl_Info.doc
Baggio, G., Van Lambalgen, M., & Hagoort, P. (2012). Language, linguistics and cognition. In R. Kempson, T. Fernando, & N. Asher (Eds.), Philosophy of linguistics (pp. 325-356). Amsterdam: North Holland.
Abstract
This chapter provides a partial overview of some currently debated issues in the cognitive science of language. We distinguish two families of problems, which we refer to as ‘language and cognition’ and ‘linguistics and cognition’. Under the first heading we present and discuss the hypothesis that language, in particular the semantics of tense and aspect, is grounded in the planning system. We emphasize the role of non-monotonic inference during language comprehension. We look at the converse issue of the role of linguistic interpretation in reasoning tasks. Under the second heading we investigate the two foremost assumptions of current linguistic methodology, namely intuitions as the only adequate empirical basis of theories of meaning and grammar and the competence-performance distinction, arguing that these are among the heaviest burdens for a truly comprehensive approach to language. Marr’s three-level scheme is proposed as an alternative methodological framework, which we apply in a review of two ERP studies on semantic processing, to the ‘binding problem’ for language, and in a conclusive set of remarks on relating theories in the cognitive science of language.
Baggio, G., Van Lambalgen, M., & Hagoort, P. (2012). The processing consequences of compositionality. In M. Werning, W. Hinzen, & E. Machery (Eds.), The Oxford handbook of compositionality (pp. 655-672). New York: Oxford University Press.
Fitch, W. T., Friederici, A. D., & Hagoort, P. (Eds.). (2012). Pattern perception and computational complexity [Special Issue]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367(1598).
Fitch, W. T., Friederici, A. D., & Hagoort, P. (2012). Pattern perception and computational complexity: Introduction to the special issue. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367(1598), 1925-1932. doi:10.1098/rstb.2012.0099.
Abstract
Research on pattern perception and rule learning, grounded in formal language theory (FLT) and using artificial grammar learning paradigms, has exploded in the last decade. This approach marries empirical research conducted by neuroscientists, psychologists and ethologists with the theory of computation and FLT, developed by mathematicians, linguists and computer scientists over the last century. Of particular current interest are comparative extensions of this work to non-human animals, and neuroscientific investigations using brain imaging techniques. We provide a short introduction to the history of these fields, and to some of the dominant hypotheses, to help contextualize these ongoing research programmes, and finally briefly introduce the papers in the current issue.
Hagoort, P. (2012). From ants to music and language [Preface]. In A. D. Patel, Music, language, and the brain [Chinese translation] (pp. 9-10). Shanghai: East China Normal University Press Ltd.
-
Hagoort, P. (2012). Het muzikale brein. Speling: Tijdschrift voor bezinning. Muziek als bron van bezieling, 64(1), 44-48.
-
Hagoort, P. (2012). Het sprekende brein. MemoRad, 17(1), 27-30.
Abstract
No species other than Homo sapiens has, over the course of its evolutionary history, developed a communication system in which a finite number of symbols, together with a set of rules for combining them, makes an infinite number of expressions possible. This natural language system enables members of our species to give outward form to thoughts and to exchange them with the social group and, through the invention of writing systems, with society as a whole. Speech and language are effective means of maintaining social cohesion in societies whose group size and complex social organization are such that this can no longer be achieved through grooming, the way in which our genetic neighbours, the Old World primates, promote social cohesion [1,2]. -
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 467-472). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2012/papers/0092/index.html.
Abstract
Co-speech gestures are an integral part of human face-to-face communication, but little is known about how pragmatic factors influence our comprehension of those gestures. The present study investigates how different types of recipients process iconic gestures in a triadic communicative situation. Participants (N = 32) took on the role of one of two recipients in a triad and were presented with 160 video clips of an actor speaking, or speaking and gesturing. Crucially, the actor’s eye gaze was manipulated in that she alternated her gaze between the two recipients. Participants thus perceived some messages in the role of addressed recipient and some in the role of unaddressed recipient. In these roles, participants were asked to make judgements concerning the speaker’s messages. Their reaction times showed that unaddressed recipients did comprehend speaker’s gestures differently to addressees. The findings are discussed with respect to automatic and controlled processes involved in gesture comprehension. -
Junge, C., Cutler, A., & Hagoort, P. (2012). Electrophysiological evidence of early word learning. Neuropsychologia, 50, 3702-3712. doi:10.1016/j.neuropsychologia.2012.10.012.
Abstract
Around their first birthday infants begin to talk, yet they comprehend words long before. This study investigated the event-related potential (ERP) responses of nine-month-olds on basic level picture-word pairings. After a familiarization phase of six picture-word pairings per semantic category, comprehension for novel exemplars was tested in a picture-word matching paradigm. ERPs time-locked to pictures elicited a modulation of the Negative Central (Nc) component, associated with visual attention and recognition. It was attenuated by category repetition as well as by the type-token ratio of picture context. ERPs time-locked to words in the training phase became more negative with repetition (N300-600), but there was no influence of picture type-token ratio, suggesting that infants have identified the concept of each picture before a word was presented. Results from the test phase provided clear support that infants integrated word meanings with (novel) picture context. Here, infants showed different ERP responses for words that did or did not align with the picture context: a phonological mismatch (N200) and a semantic mismatch (N400). Together, results were informative of visual categorization, word recognition and word-to-world-mappings, all three crucial processes for vocabulary construction. -
Junge, C., Kooijman, V., Hagoort, P., & Cutler, A. (2012). Rapid recognition at 10 months as a predictor of language development. Developmental Science, 15, 463-473. doi:10.1111/j.1467-7687.2012.1144.x.
Abstract
Infants’ ability to recognize words in continuous speech is vital for building a vocabulary. We here examined the amount and type of exposure needed for 10-month-olds to recognize words. Infants first heard a word, either embedded within an utterance or in isolation, then recognition was assessed by comparing event-related potentials to this word versus a word that they had not heard directly before. Although all 10-month-olds showed recognition responses to words first heard in isolation, not all infants showed such responses to words they had first heard within an utterance. Those that did succeed in the latter, harder, task, however, understood more words and utterances when re-tested at 12 months, and understood more words and produced more words at 24 months, compared with those who had shown no such recognition response at 10 months. The ability to rapidly recognize the words in continuous utterances is clearly linked to future language development. -
Kos, M., Van den Brink, D., Snijders, T. M., Rijpkema, M., Franke, B., Fernandez, G., Hagoort, P., & Whitehouse, A. (2012). CNTNAP2 and language processing in healthy individuals as measured with ERPs. PLoS One, 7(10), e46995. doi:10.1371/journal.pone.0046995.
Abstract
The genetic FOXP2-CNTNAP2 pathway has been shown to be involved in the language capacity. We investigated whether a common variant of CNTNAP2 (rs7794745) is relevant for syntactic and semantic processing in the general population by using a visual sentence processing paradigm while recording ERPs in 49 healthy adults. While both AA homozygotes and T-carriers showed a standard N400 effect to semantic anomalies, the response to subject-verb agreement violations differed across genotype groups. T-carriers displayed an anterior negativity preceding the P600 effect, whereas for the AA group only a P600 effect was observed. These results provide another piece of evidence that the neuronal architecture of the human faculty of language is shaped differently by effects that are genetically determined. -
Kos, M., Van den Brink, D., & Hagoort, P. (2012). Individual variation in the late positive complex to semantic anomalies. Frontiers in Psychology, 3, 318. doi:10.3389/fpsyg.2012.00318.
Abstract
It is well-known that, within ERP paradigms of sentence processing, semantically anomalous words elicit N400 effects. Less clear, however, is what happens after the N400. In some cases N400 effects are followed by Late Positive Complexes (LPC), whereas in other cases such effects are lacking. We investigated several factors which could affect the LPC, such as contextual constraint, inter-individual variation and working memory. Seventy-two participants read sentences containing a semantic manipulation (Whipped cream tastes sweet/anxious and creamy). Neither contextual constraint nor working memory correlated with the LPC. Inter-individual variation played a substantial role in the elicitation of the LPC with about half of the participants showing a negative response and the other half showing an LPC. This individual variation correlated with a syntactic ERP as well as an alternative semantic manipulation. In conclusion, our results show that inter-individual variation plays a large role in the elicitation of the LPC and this may account for the diversity in LPC findings in language research. -
Lai, V. T., Hagoort, P., & Casasanto, D. (2012). Affective primacy vs. cognitive primacy: Dissolving the debate. Frontiers in Psychology, 3, 243. doi:10.3389/fpsyg.2012.00243.
Abstract
When people see a snake, they are likely to activate both affective information (e.g., dangerous) and non-affective information about its ontological category (e.g., animal). According to the Affective Primacy Hypothesis, the affective information has priority, and its activation can precede identification of the ontological category of a stimulus. Alternatively, according to the Cognitive Primacy Hypothesis, perceivers must know what they are looking at before they can make an affective judgment about it. We propose that neither hypothesis holds at all times. Here we show that the relative speed with which affective and non-affective information gets activated by pictures and words depends upon the contexts in which stimuli are processed. Results illustrate that the question of whether affective information has processing priority over ontological information (or vice versa) is ill posed. Rather than seeking to resolve the debate over Cognitive vs. Affective Primacy in favor of one hypothesis or the other, a more productive goal may be to determine the factors that cause affective information to have processing priority in some circumstances and ontological information in others. Our findings support a view of the mind according to which words and pictures activate different neurocognitive representations every time they are processed, the specifics of which are co-determined by the stimuli themselves and the contexts in which they occur. -
Menenti, L., Petersson, K. M., & Hagoort, P. (2012). From reference to sense: How the brain encodes meaning for speaking. Frontiers in Psychology, 2, 384. doi:10.3389/fpsyg.2011.00384.
Abstract
In speaking, semantic encoding is the conversion of a non-verbal mental representation (the reference) into a semantic structure suitable for expression (the sense). In this fMRI study on sentence production we investigate how the speaking brain accomplishes this transition from non-verbal to verbal representations. In an overt picture description task, we manipulated repetition of sense (the semantic structure of the sentence) and reference (the described situation) separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these two components of semantic encoding. We also performed a control experiment with the same stimuli and design but without any linguistic task to identify areas involved in perception of the stimuli per se. The bilateral inferior parietal lobes were selectively sensitive to repetition of reference, while left inferior frontal gyrus showed selective suppression to repetition of sense. Strikingly, a widespread network of areas associated with language processing (left middle frontal gyrus, bilateral superior parietal lobes and bilateral posterior temporal gyri) all showed repetition suppression to both sense and reference processing. These areas are probably involved in mapping reference onto sense, the crucial step in semantic encoding. These results enable us to track the transition from non-verbal to verbal representations in our brains. -
Menenti, L., Segaert, K., & Hagoort, P. (2012). The neuronal infrastructure of speaking. Brain and Language, 122, 71-80. doi:10.1016/j.bandl.2012.04.012.
Abstract
Models of speaking distinguish producing meaning, words and syntax as three different linguistic components of speaking. Nevertheless, little is known about the brain’s integrated neuronal infrastructure for speech production. We investigated semantic, lexical and syntactic aspects of speaking using fMRI. In a picture description task, we manipulated repetition of sentence meaning, words, and syntax separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these processes. We demonstrate that semantic, lexical and syntactic processes are carried out in partly overlapping and partly distinct brain networks and show that the classic left-hemispheric dominance for language is present for syntax but not semantics. -
Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 1971-1983. doi:10.1098/rstb.2012.0101.
Abstract
The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty. -
Petersson, K. M., Folia, V., & Hagoort, P. (2012). What artificial grammar learning reveals about the neurobiology of syntax. Brain and Language, 120, 83-95. doi:10.1016/j.bandl.2010.08.003.
Abstract
In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial grammar learning paradigm in 32 healthy Dutch university students (natural language FMRI data were already acquired for these participants). We predicted that artificial syntax processing would engage the left inferior frontal region (BA 44/45) and that this activation would overlap with syntax-related variability observed in the natural language experiment. The main findings of this study show that the left inferior frontal region centered on BA 44/45 is active during artificial syntax processing of well-formed (grammatical) sequence independent of local subsequence familiarity. The same region is engaged to a greater extent when a syntactic violation is present and structural unification becomes difficult or impossible. The effects related to artificial syntax in the left inferior frontal region (BA 44/45) were essentially identical when we masked these with activity related to natural syntax in the same subjects. Finally, the medial temporal lobe was deactivated during this operation, consistent with the view that implicit processing does not rely on declarative memory mechanisms that engage the medial temporal lobe. In the context of recent FMRI findings, we raise the question whether Broca’s region (or subregions) is specifically related to syntactic movement operations or the processing of hierarchically nested non-adjacent dependencies in the discussion section. We conclude that this is not the case. 
Instead, we argue that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner, independent of whether there are any processing requirements related to syntactic movement or hierarchically nested structures. In addition, we argue that the Chomsky hierarchy is not directly relevant for neurobiological systems. -
De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Newman-Norlund, R., Hagoort, P., Levinson, S. C., & Toni, I. (2012). Exploring the cognitive infrastructure of communication. In B. Galantucci, & S. Garrod (Eds.), Experimental Semiotics: Studies on the emergence and evolution of human communication (pp. 51-78). Amsterdam: Benjamins.
Abstract
Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication. -
Segaert, K., Menenti, L., Weber, K., Petersson, K. M., & Hagoort, P. (2012). Shared syntax in language production and language comprehension — An fMRI study. Cerebral Cortex, 22, 1662-1670. doi:10.1093/cercor/bhr249.
Abstract
During speaking and listening syntactic processing is a crucial step. It involves specifying syntactic relations between words in a sentence. If the production and comprehension modality share the neuronal substrate for syntactic processing then processing syntax in one modality should lead to adaptation effects in the other modality. In the present functional magnetic resonance imaging experiment, participants either overtly produced or heard descriptions of pictures. We looked for brain regions showing adaptation effects to the repetition of syntactic structures. In order to ensure that not just the same brain regions but also the same neuronal populations within these regions are involved in syntactic processing in speaking and listening, we compared syntactic adaptation effects within processing modalities (syntactic production-to-production and comprehension-to-comprehension priming) with syntactic adaptation effects between processing modalities (syntactic comprehension-to-production and production-to-comprehension priming). We found syntactic adaptation effects in left inferior frontal gyrus (Brodmann's area [BA] 45), left middle temporal gyrus (BA 21), and bilateral supplementary motor area (BA 6) which were equally strong within and between processing modalities. Thus, syntactic repetition facilitates syntactic processing in the brain within and across processing modalities to the same extent. We conclude that the same neurobiological system seems to subserve syntactic processing in speaking and listening. -
Stein, J. L., Medland, S. E., Vasquez, A. A., Hibar, D. P., Senstad, R. E., Winkler, A. M., Toro, R., Appel, K., Bartecek, R., Bergmann, Ø., Bernard, M., Brown, A. A., Cannon, D. M., Chakravarty, M. M., Christoforou, A., Domin, M., Grimm, O., Hollinshead, M., Holmes, A. J., Homuth, G., Hottenga, J.-J., Langan, C., Lopez, L. M., Hansell, N. K., Hwang, K. S., Kim, S., Laje, G., Lee, P. H., Liu, X., Loth, E., Lourdusamy, A., Mattingsdal, M., Mohnke, S., Maniega, S. M., Nho, K., Nugent, A. C., O'Brien, C., Papmeyer, M., Pütz, B., Ramasamy, A., Rasmussen, J., Rijpkema, M., Risacher, S. L., Roddey, J. C., Rose, E. J., Ryten, M., Shen, L., Sprooten, E., Strengman, E., Teumer, A., Trabzuni, D., Turner, J., van Eijk, K., van Erp, T. G. M., van Tol, M.-J., Wittfeld, K., Wolf, C., Woudstra, S., Aleman, A., Alhusaini, S., Almasy, L., Binder, E. B., Brohawn, D. G., Cantor, R. M., Carless, M. A., Corvin, A., Czisch, M., Curran, J. E., Davies, G., de Almeida, M. A. A., Delanty, N., Depondt, C., Duggirala, R., Dyer, T. D., Erk, S., Fagerness, J., Fox, P. T., Freimer, N. B., Gill, M., Göring, H. H. H., Hagler, D. J., Hoehn, D., Holsboer, F., Hoogman, M., Hosten, N., Jahanshad, N., Johnson, M. P., Kasperaviciute, D., Kent, J. W. J., Kochunov, P., Lancaster, J. L., Lawrie, S. M., Liewald, D. C., Mandl, R., Matarin, M., Mattheisen, M., Meisenzahl, E., Melle, I., Moses, E. K., Mühleisen, T. W., Nauck, M., Nöthen, M. M., Olvera, R. L., Pandolfo, M., Pike, G. B., Puls, R., Reinvang, I., Rentería, M. E., Rietschel, M., Roffman, J. L., Royle, N. A., Rujescu, D., Savitz, J., Schnack, H. G., Schnell, K., Seiferth, N., Smith, C., Hernández, M. C. V., Steen, V. M., den Heuvel, M. V., van der Wee, N. J., Haren, N. E. M. V., Veltman, J. A., Völzke, H., Walker, R., Westlye, L. T., Whelan, C. D., Agartz, I., Boomsma, D. I., Cavalleri, G. L., Dale, A. M., Djurovic, S., Drevets, W. C., Hagoort, P., Hall, J., Heinz, A., Clifford, R. J., Foroud, T. M., Le Hellard, S., Macciardi, F., Montgomery, G. W., Poline, J. B., Porteous, D. J., Sisodiya, S. M., Starr, J. M., Sussmann, J., Toga, A. W., Veltman, D. J., Walter, H., Weiner, M. W., EPIGEN Consortium, IMAGEN Consortium, Saguenay Youth Study Group, Bis, J. C., Ikram, M. A., Smith, A. V., Gudnason, V., Tzourio, C., Vernooij, M. W., Launer, L. J., DeCarli, C., Seshadri, S., Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, Andreassen, O. A., Apostolova, L. G., Bastin, M. E., Blangero, J., Brunner, H. G., Buckner, R. L., Cichon, S., Coppola, G., de Zubicaray, G. I., Deary, I. J., Donohoe, G., de Geus, E. J. C., Espeseth, T., Fernández, G., Glahn, D. C., Grabe, H. J., Hardy, J., Hulshoff Pol, H. E., Jenkinson, M., Kahn, R. S., McDonald, C., McIntosh, A. M., McMahon, F. J., McMahon, K. L., Meyer-Lindenberg, A., Morris, D. W., Müller-Myhsok, B., Nichols, T. E., Ophoff, R. A., Paus, T., Pausova, Z., Penninx, B. W., Sämann, P. G., Saykin, A. J., Schumann, G., Smoller, J. W., Wardlaw, J. M., Weale, M. E., Martin, N. G., Franke, B., Wright, M. J., Thompson, P. M., & the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) Consortium (2012). Identification of common variants associated with human hippocampal and intracranial volumes. Nature Genetics, 44, 552-561. doi:10.1038/ng.2250.
Abstract
Identifying genetic variants influencing human brain structures may reveal new biological mechanisms underlying cognition and neuropsychiatric illness. The volume of the hippocampus is a biomarker of incipient Alzheimer's disease and is reduced in schizophrenia, major depression and mesial temporal lobe epilepsy. Whereas many brain imaging phenotypes are highly heritable, identifying and replicating genetic influences has been difficult, as small effects and the high costs of magnetic resonance imaging (MRI) have led to underpowered studies. Here we report genome-wide association meta-analyses and replication for mean bilateral hippocampal, total brain and intracranial volumes from a large multinational consortium. The intergenic variant rs7294919 was associated with hippocampal volume (12q24.22; N = 21,151; P = 6.70 × 10(-16)) and the expression levels of the positional candidate gene TESC in brain tissue. Additionally, rs10784502, located within HMGA2, was associated with intracranial volume (12q14.3; N = 15,782; P = 1.12 × 10(-12)). We also identified a suggestive association with total brain volume at rs10494373 within DDR2 (1q23.3; N = 6,500; P = 5.81 × 10(-7)). -
Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2012). Implicit acquisition of grammars with crossed and nested non-adjacent dependencies: Investigating the push-down stack model. Cognitive Science, 36, 1078-1101. doi:10.1111/j.1551-6709.2012.01235.x.
Abstract
A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study of language as a neurobiological system has been questioned and it has been suggested that a more relevant and partly analogous distinction is that between non-adjacent and adjacent dependencies. Online memory resources are central to the processing of non-adjacent dependencies as information has to be maintained across intervening material. One proposal is that an external memory device in the form of a limited push-down stack is used to process non-adjacent dependencies. We tested this hypothesis in an artificial grammar learning paradigm where subjects acquired non-adjacent dependencies implicitly. Generally, we found no qualitative differences between the acquisition of non-adjacent dependencies and adjacent dependencies. This suggests that although the acquisition of non-adjacent dependencies requires more exposure to the acquisition material, it utilizes the same mechanisms used for acquiring adjacent dependencies. We challenge the push-down stack model further by testing its processing predictions for nested and crossed multiple non-adjacent dependencies. The push-down stack model is partly supported by the results, and we suggest that stack-like properties are some among many natural properties characterizing the underlying neurophysiological mechanisms that implement the online memory resources used in language and structured sequence processing. -
Van den Brink, D., Van Berkum, J. J. A., Bastiaansen, M. C. M., Tesink, C. M. J. Y., Kos, M., Buitelaar, J. K., & Hagoort, P. (2012). Empathy matters: ERP evidence for inter-individual differences in social language processing. Social, Cognitive and Affective Neuroscience, 7, 173-182. doi:10.1093/scan/nsq094.
Abstract
When an adult claims he cannot sleep without his teddy bear, people tend to react surprised. Language interpretation is, thus, influenced by social context, such as who the speaker is. The present study reveals inter-individual differences in brain reactivity to social aspects of language. Whereas women showed brain reactivity when stereotype-based inferences about a speaker conflicted with the content of the message, men did not. This sex difference in social information processing can be explained by a specific cognitive trait, one’s ability to empathize. Individuals who empathize to a greater degree revealed larger N400 effects (as well as a larger increase in γ-band power) to socially relevant information. These results indicate that individuals with high-empathizing skills are able to rapidly integrate information about the speaker with the content of the message, as they make use of voice-based inferences about the speaker to process language in a top-down manner. Alternatively, individuals with lower empathizing skills did not use information about social stereotypes in implicit sentence comprehension, but rather took a more bottom-up approach to the processing of these social pragmatic sentences. -
Van Ackeren, M. J., Casasanto, D., Bekkering, H., Hagoort, P., & Rueschemeyer, S.-A. (2012). Pragmatics in action: Indirect requests engage theory of mind areas and the cortical motor network. Journal of Cognitive Neuroscience, 24, 2237-2247. doi:10.1162/jocn_a_00274.
Abstract
Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we used to perceive and interact with the physical world in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical–semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement. 
The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed. -
Wagensveld, B., Segers, E., Van Alphen, P. M., Hagoort, P., & Verhoeven, L. (2012). A neurocognitive perspective on rhyme awareness: The N450 rhyme effect. Brain Research, 1483, 63-70. doi:10.1016/j.brainres.2012.09.018.
Abstract
Rhyme processing is reflected in the electrophysiological signals of the brain as a negative deflection for non-rhyming as compared to rhyming stimuli around 450 ms after stimulus onset. Studies have shown that this N450 component is not solely sensitive to rhyme but also responds to other types of phonological overlap. In the present study, we examined whether the N450 component can be used to gain insight into the global similarity effect, indicating that rhyme judgment skills decrease when participants are presented with word pairs that share a phonological overlap but do not rhyme (e.g., bell–ball). We presented 20 adults with auditory rhyming, globally similar overlapping and unrelated word pairs. In addition to measuring behavioral responses by means of a yes/no button press, we also took EEG measures. The behavioral data showed a clear global similarity effect; participants judged overlapping pairs more slowly than unrelated pairs. However, the neural outcomes did not provide evidence that the N450 effect responds differentially to globally similar and unrelated word pairs, suggesting that globally similar and dissimilar non-rhyming pairs are processed in a similar fashion at the stage of early lexical access. -
Wang, L., Jensen, O., Van den Brink, D., Weder, N., Schoffelen, J.-M., Magyari, L., Hagoort, P., & Bastiaansen, M. C. M. (2012). Beta oscillations relate to the N400m during language comprehension. Human Brain Mapping, 33, 2898-2912. doi:10.1002/hbm.21410.
Abstract
The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time–frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, but not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the LIFG, a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension. -
Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2012). Information structure influences depth of syntactic processing: Event-related potential evidence for the Chomsky illusion. PLoS One, 7(10), e47917. doi:10.1371/journal.pone.0047917.
Abstract
Information structure facilitates communication between interlocutors by highlighting relevant information. It has previously been shown that information structure modulates the depth of semantic processing. Here we used event-related potentials to investigate whether information structure can modulate the depth of syntactic processing. In question-answer pairs, subtle (number agreement) or salient (phrase structure) syntactic violations were placed either in focus or out of focus through information structure marking. P600 effects to these violations reflect the depth of syntactic processing. For subtle violations, a P600 effect was observed in the focus condition, but not in the non-focus condition. For salient violations, comparable P600 effects were found in both conditions. These results indicate that information structure can modulate the depth of syntactic processing, but that this effect depends on the salience of the information. When subtle violations are not in focus, they are processed less elaborately. We label this phenomenon the Chomsky illusion. -
Xiang, H., Dediu, D., Roberts, L., Van Oort, E., Norris, D., & Hagoort, P. (2012). The structural connectivity underpinning language aptitude, working memory and IQ in the perisylvian language network. Language Learning, 62(Supplement S2), 110-130. doi:10.1111/j.1467-9922.2012.00708.x.
Abstract
We carried out the first study on the relationship between individual language aptitude and structural connectivity of language pathways in the adult brain. We measured four components of language aptitude (vocabulary learning, VocL; sound recognition, SndRec; sound-symbol correspondence, SndSym; and grammatical inferencing, GrInf) using the LLAMA language aptitude test (Meara, 2005). Spatial working memory (SWM), verbal working memory (VWM) and IQ were also measured as control factors. Diffusion Tensor Imaging (DTI) was employed to investigate the structural connectivity of language pathways in the perisylvian language network. Principal Component Analysis (PCA) on behavioural measures suggested that a general ability might be important to the first stages of L2 acquisition. It also suggested that VocL, SndSym and SWM are more closely related to general IQ than SndRec and VocL, and distinguished the tasks specifically designed to tap into L2 acquisition (VocL, SndRec, SndSym and GrInf) from more generic measures (IQ, SWM and VWM). Regression analysis suggested significant correlations between most of these behavioural measures and the structural connectivity of certain language pathways, i.e., VocL and BA47-Parietal pathway, SndSym and inter-hemispheric BA45 pathway, GrInf and BA45-Temporal pathway and BA6-Temporal pathway, IQ and BA44-Parietal pathway, BA47-Parietal pathway, BA47-Temporal pathway and inter-hemispheric BA45 pathway, SWM and inter-hemispheric BA6 pathway and BA47-Parietal pathway, and VWM and BA47-Temporal pathway. These results are discussed in relation to relevant findings in the literature. -
Zhu, Z., Hagoort, P., Zhang, J. X., Feng, G., Chen, H.-C., Bastiaansen, M. C. M., & Wang, S. (2012). The anterior left inferior frontal gyrus contributes to semantic unification. NeuroImage, 60, 2230-2237. doi:10.1016/j.neuroimage.2012.02.036.
Abstract
Semantic unification, the process by which small blocks of semantic information are combined into a coherent utterance, has been studied with various types of tasks. However, whether the brain activations reported in these studies are attributed to semantic unification per se or to other task-induced concomitant processes still remains unclear. The neural basis for semantic unification in sentence comprehension was examined using event-related potentials (ERP) and functional Magnetic Resonance Imaging (fMRI). The semantic unification load was manipulated by varying the goodness of fit between a critical word and its preceding context (in high cloze, low cloze and violation sentences). The sentences were presented in a serial visual presentation mode. The participants were asked to perform one of three tasks: semantic congruency judgment (SEM), silent reading for comprehension (READ), or font size judgment (FONT), in separate sessions. The ERP results showed a similar N400 amplitude modulation by the semantic unification load across all of the three tasks. The brain activations associated with the semantic unification load were found in the anterior left inferior frontal gyrus (aLIFG) in the FONT task and in a widespread set of regions in the other two tasks. These results suggest that the aLIFG activation reflects a semantic unification, which is different from other brain activations that may reflect task-specific strategic processing.