Peter Hagoort

Presentations

  • Hagoort, P. (2013). On speaking terms with the social brain. Talk presented at the GSSSH Distinguished Scholar - Seminar Series - Koc University. Istanbul, Turkey. 2013-12-23.
  • Hagoort, P. (2013). Ontmoet een professor [Invited lecture]. Talk presented at Montessorischool De Binnenstad, Arnhem. Arnhem. 2013.
  • Hagoort, P. (2013). Should psycholinguistics ignore the language of the brain? [Invited lecture]. Talk presented at the 26th Annual CUNY Conference on Human Sentence Processing [CUNY 2013]. Columbia, SC. 2013-03-21 - 2013-03-23.

    Abstract

    From a functionalist perspective all that brain research is claimed to have told us is that language processing "happens somewhere north of the neck" (Jerry Fodor, 1999). I will argue why I disagree with this conclusion, for at least the following three reasons. First, one fundamental question in the language sciences is: what makes the human brain language-ready? Understanding the neural architecture that supports human language function is a crucial part of the explanandum. I will show some unique features of human perisylvian cortex based on data from Diffusion Tensor Imaging and resting state fMRI. The second argument is that even if one is only interested in the cognitive architecture of language comprehension and production, relevant evidence can be obtained from neurobiological data, both structural and functional. I will discuss the consequences of connectivity patterns in the brain for assumptions in processing models of language, and I will show fMRI data based on a repetition suppression paradigm that provide evidence for the claim that syntactic encoding and parsing are based on the same mechanism. Finally, I will argue that framing theories of sentence processing in a way that connects to other areas of cognitive neuroscience might be helpful in asking interesting and relevant new questions. I will illustrate this in the context of the Memory, Unification and Control (MUC) model of language.
  • Hagoort, P. (2013). Taal en communicatie in relatie tot de ziekte van Parkinson [Invited lecture]. Talk presented at the Annual ParkinsonNet conference. Utrecht. 2013-11-29.
  • Hagoort, P. (2013). The neurobiology of language beyond the information given. Talk presented at Neuronus 2013, IBRO&IRUN Neuroscience Forum. Krakow, Poland. 2013-05-09 - 2013-05-11.

    Abstract

    A central and influential idea among researchers of language is that our language faculty is organized according to the principle of strict compositionality, which implies that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. The implication of this idea is that beyond word recognition, language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step the sentence meaning is integrated with information from prior discourse, with world knowledge, with information about the speaker, and with semantic information from extralinguistic domains such as co-speech gestures or the visual world. FMRI results and results from recordings of event-related brain potentials will be presented that are inconsistent with this classical model of language interpretation. Our data support a model in which knowledge about the context and the world, knowledge about concomitant information from other modalities, and knowledge about the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. The Memory, Unification and Control (MUC) model provides a neurobiologically plausible account of the underlying neural architecture. Resting state connectivity data, and results from Psycho-Physiological Interactions will be discussed, suggesting a division of labour between temporal and inferior frontal cortex. These results indicate that Broca’s area and adjacent cortex play an important role in semantic and syntactic unification operations. I will also discuss fMRI results that indicate the insufficiency of the Mirror Neuron Hypothesis to explain language understanding. Instead I will sketch a picture of language processing from an embrained perspective.
  • Hagoort, P. (2013). The neurobiology of language beyond the information given [Invited lecture]. Talk presented at the 20th Annual Meeting of the Cognitive Neuroscience Society (CNS 2013). San Francisco. 2013-04-13 - 2013-04-16.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., & Ozyurek, A. (2013). Multi-modal language comprehension as a joint activity: The influence of eye gaze on the processing of speech and co-speech gesture in multi-party communication. Talk presented at the 5th Joint Action Meeting. Berlin. 2013-07-26 - 2013-07-29.

    Abstract

    Traditionally, language comprehension has been studied as a solitary and unimodal activity. Here, we investigate language comprehension as a joint activity, i.e., in a dynamic social context involving multiple participants in different roles with different perspectives, while taking into account the multimodal nature of face-to-face communication. We simulated a triadic communication context involving a speaker alternating her gaze between two different recipients, conveying information not only via speech but gesture as well. Participants thus viewed video-recorded speech-only or speech+gesture utterances referencing objects (e.g., “he likes the laptop”/+TYPING-ON-LAPTOP gesture) when being addressed (direct gaze) or unaddressed (averted gaze). The video clips were followed by two object images (laptop/towel). Participants’ task was to choose the object that matched the speaker’s message (i.e., laptop). Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to levels identical to that of addressees. Thus, when speech processing suffers due to being unaddressed, gestures become more prominent and boost comprehension of a speaker’s spoken message. Our findings illuminate how participants process multimodal language and how this process is influenced by eye gaze, an important social cue facilitating coordination in the joint activity of conversation.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. Poster presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013), Berlin, Germany.
  • Holler, J., Kelly, S., Hagoort, P., Schubotz, L., & Ozyurek, A. (2013). Speakers' social eye gaze modulates addressed and unaddressed recipients' comprehension of gesture and speech in multi-party communication. Talk presented at the 5th Biennial Conference of Experimental Pragmatics (XPRAG 2013). Utrecht, The Netherlands. 2013-09-04 - 2013-09-06.
  • Hulten, A., Schoffelen, J.-M., Udden, J., Lam, N., & Hagoort, P. (2013). Spatiotemporal neural correlates of sentence processing using MEG. Poster presented at the 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, WA, USA.
  • Kunert, R., Willems, R. M., & Hagoort, P. (2013). How language influences your perception of music - evidence for shared syntax processing. Poster presented at the Donders Discussions, Nijmegen, The Netherlands.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2013). Shared syntactic processing mechanism in music and language: A brain imaging study. Talk presented at The biennial meeting of the Society for Music Perception and Cognition [SMPC 2013]. Toronto, Canada. 2013-08-08 - 2013-08-11.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the form of pointing gestures. Talk presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013). Berlin, Germany. 2013-08-01 - 2013-08-03.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). The influence of communicative intent on the form of pointing gestures. Poster presented at the Fifth Joint Action Meeting (JAM5), Berlin, Germany.
  • Schoffelen, J.-M., Hulten, A., Lam, N., Udden, J., & Hagoort, P. (2013). MEG source-level oscillatory activity during sentence processing. Poster presented at the 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, WA, USA.
  • Segaert, K., Weber, K., Cladder-Micus, M., & Hagoort, P. (2013). The influence of verb-specific structure preferences on the processing of syntactic structures. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2013). Decoding semantic information during internally guided word production. Poster presented at the Workshop on Objects, Concepts and Actions, Rovereto, Italy.
  • Ten Velden, J., Acheson, D. J., & Hagoort, P. (2013). Domain-specific and domain-general monitoring in speech production and non-linguistic choice reaction tasks. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language, San Diego, US.
  • Vanlangendonck, F., Willems, R. M., Menenti, L., & Hagoort, P. (2013). The role of common ground in audience design: Beyond an all or nothing story. Poster presented at the Workshop on the Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference (PRE-CogSci 2013), Berlin, Germany.
  • Acheson, D. J., Ganushchak, L. Y., Schoffelen, J.-M., & Hagoort, P. (2012). Electrophysiological responses to the semantic blocking effect in language production: A test of four hypotheses. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
  • Acheson, D. J., Ganushchak, L. Y., Broersma, M., Carter, D. M., Christoffels, I. K., & Hagoort, P. (2012). Response conflict in language production: Electrophysiological and behavioural evidence from cognate naming. Poster presented at the 7th International Workshop on Language Production (IWOLP 2012), New York, United States.
  • Hagoort, P. (2012). Beyond the language given. Processing from an embrained perspective. Talk presented at University of Barcelona. Barcelona, Spain. 2012-03-23.
  • Hagoort, P. (2012). Das menschliche Gehirn im Fokus. Talk presented at the Nederlands-Duitse Business club. Kleve, Germany. 2012-02-06.

    Abstract

    The most important and at the same time most complex organ in the human body is the brain. It is involved in everything we hold dear. Without the brain there is no memory, no feeling, no language, no perception, says Dr. Peter Hagoort, Professor at the Donders Centre for Cognition at Radboud University. The Nijmegen-based scientist works on the revolutionary developments in the field of brain-scanning technology.
  • Hagoort, P. (2012). Het brein in beeld. Talk presented at the Radboud Honours Academy. Nijmegen, The Netherlands. 2012-02-21.
  • Hagoort, P. (2012). Het brein in beeld. Talk presented at Rotary. Nijmegen, The Netherlands. 2012-04-05.
  • Hagoort, P. (2012). Het brein in beeld. Talk presented at Health Valley. Nijmegen, The Netherlands. 2012-03-15.
  • Hagoort, P. (2012). Het lerende brein in beeld [Invited talk]. Talk presented at De Veluwse Onderwijsgroep. Apeldoorn, The Netherlands. 2012-10-04.
  • Hagoort, P. (2012). Het talige brein in beeld. Talk presented at Hogeschool Windesheim. Zwolle, the Netherlands. 2012-03-14.
  • Hagoort, P. (2012). The language-ready brain [Invited keynote lecture]. Talk presented at the Language and Neuroscience Conference. Universidade de Santa Catarina Florianopolis, Brazil. 2012-11-29 - 2012-12-01.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the EPS workshop 'What if.. the study of language started from the investigation of signed, rather than spoken language?, London, UK.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). The influence of gaze direction on the comprehension of speech and gesture in triadic communication. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012). Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    Human face-to-face communication is a multi-modal activity. Recent research has shown that, during comprehension, recipients integrate information from speech with that contained in co-speech gestures (e.g., Kelly et al., 2010). The current studies take this research one step further by investigating the influence of another modality, namely eye gaze, on speech and gesture comprehension, to advance our understanding of language processing in more situated contexts. In spite of the large body of literature on processing of eye gaze, very few studies have investigated its processing in the context of communication (but see, e.g., Staudte & Crocker, 2011 for an exception). In two studies we simulated a triadic communication context in which a speaker alternated their gaze between our participant and another (alleged) participant. Participants thus viewed speech-only or speech + gesture utterances either in the role of addressee (direct gaze) or in the role of unaddressed recipient (averted gaze). In Study 1, participants (N = 32) viewed video-clips of a speaker producing speech-only (e.g. “she trained the horse”) or speech+gesture utterances conveying complementary information (e.g. “she trained the horse”+WHIPPING gesture). Participants were asked to judge whether a word displayed on screen after each video-clip matched what the speaker said or not. In half of the cases, the word matched a previously uttered word, requiring a “yes” answer. In all other cases, the word matched the meaning of the gesture the actor had performed, thus requiring a ‘no’ answer.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012). Sapporo, Japan. 2012-08-01 - 2012-08-04.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: Speakers' gaze direction modulates the integration of speech and iconic gestures in the right MTG. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: The role of the Middle Temporal Gyrus in the integration of gaze, gesture and speech. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Lai, V. T., Willems, R. M., & Hagoort, P. (2012). Feel between the lines: Implied emotion from combinatorial language processing. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.

    Abstract

    During reading, people not only retrieve meaning from individual words, they also combine words into a multi-word meaning representation and derive inferences from it. In single-word studies, action verb meaning (e.g., kick) is understood through the activation of motor areas, typically interpreted as showing the necessity of these sensorimotor regions as part of a semantic circuit for language comprehension (Pulvermüller & Fadiga, 2010). But it remains unclear how this association-based theory scales up to understanding sentence meaning and how the semantic circuit subserves inference making at the sentence level.
  • Lai, V. T., Hagoort, P., & Van Berkum, J. J. A. (2012). Mood and conflict in discourse. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.
  • Lai, V. T., Simanova, I., Casasanto, D., & Hagoort, P. (2012). When does context shape word meanings? Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.

    Abstract

    Words’ meanings vary with context. When do context effects arise? The answer to this is critical for deciding between theories assuming that meanings are accessed from a stable mental lexicon and theories that suggest meanings are constructed ad hoc. On the first view, a word form activates an invariant semantic representation, which is subsequently tailored to fit the context (e.g., Evans, 2009; Machery, 2010). On an alternative view, word forms are cues to construct meaning; the information that gets activated is always co-determined by the word and its context (Elman, 2004; 2009; Lai, Hagoort, & Casasanto, 2011).
  • Peeters, D., Ozyurek, A., & Hagoort, P. (2012). Behavioral and neural correlates of deictic reference. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.
  • Peeters, D., Ozyurek, A., & Hagoort, P. (2012). The comprehension of exophoric reference: An ERP study. Poster presented at the Fourth Annual Neurobiology of Language Conference (NLC), San Sebastian, Spain.

    Abstract

    An important property of language is that it can be used exophorically, for instance in referring to entities in the extra-linguistic context of a conversation using demonstratives such as “this” and “that”. Despite large-scale cross-linguistic descriptions of demonstrative systems, the mechanisms underlying the comprehension of such referential acts are poorly understood. Therefore, we investigated the neural mechanisms underlying demonstrative comprehension in situated contexts. Twenty-three participants were presented on a computer screen with pictures containing a speaker and two similar objects. One of the objects was close to the speaker, whereas the other was either distal from the speaker but optically close to the participant (“sagittal orientation”), or distal from both (“lateral orientation”). The speaker pointed to one object, and participants heard sentences spoken by the speaker containing a proximal (“this”) or distal (“that”) demonstrative, and a correct or incorrect noun-label (i.e., a semantic violation). EEG was recorded continuously and time-locked to the onset of demonstratives and nouns. Semantic violations on the noun-label yielded a significant, wide-spread N400 effect, regardless of the objects’ orientation. Comparing the comprehension of proximal to distal demonstratives in the sagittal orientation yielded a similar N400 effect, both for the close and the far referent. Interestingly, no demonstrative effect was found when objects were oriented laterally. Our findings suggest a similar time-course for demonstrative and noun-label processing. However, the comprehension of demonstratives depends on the spatial orientation of potential referents, whereas noun-label comprehension does not. These findings reveal new insights about the mechanisms underlying everyday demonstrative comprehension.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2012). Effect of semantic category in temporal and spatial dynamics of brain activation. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
  • Tsuji, S., Cristia, A., Fikkert, P., Minagawa-Kawai, Y., Hagoort, P., Seidl, A., & Dupoux, E. (2012). Six-month-olds' brains respond more to highly frequent vowels. Poster presented at the fNIRS Conference, London, UK.
  • Vanlangendonck, F., Menenti, L., & Hagoort, P. (2012). Audience design in interactive language use. Poster presented at the CITEC Summer School, Bielefeld, Germany.
  • Acheson, D. J., & Hagoort, P. (2011). Distinguishing the respective roles of the MTG and IFG in language comprehension with rTMS. Poster presented at the Third Annual Neurobiology of Language Conference (NLC 2011), Annapolis, MD.
  • Acheson, D. J., Ganushchak, L. Y., Christoffels, I. K., & Hagoort, P. (2011). The error-related negativity (ERN) as a general marker of monitoring in speech production: Evidence from the overt naming of cognates. Poster presented at the Third Annual Neurobiology of Language Conference (NLC), Annapolis, MD.
  • Acheson, D. J., & Hagoort, P. (2011). Syntactic and semantic influences on verbal short-term memory. Poster presented at the 17th Meeting of the European Society for Cognitive Psychology [ESCOP 2011], Donostia - San Sebastian, Spain.

    Abstract

    Although semantic influences on verbal short-term memory (STM) performance are well-established, substantially less research has studied the influence of syntactic representation. In the present study, syntactic and semantic factors were manipulated in order to explore how both interact to influence verbal STM. Subjects performed immediate, serial recall on lists of six Dutch words composed of three sets of adjective-noun pairs, where the nouns were either common (‘de’) or neuter (‘het’) gender. The grammaticality of the word pairs was manipulated through the morphological agreement between the adjectives and nouns (either legal or illegal), and the semantics by creating more or less meaningful word pairs (e.g., big bucket vs. grateful bucket). Syntactic and semantic factors were fully crossed within-subjects and within-items, yielding a 2 (Grammatical) X 2 (Meaningful) X 2 (Noun Gender) design. Results on serial order memory accuracy revealed that both grammaticality and meaningfulness improved performance, and that the factors interacted, such that the beneficial effects of grammaticality were only present for lists of meaningful items. The present results thus demonstrate that while something as simple as morphological agreement (a long-term, syntactic constraint) can improve verbal STM performance, it only seems to do so in the presence of stronger semantic constraints.
  • Acheson, D. J., & Hagoort, P. (2011). Syntactic and semantic influences on verbal short-term memory. Poster presented at the 5th International Conference on Memory, The University of York, UK.

    Abstract

    Although semantic influences on verbal short-term memory (STM) are well-documented, substantially less research has examined influences of syntactic representation. In the present study, both syntactic and semantic factors were manipulated in order to explore how each affects verbal STM. Subjects (N=20) performed immediate, serial recall on lists of six Dutch words composed of three sets of adjective-noun pairs. Lists were factorially manipulated within a 2 (Noun Gender; common vs. neuter) X 2 (Grammatical; legal vs. illegal morphological agreement) X 2 (Meaningful; more vs. less) within-subjects design. Results on serial order memory revealed significant main effects of meaningfulness and grammaticality and a meaningfulness X grammaticality interaction, whereby the effects of grammaticality were only present for more meaningful lists. The present results demonstrate that although syntactic factors can influence verbal STM, they only seem to do so in the presence of stronger semantic constraints.
  • Basnakova, J., Weber, K., Petersson, K. M., Hagoort, P., & Van Berkum, J. J. A. (2011). Understanding speaker meaning: Neural correlates of pragmatic inferencing in discourse comprehension. Poster presented at Neurobiology of Language Conference, Annapolis,MD.
  • Ganushchak, L. Y., Acheson, D. J., Christoffels, I. K., & Hagoort, P. (2011). Cognate status affects monitoring processes in speech production: Evidence from the 'error-related negativity'. Talk presented at the 17th Meeting of the European Society for Cognitive Psychology [ESCOP 2011]. Donostia - San Sebastian, Spain. 2011-09-29 - 2011-10-02.

    Abstract

    One of the physiological markers of monitoring in both speech and non-speech tasks is the so-called error-related negativity (ERN), an event-related potential that is typically observed after error trials. However, the ERN is also observed after correct trials in both manual and verbal tasks, suggesting that it might be a more general marker for the monitoring of response conflict. The present work tests this hypothesis in speech production by exploring a situation where increased response conflict naturally occurs, namely, when multiple speech outputs are simultaneously activated. Event-related potentials were recorded while participants named pictures in their first and second languages. Activation of multiple outputs was manipulated through the form similarity between translation equivalents (i.e., cognate status). Replicating previous results, cognates were faster to name than non-cognates. Interestingly, response-locked analyses not only showed a reliable ERN on correct trials, but also that the amplitude of the ERN was larger for cognates compared to non-cognates. Thus, despite being faster to name, cognates seem to induce more conflict during response monitoring. This in turn indicates that the ERN is not simply sensitive to conflicting motor responses, but also to more abstract conflict resulting from co-activation of multiple phonological representations.
  • Hagoort, P. (2011). Beyond the language given. Talk presented at The 3rd Workshop on Semantic Processing, Logic and Cognition [SPLC 2011]. Tübingen, Germany. 2011-07-15 - 2011-07-16.

    Abstract

    My contribution will focus on the neural infrastructure for deriving speaker meaning. Recent accounts have argued that simulation (i.e., the re-enactment of states of perception and action) suffices to realize comprehension. I will argue that this fails on theoretical grounds. But I will also show empirical evidence indicating that the Theory of Mind network kicks in when particularized conversational implicatures are at stake. In addition, I will show that markers of Information Structure trigger the operation of a general attention network in the service of modulating the depth of processing.
  • Hagoort, P. (2011). Beyond the language given. Talk presented at In honour of WMW. Cambridge, UK. 2011-06-30.
  • Hagoort, P. (2011). Beyond the language given: Language processing from an embrained perspective. Talk presented at The CIMeC Colloquium Series. Trento University, Italy. 2011-02-18.

    Abstract

    A central and influential idea among researchers of language is that our language faculty is organized according to Fregean compositionality, which implies that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. The implication of this idea is that beyond word recognition, language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step the sentence meaning is integrated with information from prior discourse, with world knowledge, with information about the speaker, and with semantic information from extralinguistic domains such as co-speech gestures or the visual world. FMRI results and results from recordings of event-related brain potentials will be presented that are inconsistent with this classical Fregean model of language interpretation. Our data support a model in which knowledge about the context and the world, knowledge about concomitant information from other modalities, and knowledge about the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. The Memory, Unification and Control (MUC) model of language accounts for these data. Resting state connectivity data, and results from Psycho-Physiological Interactions will be discussed, suggesting a division of labour between temporal and inferior frontal cortex. These results indicate that Broca’s area and adjacent cortex play an important role in semantic and syntactic unification operations. I will also discuss fMRI results that indicate the insufficiency of the Mirror Neuron Hypothesis to explain language understanding. Instead I will sketch a picture of language processing from an embrained perspective.
  • Hagoort, P. (2011). Broca's area and beyond: From unification to speaker meaning. Talk presented at Meeting Broca's area. Paris. 2011-11-28 - 2011-11-29.
  • Hagoort, P. (2011). Dialogues in neural space. Talk presented at The British Neuropsychological Society Spring Meeting 2011 [BNS 2011]. London. 2011-03-30 - 2011-03-31.
  • Hagoort, P. (2011). Cognitive neuroscience beyond philosophy. Talk presented at the KNAW Conference “Imaging the mind? Taking stock a decade after the ‘Decade of the brain’”. Amsterdam, The Netherlands. 2011-04-01 - 2011-04-03.

    Abstract

    There is a school of philosophers who believe that the garden of nature should be cleaned first from the conceptual weeds by qualified philosophers, before empirical researchers should be allowed to enter the scene. I will defend a different position. This is one in which, for the case of cognitive neuroscience, knowledge on brain and cognition is strongly driven by new research tools and methods, which provide new challenges for conceptual analysis.
  • Hagoort, P. (2011). [Moderator and chair]. Symposium ‘On Consciousness’. Amsterdam Royal Palace Foundation. Amsterdam, 2011-06-16 - 2011-06-17.
  • Hagoort, P. (2011). [Program Advisory Committee]. Strüngmann Forum on “Language, Music and the Brain: A Mysterious Relationship”. Frankfurt am Main, Germany, 2011-05-08 - 2011-05-13.
  • Hagoort, P. (2011). Human language system. Talk presented at NeuroSpin. Gif sur Yvette, France. 2011-07-12.
  • Hagoort, P. (2011). In conversation with our brain. Talk presented at the Netherlands Institute in Saint-Petersburg. Saint Petersburg, Russia. 2011-10-05.

    Abstract

    With more than a hundred billion neurons, and more than 100,000 kilometers of connecting wires inside our skull, the human brain is the most complex organ in the known universe. Recent developments in brain-imaging techniques allow unprecedented views of the human brain in action. What happens in our brain when we learn, when we change our opinion, when we speak, when we experience emotion? All of this will be discussed in this lecture, as will the question of how increased insight into brain function will impact society.

    Peter Hagoort is director of the Max Planck Institute for Psycholinguistics (since November 2006), and the founding director of the Donders Centre for Cognitive Neuroimaging (1999), a cognitive neuroscience research centre at the Radboud University Nijmegen, with participation of the Universities of Maastricht, Twente, and the Max Planck Institute for Psycholinguistics. In addition, he is professor of cognitive neuroscience at the Radboud University Nijmegen. His research interests relate to the domain of the human language faculty and how it is instantiated in the brain. In his research he applies neuroimaging techniques such as ERP, MEG, PET and fMRI to investigate the language system and its impairments, as in aphasia, dyslexia and autism. At the Donders Centre he is currently heading the research group Neurocognition of Language. At the Max Planck Institute he is heading a department on the Neurobiology of Language. For his scientific contributions, the Royal Netherlands Academy of Arts and Sciences (KNAW) awarded him the Hendrik Mullerprijs in 2003. In 2004 the Dutch Queen awarded him the “Knighthood of the Dutch Lion”. In 2005 he received the NWO-Spinoza Prize. Peter Hagoort is a fellow of the Royal Netherlands Academy of Arts and Sciences (KNAW).
  • Hagoort, P. (2011). Moderator and chair symposium "On consciousness". Talk presented at Amsterdam Royal Palace Foundation. Amsterdam. 2011-06-17.
  • Hagoort, P. (2011). Language processing from an embrained perspective. Talk presented at "Multidisciplinary studies of lexical processing": A workshop for William Marslen-Wilson. Cambridge, UK. 2011-06-30 - 2011-07-02.
  • Hagoort, P. (2011). The speaking brain: one decade of the brain vs 200 decades of philosophy. Talk presented at Felix Meritis. Amsterdam. 2011-04-01.
  • Simanova, I., van Gerven, M., Oostenveld, R., & Hagoort, P. (2011). Decoding semantic categories from pictures, words and natural sounds. Poster presented at HBM 2011 - The 17th Annual Meeting of the Organization for Human Brain Mapping, Quebec City, Canada.
  • Zhu, Z., Feng, G., Hagoort, P., Chen, H.-C., Bastiaansen, M. C. M., & Wang, S. (2011). Connectivity within language network was modulated by language task. Poster presented at CNS 2011 - 18th Annual Meeting of the Cognitive Neuroscience Society (CNS), San Francisco, CA.

    Abstract

    Connectivity among language-related brain regions during resting state has consistently been observed in previous studies. The current study investigates whether and how this connectivity is altered by a language task. Twenty-four native Dutch speakers were asked to read sentences for comprehension (i.e., a 50 min. language comprehension task), and resting-state fMRI data were collected before and after the task. In accordance with previous similar work (Xiang, Fonteijn, Norris, & Hagoort (2010). Topographical functional connectivity pattern in the perisylvian language networks. Cerebral Cortex, 20, 549-560.), ROIs in left BA44, BA45 and BA47 were used as seed regions. Functional connectivity (fc) of the seed regions with left parietal and temporal areas was found, in line with Xiang et al.'s observations. Moreover, comparing fc patterns before and after the task, we found that the task altered them. After the task, ROIs BA44 and BA45 showed reduced connectivity with middle and posterior temporal regions as well as with the parietal lobule. In contrast, we observed increased connectivity with the medial frontal and superior frontal gyrus. For BA47, increased connectivity with the anterior temporal lobe and bilateral precentral gyrus, and reduced connectivity with visual cortex, were observed. Together the results suggest that language tasks modulate the resting-state connectivity within the brain's language network, in line with previous work (Waites, Stanislavsky, Abbott, & Jackson (2005). Effect of prior cognitive state on resting state networks measured with functional connectivity. Human Brain Mapping, 24, 59-68.).
  • Basnakova, J., Weber, K., Petersson, K. M., Hagoort, P., & Van Berkum, J. J. A. (2010). Understanding speaker meaning: Neural correlates of pragmatic inferencing in language comprehension. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: Natural communication is not only literal, but to a large extent also inferential. For example, sometimes people say "It is hard to give a good presentation" to actually mean "Your talk was a mess!", and listeners need to infer the speaker's hidden message. In spite of the pervasiveness of this phenomenon in everyday communication, and even though the hidden meaning is often what it's all about, very little is known about how the brain supports the comprehension of indirect language. What are the neural systems involved in the inferential process, and how are they different from those involved in word- and sentence-level meaning processing? We investigated the neural correlates of this so-called pragmatic inferencing in an fMRI study involving natural spoken dialogue. Methods: As a test case, we focused on the inferences needed to understand indirect replies. 18 native listeners of Dutch listened to dialogues ending in a question-answer (QA) pair. The final and critical utterance, e.g., "It is hard to give a good presentation", had different meanings depending on the dialogue context and the immediately preceding question: (1) Direct reply: Q: "How is it to give a good presentation?" A: "It is hard to give a good presentation" (2) Indirect reply, neutral: Q: "Will you give a presentation at the conference?" (rather than a poster) A: "It is hard to give a good presentation" (3) Indirect reply, face-saving: Q: "Did you like my presentation?" A: "It is hard to give a good presentation" While one of the indirect conditions was neutral, the other involved a socio-emotional aspect, as the reason for indirectness was to 'save one's face' (as in excuses or polite refusals). Participants were asked to pay attention to the dialogues and, to ensure they did, occasionally received a comprehension question (on filler items only). No other task demands were imposed.
Results: Relative to direct replies in exchanges like (1), the indirect replies in exchanges like (2) and (3) activated brain structures associated with theory of mind and inferencing: right angular gyrus (TPJ), right DM prefrontal / frontal cortex (SMA, ACC). Both types of indirect replies also bilaterally activated the insula, an area known to be involved in empathy and affective processing. Moreover, both types of indirect replies recruited bilateral inferior frontal gyrus, thought to play a role in situation model updating. The comparison between neutral (2) and face-saving (3) indirect replies revealed that the presumed affective load of the face-saving replies activated just one additional area: right inferior frontal gyrus; we did not see any activation in classic affect-related areas. Importantly, we used the same critical sentences in all conditions. Our results can thus not be explained by lexico-semantic or other (e.g. syntactic, word frequency) factors. Conclusions: To extend neurocognitive research on meaning in language beyond the level of straightforward literal utterances, we investigated the neural correlates of pragmatic inferencing in an fMRI study involving indirect replies in natural spoken dialogue. Our findings reveal that the areas used to infer the intended meaning of an implicit message are partly different from the classic language network. Furthermore, the identity of the areas involved is consistent with the idea that inferring hidden meanings requires taking the speaker’s perspective. This confirms the importance of perspective taking in language comprehension, even in a situation where the listener is not the one addressed. Also, as the areas recruited by indirect replies generally do not light up in standard fMRI sentence comprehension paradigms, our study testifies to the importance of studying language understanding in richer contexts in which we can tap aspects of pragmatic processing, beyond the literal code.
  • Bastiaansen, M. C. M., & Hagoort, P. (2010). Frequency-based segregation of syntactic and semantic unification? Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: During language comprehension, word-level information has to be integrated (unified) into an overall message-level representation. Theoretical accounts (e.g. Jackendoff, 2007; see also Hagoort, 2005) propose that unification operations occur in parallel at the phonological, syntactic and semantic levels. Meta-analysis of fMRI studies (Bookheimer, 2002) shows that largely overlapping areas in the left inferior frontal gyrus (LIFG) are activated during the different types of unification operations. This raises the question of how the brain functionally segregates these different unification operations. Previously, we established that semantic unification modulates oscillatory EEG activity in the gamma frequency range (Hagoort, Hald, Bastiaansen, & Petersson, 2004; Hald, Bastiaansen, & Hagoort, 2005). More recently, we showed that syntactic unification modulates MEG activity in the lower beta frequencies (13-18 Hz). Here we report a fully within-subjects replication of these findings. Methods: We recorded the EEG (64 channels, filtered from 0.1 - 100 Hz) of 30 subjects while they read sentences presented in serial visual presentation mode. Sentences were either correct (COR), contained a semantic violation (SEM), or contained a syntactic (grammatical gender agreement) violation (SYN). Two additional conditions were constructed on the basis of COR sentences by (1) replacing all the nouns, verbs and adjectives with semantically unrelated ones that were matched for length and frequency, making the sentences semantically uninterpretable (global semantic violation, GSEM), and (2) randomly re-assigning the word order of the COR sentences, so as to remove overall syntactic structure from the sentences (global syntactic violation, GSYN). Here we only report the results of analyses on the COR, GSEM and GSYN conditions.
EEG epochs from 1 s preceding sentence onset to 6 s after sentence onset (corresponding to the first 10 words in each sentence) were extracted from the EEG recordings, and epochs with artifacts were removed. A multitaper-based time-frequency (TF) analysis of power changes (Mitra & Pesaran, 1999) was performed, separately for a low-frequency window (1-30 Hz) and a high-frequency window (25-100 Hz). Significant differences in the TF representations between any two conditions were established using non-parametric random permutation analysis (Maris & Oostenveld, 2007). Results: Semantic unification: gamma. Figure 1 presents the comparison between the TF responses of the semantically intact condition (COR) and those of the semantically incorrect ones (GSEM, but also GSYN, since the absence of syntactic structure makes the sentence semantically uninterpretable as well). Both the COR-GSEM and the COR-GSYN contrasts show significantly larger power for the semantically correct sentences in a frequency range around 40 Hz (as well as some less consistent differences in higher frequencies). No differences were observed between GSEM and GSYN in the frequency range 25-100 Hz. Syntactic unification: beta. Figure 2 presents the comparison between the TF responses of the syntactically correct conditions (COR and GSEM) and the incorrect one (GSYN). Both the COR-GSYN and the GSEM-GSYN contrasts show larger power in the 13-18 Hz frequency range for the syntactically correct sentences. No significant differences were observed between COR and GSEM in the frequency range 1-30 Hz. Conclusions: During the comprehension of correct sentences, both low beta power (13-18 Hz) and gamma power (here around 40 Hz) slowly increase as the sentence unfolds. When a sentence is devoid of syntactic structure, the beta increase is absent. When a sentence is devoid of semantically coherent structure, the gamma increase is absent.
Together the data show a fully within-subjects confirmation of results previously obtained in separate experiments (for review, see Bastiaansen & Hagoort, 2006). This suggests that neuronal synchronization in LIFG at gamma frequencies is related to semantic unification, whereas synchronization at beta frequencies is related to syntactic unification. Thus, our data are consistent with the notion of functional segregation through frequency-coding during unification operations in language comprehension. References: Bastiaansen, M. (2006), 'Oscillatory neuronal dynamics during language comprehension', Prog Brain Res, vol. 159, pp. 179-196. Bookheimer, S. (2002), 'Functional MRI of language: new approaches to understanding the cortical organization of semantic processing', Annu Rev Neurosci, vol. 25, pp. 151-188. Hagoort, P. (2005), 'On Broca, brain, and binding: a new framework', Trends Cogn Sci, vol. 9, no. 9, pp. 416-423. Hagoort, P. (2004), 'Integration of word meaning and world knowledge in language comprehension', Science, vol. 304, no. 5669, pp. 438-441. Hald, L. (2005), 'EEG theta and gamma responses to semantic violations in online sentence processing', Brain & Language, vol. 96, no. 1, pp. 90-105. Jackendoff, R. (2007), 'A Parallel Architecture perspective on language processing', Brain Research, vol. 1146, pp. 2-22. Maris, E. (2007), 'Nonparametric statistical testing of EEG- and MEG-data', J Neurosci Methods, vol. 164, no. 1, pp. 177-190. Mitra, P. (1999), 'Analysis of dynamic brain imaging data', Biophys. J., vol. 76, no. 2, pp. 691-708.
  • Bastiaansen, M. C. M., & Hagoort, P. (2010). Frequency-based segregation of syntactic and semantic unification? Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    During language comprehension, word-level information has to be integrated (unified) into an overall message-level representation. Unification operations occur in parallel at the phonological, syntactic and semantic levels, and meta-analyses of fMRI studies show that largely overlapping areas in the left inferior frontal gyrus (LIFG) are activated during different unification operations. How does the brain functionally segregate these different operations? Previously we established that semantic unification modulates oscillatory EEG activity in the gamma frequency range, and that syntactic unification modulates MEG activity in the beta range. We propose that there is functional segregation of syntactic and semantic unification in LIFG based on frequency-coding. We report a within-subjects replication of the previous findings. Subjects read visually presented sentences that were either correct (COR), semantically incorrect (by replacing the nouns, verbs and adjectives of the COR sentences with semantically unrelated ones; GSEM), or semantically and syntactically incorrect (by randomizing the word order of the COR sentences; GSYN). Time-frequency analysis of power was performed on EEG epochs corresponding to entire sentences. The COR-GSEM and the COR-GSYN contrasts show larger power for the semantically correct sentences in a frequency range around 40 Hz. The COR-GSYN and the GSEM-GSYN contrasts show larger power in the 13-18 Hz frequency range for the syntactically correct sentences. In sum, during the comprehension of correct sentences, both low beta power (13-18 Hz) and gamma power (here around 40 Hz) increase. When a sentence is devoid of syntactic structure, the beta increase is absent; when there is no semantic structure, the gamma increase is absent. Thus, our data are consistent with the notion of functional segregation through frequency-coding during unification operations.
  • Folia, V., Hagoort, P., & Petersson, K. M. (2010). Broca's region: Implicit sequence learning and natural syntax processing. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    In an event-related fMRI study, we examined the overlap between the implicit processing of structured sequences, generated by a simple right-linear artificial unification grammar, and natural syntax-related variability in the same subjects. Research investigating rule learning of potential linguistic relevance through artificial syntax often uses performance feedback and/or explicit instruction concerning the underlying rules. It is assumed that this approach ensures the right type of 'rule-following' because the rules are either explicitly provided to the subjects or explicitly discovered by the subjects during trial-and-error learning with feedback. In this work, we use a novel implicit preference classification task based on the structural mere exposure effect. Under conditions that in important respects are similar to those of natural language development (i.e., no explicit learning or teaching instruction, and no performance feedback), 32 subjects were exposed for 5 days to grammatical sequences during an immediate short-term memory task. On day 5, a preference classification test was administered, in which new sequences were presented. In addition, natural language data were acquired in the same subjects. Implicit preference classification was sensitive enough to show robust behavioral and fMRI effects. Preference classification of structured sequences activated Broca's region (BA 44/45) significantly, which was further activated by artificial syntactic violations. The effects related to artificial syntax in BA 44/45 were identical when we masked these with activity related to natural syntax processing. Moreover, the medial temporal lobe was deactivated during artificial syntax processing, consistent with the view that implicit processing does not rely on declarative memory mechanisms supported by the medial temporal lobe.
In summary, we show that implicit acquisition of structured sequence knowledge results in the engagement of Broca's region during structured sequence processing. We conclude that Broca's region is a generic on-line sequence processor integrating information, in an incremental and recursive manner, independent of whether the sequences processed are structured by a natural or an artificial syntax.
  • Franke, B., Rijpkema, M., Arias Vasquez, A., Veltman, J. A., Brunner, H. G., Hagoort, P., & Fernandez, G. (2010). Genome-wide association study of regional brain volume suggests involvement of known psychiatry candidate genes, identified new candidates for psychiatric disorders and points to potential modes of their action. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Though most psychiatric disorders are highly heritable, it has been hard to identify the genetic risk factors involved, which are most likely of small individual effect size. A possible way to aid the identification of risk genes is the use of intermediate phenotypes. These are supposed to be closer to the biological substrate(s) of the disorder than psychiatric diagnoses, and therefore less genetically complex. Intermediate phenotypes can be defined, e.g., at the level of brain function and of regional brain structure. Both are highly heritable, and regional brain structure is linked to brain function. Within the Brain Imaging Genetics (BIG) study at the Radboud University Nijmegen (Medical Centre) we performed a genome-wide association study (GWAS) in 1000 of the currently 1400 healthy study participants. For all BIG participants, structural MRI brain images were available. Gray and white matter volumes were determined by brain segmentation using SPM software. FSL-FIRST was used to assess volumes of specific brain structures. Genotyping was performed on Affymetrix 6.0 arrays. The results implicate known candidates from earlier GWAS and candidate gene studies of mental disorders in the regulation of regional brain structure. E.g., polymorphisms in CDH13, featuring among the top findings of GWAS in disorders including ADHD, addiction and schizophrenia, were found to be associated with amygdala volume. The ADHD candidate gene SNAP25 was found to be associated with total brain volume. In conclusion, the use of intermediate phenotypes based on (subcortical) brain volumes may shed more light on pathways from genes to diseases, but can also be expected to facilitate gene identification in psychiatric disorders.
  • Hagoort, P. (2010). Beyond Broca, brain, and binding. Talk presented at Symposium Marta Kutas. Nijmegen. 2010-05-19 - 2010-05-20.
  • Hagoort, P. (2010). Beyond the language given: Language processing from an embrained perspective. Talk presented at SISSA colloquium. Trieste, Italy. 2010-12-13.
  • Hagoort, P. (2010). Breintaal. Talk presented at Club of Spinoza Prize winners. Rijnsburg, The Netherlands. 2010-12-01.
  • Hagoort, P. (2010). De talige netwerken in ons brein. Talk presented at the Wetenschappelijke Vergadering en Algemene Ledenvergadering van de Nederlandse Vereniging voor Neurologie (NVN). Amsterdam, The Netherlands. 2010-11-04 - 2010-11-04.
  • Hagoort, P. (2010). Communication beyond the language given. Talk presented at International Neuropsychological Symposium. Ischia, Italy. 2010-06-22 - 2010-06-26.
  • Hagoort, P. (2010). [Organizing committee and session chair]. Second Annual Neurobiology of Language Meeting [NCL 2010]. San Diego, CA, 2010-11-11 - 2010-11-12.
  • Hagoort, P. (2010). In gesprek met ons brein. Talk presented at Paradisolezingen 2010. Amsterdam. 2010-03-28.
  • Hagoort, P. (2011). Language processing: A disembodied perspective [Keynote lecture]. Talk presented at The Workshop Embodied & Situated Language Processing [ESLP 2010]. Bielefeld, Germany. 2011-08-25 - 2011-08-27.
  • Hagoort, P. (2010). The science of human nature. Talk presented at Anthos Conference. Noordwijk, The Netherlands. 2010-01-08.
  • Hagoort, P., Segaert, K., Weber, K. M., De Lange, F. P., & Petersson, K. M. (2010). The suppression of repetition enhancement: A review. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Repetition suppression is generally accepted as the neural correlate of behavioural priming and is often used to selectively identify the neuronal representations associated with a stimulus. However, this does not explain the large number of repetition enhancement effects observed under very similar conditions. Based on a review of a large set of studies, we propose several variables that bias repetition effects towards enhancement instead of suppression. On the one hand, there are stimulus variables that influence the direction of repetition effects: visibility (e.g., in the case of degraded stimuli, perceptual learning occurs); novelty (e.g., in the case of unfamiliar stimuli, a novel network formation process occurs); and timing intervals (e.g., repetition effects are sensitive to stimulus onset asynchronies). On the other hand, repetition effects are not solely automatic processes triggered by particular types or sequences of stimuli. The brain is continuously and actively filtering, attending to and interpreting the information provided by our senses. Consequently, internal state variables like attention, expectation and explicit memory modulate repetition effects towards enhancement versus suppression. Current models of repetition suppression, i.e. the accumulation, fatigue and sharpening models, have so far left out top-down factors and can account for repetition enhancement effects only partially or not at all. Instead, we propose that models incorporating both bottom-up stimulus factors and top-down cognitive factors are called for in order to better understand repetition effects. A good candidate is the predictive coding model, in which sensory evidence is interpreted according to subjective biases and statistical accounts of past encounters.
  • Hagoort, P. (2010). The modular ghost in the recurrent connection machine: Where is the modular mind in a brain full of recurrent connectivity?. Talk presented at The Modularity of Mind: Revisions and Prospects. Heinrich-Heine University Düsseldorf, Germany. 2010-10-29.
  • Händel, B., Van Leeuwen, T. M., Jensen, O., & Hagoort, P. (2010). Lateralization of alpha oscillations in grapheme-color synaesthetes suggests altered color processing. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    In grapheme-color synaesthesia, the percept of a particular grapheme causes additional experiences of color. To investigate this interesting integration of modalities, brain activity of 7 synaesthetes and matched controls was recorded using magnetoencephalography. Subjects had to report the color change of one of two letters presented left and right of a fixation cross. One of the letters was neutral (eliciting no color percept); the other could be neutral, colored, or synaesthesia-inducing (in synaesthetes). Additionally, the side of the color change was validly or invalidly cued. As expected, in both subject groups 10 Hz alpha oscillations decreased contralateral to the attended side, leading to an alpha lateralization. Additionally, controls as well as synaesthetes showed a stronger alpha reduction if the attended letter was colored, indicating that color increased attentional allocation. Interestingly, synaesthetes showed the same alpha decrease for synaesthetic color. While color on the attended side reduced alpha power in both controls and synaesthetes, color on the unattended side reduced alpha power only in synaesthetes. Indeed, psychophysical measures also indicated altered processing of unattended color stimuli in synaesthetes. Only controls profited from the cue when attending the noncolor stimulus. Synaesthetes, however, performed worse when the noncolor stimulus was validly rather than invalidly cued. This means that synaesthetes performed better on the colored stimulus despite an invalid attentional cue. The changed alpha power lateralization and psychophysics due to unattended colorful input indicate that synaesthetes are more affected by color than controls. This might be due to increased attentional demand.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Dynamics of early word learning in nine-month-olds: An ERP study. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    What happens in the brain when infants are learning the meaning of words? Only a few studies (Torkildsen et al., 2008; Friedrich & Friederici, 2008) have addressed this question, but they focused only on novel word learning, not on the acquisition of infants' first words. From behavioral research we know that 12-month-olds can recognize novel exemplars of early typical word categories, but only after being trained on them from nine months on (Schafer, 2005). What happens in the brain during such training? With event-related potentials, we studied the effect of training context on word comprehension. We manipulated the type/token ratio of the training context (one versus six exemplars). 24 normally developing Dutch nine-month-olds (+/- 14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15-month-olds. All trials consisted of a high-resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training-test block contrasted two words that shared neither initial phonemes nor semantic class. The training phase started with six trials of one category, followed by six trials of the second category. Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appears to be larger if the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture-word mismatch paradigm. Here, we observed a similar N400 as Mills et al. (2005) did for 13-month-olds. German 12-month-olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to immaturity of the N400 mechanism: the N400 was present in Dutch 9-month-olds, even though some parents judged their child not to understand most of the words. There was no interaction with training type, suggesting that type/token ratio does not affect infant word recognition of novel exemplars.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word learning in nine-month-olds: Dynamics of picture-word priming. Talk presented at 8th Sepex conference / 1st Joint conference of the EPS and SEPEX. Granada, Spain. 2010-04.

    Abstract

    How do infants learn words? Most studies address this question by focusing on novel word learning. Only a few studies concentrate on the stage when infants learn their first words. Schafer (2005) showed that 12-month-olds can recognize novel exemplars of early typical word categories, but only after being trained on them from nine months on. What happens in the brain during such training? With event-related potentials, we studied the effect of training context on word comprehension. 24 normally developing Dutch nine-month-olds (± 14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15-month-olds. All trials consisted of a high-resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training-test block contrasted two words that shared neither initial phonemes nor semantic class. The training phase started with six trials of one category, followed by six trials of the second category. We manipulated the type/token ratio of the training context (one versus six exemplars). Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appears to be larger if the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture-word mismatch paradigm. Here, we observed a similar N400 as Mills et al. (2005) did for 13-month-olds. German 12-month-olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to immaturity of the N400 mechanism: the N400 was present in Dutch 9-month-olds, even though some parents judged their child not to understand most of the words. There was no interaction with training type, suggesting that type/token ratio does not affect infants' word recognition of novel exemplars.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability and later language development: Insight from ERP's. Talk presented at Child Language Seminar 2010. London. 2010-06-24 - 2010-06-26.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability is related to later word processing skill. Poster presented at XVIIIth Biennial International Conference on Infant Studies, Baltimore, MD.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2010). From reference to sense: An fMRI adaptation study on semantic encoding in language production. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Speaking is a complex, multilevel process, in which the first step is to compute the message that can be syntactically and phonologically encoded. Computing the message requires constructing a mental representation of what we want to express (the reference). This reference is then mapped onto linguistic concepts stored in memory, by which the meaning of the utterance (the sense) is constructed. We used fMRI adaptation to investigate brain areas sensitive to reference and sense in overt speech. By independently manipulating the repetition of reference and sense across subsequently produced sentences in a picture description task, we distinguished sets of regions sensitive to these two steps in speaking. Encoding reference involved the bilateral inferior parietal lobes (BA 39) and right inferior frontal gyrus (BA 45), suggesting a role in constructing a non-linguistic mental representation. Left middle frontal gyrus (BA 6), bilateral superior parietal lobes and bilateral posterior temporal gyri (BA 37) were sensitive to both sense and reference processing. These regions thus seem to support semantic encoding, the process of mapping reference onto sense. Left inferior frontal gyrus (BA 45), left middle frontal gyrus (BA 44) and left angular gyrus (BA 39) showed adaptation to sense, and therefore appear sensitive to the output of semantic encoding. These results reveal the neural architecture for the first steps in producing an utterance. In addition, they show the feasibility of studying overt speech at a detailed level of analysis in fMRI studies.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2010). From reference to sense: An fMRI adaptation study on semantic encoding in language production. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Speaking is a complex, multilevel process, in which the first step is to compute the message that can be syntactically and phonologically encoded. Computing the message requires constructing a mental representation of what we want to express (the reference). This referent is mapped onto linguistic concepts stored in memory, by which the meaning of the utterance (the sense) is constructed. So far, only one study has targeted semantic encoding in sentence production (Menenti, Segaert & Hagoort, submitted), and none has dissected this process further. We used fMRI adaptation to investigate brain areas sensitive to reference and sense in overt speech. fMRI adaptation is a phenomenon whereby repeating a stimulus property changes the BOLD-response in regions sensitive to that property. By independently manipulating repetition of reference and sense across subsequently produced sentences in a picture description task, we distinguished sets of areas sensitive to these steps in semantic encoding in speaking. Methods: In a picture description paradigm, the described situation (the reference) and the linguistic semantic structure (the sense) of subsequently produced sentences were independently repeated across trials. Participants described pictures depicting events involving transitive verbs such as hit, kiss, greet, and two actors shown in different colors, with sentences such as ‘The red man greets the green woman’. In our factorial design, the same situation involving the same actors could subsequently be described by two different sentences (repeated reference, novel sense), or the same sentence could subsequently be used to describe two different situations (novel reference, repeated sense). For reference, we controlled for the repetition of actors. For sense, we controlled for the repetition of individual words. See figure 1 for design and stimuli.
To correct for increased movement and susceptibility artifacts due to speech, we scanned using parallel-acquired inhomogeneity-desensitized fMRI at 3 T (Poser, Versluis, Hoogduin et al. 2006). Five images were acquired per TR and combined based on local T2* (Buur, Poser and Norris 2009). Results: The behavioral data (response onset, response duration and total time to complete the responses) showed effects of both sense and reference. In the fMRI analyses we looked for areas sensitive to only sense, only reference, or showing a conjunction of both factors. Encoding reference involved the bilateral inferior parietal lobes (BA 39), which showed repetition suppression, and right inferior frontal gyrus (BA 45), which showed repetition enhancement. Left inferior frontal gyrus (BA 45) showed suppression to repetition of sense, while left middle frontal gyrus (BA 44) and left angular gyrus (BA 39) showed enhancement. Left middle frontal gyrus (BA 6), bilateral superior parietal lobes and bilateral posterior temporal gyri (BA 37) showed repetition suppression to both sense and reference processing (conjunction analysis with conjunction null). See figure 2 for the results (p < .05 FWE-corrected for multiple comparisons at cluster level; maps thresholded at p < .001 uncorrected at voxel level). Conclusions: The input to semantic encoding is the construction of a referent, a mental representation of what the utterance is about. The bilateral temporo-parietal junctions are involved in this process, as they show sensitivity to repetition of reference but not sense. RIFG shows enhancement and may therefore be involved in constructing a more comprehensive model spanning several utterances. Semantic encoding itself requires mapping the reference onto the sense. This involves large parts of the language network: bilateral posterior temporal lobes and upper left inferior frontal gyrus were sensitive to both reference and sense. Finally, sense recruits left inferior frontal gyrus (BA 45).
This area is sensitive to syntactic encoding (Bookheimer 2002), the next step in speaking. These results reveal the neural architecture for the first steps in producing an utterance. In addition, they show the feasibility of studying overt speech at a detailed level of analysis in fMRI studies. References: Bookheimer, S. (2002), 'Functional MRI of language: new approaches to understanding the cortical organization of semantic processing', Annual Review of Neuroscience, vol. 25, pp. 151-188. Buur, P. (2009), 'A dual echo approach to removing motion artefacts in fMRI time series', Magnetic Resonance in Medicine, vol. 22, no. 5, pp. 551-560. Menenti, L. (submitted), 'The neuronal infrastructure of speaking'. Poser, B. (2006), 'BOLD contrast sensitivity enhancement and artifact reduction with multiecho EPI: parallel-acquired inhomogeneity desensitized fMRI', Magnetic Resonance in Medicine, vol. 55, pp. 1227-1235.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: Identification of the neural signature of a concept is a key challenge in cognitive neuroscience. In recent years, a number of studies have demonstrated the possibility of decoding conceptual information from spatial patterns in functional MRI data (Hauk et al., 2008; Shinkareva et al., 2008). An important unresolved question is whether similar decoding performance can be attained using electrophysiological measurements. The development of EEG-based concept decoding algorithms is interesting from an applications perspective, because the high temporal resolution of the EEG allows pattern recognition in real-time. In this study we investigate the possibility of identifying conceptual representations from event-related EEG on the basis of the presentation of an object in three different modalities: an object’s written name, its spoken name and its line drawing. Methods: Twenty-four native Dutch speakers participated in the study. They were presented with concepts from three semantic categories: two relevant categories (animals, tools) and a task category. There were four concepts per category; all concepts were presented in three modalities: auditory, visual (line drawings) and textual (written Dutch words). Each item was repeated 80 times (relevant) or 16 times (task) in each modality. The text and picture stimuli were presented for 300 ms. The interval between stimuli had a random duration between 1000 and 1200 ms. Participants were instructed to respond upon appearance of items from the task category. Continuous EEG was registered using a 64-channel system. The data were divided into epochs of one second starting 300 ms before stimulus onset. We used the time domain representation of the signal as input to the classifier (a linear support vector machine; Vapnik, 2000). The classifier was trained to identify which of two semantic categories (animal or tool) was presented to the subject.
Performance of the classifier was computed as the proportion of correctly classified trials. Significance of the classification outcome was computed using a binomial test (Burges, 1998). In the first analysis we classified the semantic category of stimuli from the entire dataset, with trials of all modalities equally represented. In the second analysis we classified trials within each modality separately. In the third analysis we compared classification performance for the real categories with classification performance for pseudo-categories, to investigate the role of perceptual features of the presented objects in the absence of a transparent contribution of conceptual information. The pseudo-categories were composed by randomly assigning the concepts to classes such that each class contained exemplars of both categories. Results: In the first analysis we assessed the ability to discriminate patterns of EEG signals referring to the representation of animals versus tools across the three tested modalities. Significant accuracy was achieved for nineteen out of twenty subjects. The highest classification accuracy achieved across modalities was 0.69, with a mean value of 0.61 over all 20 subjects. To check whether the performance of the classifier was consistent during the experimental session, we visualized the correctness of the classifier's decisions over the time-course of the session. Fig. 1 shows that the classifier identified trials from the picture blocks more accurately than trials from the text and audio blocks. To further assess modality-specific classification performance, we trained and tested the classifiers within each of the individual modalities separately (Fig. 2). For pictures, the highest classification accuracy reached over all subjects was 0.92, and classification was significant (p<0.001) for all 20 subjects, with a mean value of 0.80.
The classifier for the auditory modality performed significantly better than chance (p<0.001 and p<0.01) in 15 out of 20 subjects, with a mean value of 0.60. The classifier for the orthographic modality performed significantly better than chance in 5 out of 20 subjects, with a mean value of 0.56. Comparison of the classification performance for real and pseudo-categories revealed a high impact of conceptually driven activity on the classifier's performance (Fig 3). Mean accuracies of pseudo-category classification over all subjects were 0.56 for pictures, 0.56 for audio, and 0.55 for text. Significant (p<0.005) differences from the real-category results were found for all pseudo-categories in the picture modality, for eight out of ten pseudo-categories in the auditory modality, and for one out of ten pseudo-categories in the orthographic modality. Conclusions: The results show that stable neural patterns induced by the presentation of stimuli of different categories can be identified from EEG. High classification performance was achieved for all subjects. The visual modality appeared to be much easier to classify than the other modalities. This indicates the existence of category-specific patterns in visual recognition of objects (Kiefer 2001; Liu et al., 2009). Currently we are working towards interpreting the patterns found during classification using Bayesian logistic regression. A considerable reduction of performance was found when using pseudo-categories instead of the real categories. This indicates that the classifier identified neural activity at the level of conceptual representations. Our results could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of brain-computer interface applications. References: Burges, C.
(1998), 'A tutorial on support vector machines for pattern recognition', Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167. Hauk, O. (2008), 'Imagery or meaning? Evidence for a semantic origin of category-specific brain activity in metabolic imaging', European Journal of Neuroscience, vol. 27, no. 7, pp. 1856-66. Kiefer, M. (2001), 'Perceptual and semantic sources of category-specific effects: Event-related potentials during picture and word categorization', Memory and Cognition, vol. 29, no. 1, pp. 100-16. Liu, H. (2009), 'Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex', Neuron, vol. 62, no. 2, pp. 281-90. Shinkareva, S. (2008), 'Using fMRI brain activation to identify cognitive states associated with perception of tools and dwellings', PLoS ONE, vol. 3, no. 1, pp. e1394.
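The decoding procedure described in this abstract (time-domain EEG features fed to a linear support vector machine, with a binomial test against the 50% chance level) can be sketched in Python. This is an illustrative reconstruction on synthetic data, not the authors' code; the data dimensions, cross-validation scheme, and regularization defaults are assumptions.

```python
# Sketch of binary semantic-category decoding from single-trial EEG:
# a linear SVM on flattened time-domain epochs, with accuracy assessed
# against chance via a binomial test. Synthetic data stand in for real
# recordings; all dimensions are hypothetical.
import numpy as np
from scipy.stats import binomtest
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 64, 256

# Fake epochs: class 1 carries a small additive signal on a few channels.
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :8, 100:150] += 0.5

# Flatten each epoch into one feature vector (channels x time points).
X_flat = X.reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
y_pred = cross_val_predict(clf, X_flat, y, cv=5)
accuracy = (y_pred == y).mean()

# Binomial test of the number of correct trials against the 0.5 chance level.
p_value = binomtest(int((y_pred == y).sum()), n_trials, p=0.5,
                    alternative='greater').pvalue
print(f"accuracy={accuracy:.2f}, p={p_value:.4f}")
```

The pseudo-category control in the abstract amounts to rerunning the same pipeline after randomly reassigning the labels so that each class mixes both true categories, which should drive accuracy back toward chance.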
  • van Leeuwen, T. M., Den Ouden, H. E., & Hagoort, P. (2010). Bottom-up versus top-down: Effective connectivity reflects individual differences in grapheme-color synesthesia. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    In grapheme-color synesthesia, letters elicit a color. Neural theories propose that synesthesia is due to changes in connectivity between sensory areas. However, no studies on functional connectivity in synesthesia have been published to date. Here, we applied psycho-physiological interactions (PPI) and dynamic causal modeling (DCM) in fMRI to assess connectivity patterns in synesthesia. We tested whether synesthesia is mediated by bottom-up, feedforward connections from grapheme areas directly to perceptual color area V4, or by top-down feedback connections from the parietal cortex to V4. We took individual differences between synesthetes into account: 'projector' synesthetes experience their synesthetic color in a spatial location, while 'associators' only have a strong association of the color with the grapheme. We included 19 grapheme-color synesthetes (14 projectors, 5 associators) and located group effects of synesthesia in left superior parietal lobule (SPL) and right color area V4. With PPI, taking SPL as a seed region, we found an increase in functional coupling with visual areas (including V4) for the synesthesia condition. With PPI, however, we cannot determine the direction of this functional coupling. Based on the GLM results, we specified 2 DCMs to test whether a bottom-up or a top-down model would provide a better explanation for synesthetic experiences. Bayesian Model Selection showed that overall, neither model was much more likely than the other (exceedance probability of 0.589). However, when the models were divided according to projector or associator group, BMS showed that the bottom-up, feedforward model had an exceedance probability of 0.98 for the projectors: it was strongly preferred for this group. The top-down, feedback model was preferred for the associator group (exceedance probability = 0.96). To our knowledge, we are the first to report empirical evidence of changes in functional and effective connectivity in synesthesia.
Whether bottom-up or top-down mechanisms underlie synesthetic experiences has long been debated; the finding that different connectivity patterns can explain differential experiences of synesthesia may greatly improve our insight into the neural mechanisms of the phenomenon.
  • Van den Brink, D., Van Berkum, J. J. A., Buitelaar, J., & Hagoort, P. (2010). Empathy matters for social language processing: ERP evidence from individuals with and without autism spectrum disorder. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: When a 6-year-old girl claims that she cannot sleep without her teddy bear, hardly anybody will look surprised. However, when an adult man says the same thing, this is bound to raise some eyebrows. Besides linguistic content, the voice also carries information about a person's identity relevant for communication, such as idiosyncratic features related to the gender and age of the speaker (Campanella 2007). A previous ERP study investigated inter-individual differences in the cognitive processes that mediate the integration of social information in a linguistic context (Van den Brink submitted). Individuals with an empathizing-driven cognitive style showed larger ERP effects to mismatching information about the speaker than individuals who empathize to a lesser degree. The present ERP study tested individuals with Autism Spectrum Disorder (ASD) to investigate verbal social information processing in a clinical population that is impaired in social interaction. Methods: Participants. The ERP experiment was conducted with 20 Dutch adult males clinically diagnosed with ASD (verbal IQ > 100), 22 healthy men and 12 healthy women. Materials. Experimental materials consisted of 160 Dutch sentences with a lexical content that either did or did not fit probabilistic inferences about the speaker's sex, age, and socio-economic status, as could be inferred from the speaker's voice. Translated examples of speaker identity (SI) incongruent utterances are "Before I leave I always check whether my make-up is still in place", in a male voice, "Every evening I drink some wine before I go to sleep" in a young child's voice, and "I have a large tattoo on my back" spoken in an 'upper-class' accent. In addition, participants heard 48 sentences containing classic lexical semantic (LS) anomalies, which are pure linguistic violations known to elicit an N400, matched with semantically congruent sentences (e.g., "You wash your hands with horse and water" vs.
"You wash your hands with soap and water"). Procedure. Participants listened to 352 sentences, spoken by 21 different people. They were asked to indicate after each sentence how odd they thought the sentence was, using a 5-point-scale ranging from "perfectly normal" to "extremely odd". Participants filled out Dutch translations of the Autism and Empathizing Questionnaires (AQ: Baron-Cohen 2001; EQ: Baron-Cohen 2004). EEG recording. EEG was recorded from 28 electrodes referenced to the left mastoid. Electrode impedances were below 5 kOhm. Signals were recorded using a 200 Hz low-pass filter, a time constant of 10 sec., and a 500 Hz sampling frequency. After off-line re-referencing of the EEG signals to the mean of the left and right mastoid, they were filtered with a 30 Hz low-pass filter. Segments ranging from 200 ms before to 1500 ms after the acoustic onset of the critical word were baseline-corrected. Segments containing artifacts were rejected (12.7%). Results: Behavioral results. EQ scores differed significantly between groups (p < .001), with average scores of 22.1 for ASD, 40.6 for men, and 52.1 for women. Statistical analysis of the rating data (see Figure 1) consisted of ANOVAs with the within-subject factors Manipulation (LS, SI) and Congruity (congruent, incongruent), and the between-subject factor Group (ASD, men, women). A significant inter­action between Manipulation and Group (p < .01) indicated that the participant groups rated the items differently. For the LS items, a main effect of Congruity (p < .001), but no interaction of Congruity by Group (F < 1) was obtained. For the SI items a main effect of Congruity (p < .001), as well as an interaction of Congruity by Group (p < .01) was found. The ASD group rated the SI violations as less odd than the male and female participant group (2.9 versus 3.4 and 3.7, respectively). 
In addition, significant positive correlations with EQ score were found for SI effect size (see Figure 2) as well as SI violations (both p < .01). ERP results. Figure 3 displays the ERP waveforms for the three participant groups. Mean amplitude values in the N400 and Late Positive Component latency ranges (300-600 and 700-1000 ms) from 7 centro-parietal electrodes did not reveal a Congruity by Group interaction. However, a significant correlation was found between the size of the SI effect in the N400 latency window and EQ score (p < .01), with individuals who scored high on EQ showing a larger positive effect. Participants were subdivided into three groups based on EQ score: low empathizers (M = 20; 16 ASD, 2 men), medium empathizers (M = 37; 4 ASD, 12 men, 2 women), and high empathizers (M = 53; 8 men, 10 women). See Figure 4 for the SI difference waveforms for the three EQ groups. Individuals who empathize to a larger degree showed an earlier and significantly larger positive effect (p < .05), related to decision making, than low empathizers (i.e., mostly individuals with ASD). Conclusions: Our results clearly show that empathy matters for verbal social information processing, but not for lexical semantic processing. Behavioral results reveal that individuals who scored low on the EQ had more difficulties detecting violations of speaker and message. At the neuronal level, individuals who empathize to a lesser degree showed a delayed onset of, as well as a smaller, positive ERP effect, which has been related to decision-making processing (Nieuwenhuis 2005). We conclude that high-functioning individuals with ASD, who demonstrate low empathizing abilities, do not experience problems in pure linguistic processing, as indexed by the behavioral and electrophysiological results for the lexical semantic manipulation.
However, differences in onset latency, as well as size of the late positive effect in the speaker identity manipulation, suggest that they do have difficulties with assigning value to social information in language processing. References: Baron-Cohen, S. (2001), 'The Autism spectrum Quotient (AQ): Evidence from Asperger Syndrome/High Functioning Autism, males and females, scientists and mathematicians', Journal of Autism and Developmental Disorders, vol. 31, pp. 5-17. Baron-Cohen, S. (2004), 'The Empathy Quotient: An investigation of adults with Asperger Syndrome or High Functioning Autism, and normal sex differences', Journal of Autism and Developmental Disorders, vol. 34, pp. 163-175. Campanella, S. (2007), 'Integrating face and voice in person perception', Trends in Cognitive Sciences, vol. 11, no. 12, pp. 535-543. Nieuwenhuis, S. (2005), 'Decision making, the P3, and the locus coeruleus-norepinephrine system', Psychological Bulletin, vol. 131, no. 4, pp. 510-532. Van den Brink, D. (submitted), 'Empathy matters: ERP evidence for inter-individual differences in social language processing'.
  • Van den Brink, D., Van Berkum, J. J. A., Buitelaar, J., & Hagoort, P. (2010). Empathy matters for social language processing: ERP evidence from individuals with and without autism spectrum disorder. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    When a young girl claims that she cannot sleep without her teddy bear, hardly anybody will look surprised. However, when an adult man says the same thing, this is bound to raise some eyebrows. A previous ERP study revealed that individual differences in empathizing affect the integration of this type of extra-linguistic, social information in a linguistic context. The present ERP study tested individuals with autism spectrum disorder (ASD) to investigate verbal social information processing in a clinical population that is impaired in social interaction. Twenty adult males diagnosed with ASD (verbal IQ > 100), 22 healthy men and 12 healthy women participated. Experimental materials consisted of sentences with a lexical content that either did or did not fit probabilistic inferences about the speaker's sex, age, and socio-economic status, as could be inferred from the speaker's voice. Examples of speaker identity incongruent utterances are "Before I leave I always check whether my make-up is still in place", in a male voice, "Every evening I drink some wine before I go to sleep" in a young child's voice, and "I have a large tattoo on my back" spoken in an "upper-class" accent. In addition, we included a purely linguistic, lexical semantic manipulation (e.g., "You wash your hands with soap/horse and water"). Participants indicated after each spoken sentence, using a five-point scale, how odd they thought the sentence was, while their EEG was recorded. They also filled out a questionnaire on their empathizing ability. Our results reveal that empathy matters for verbal social information processing, but not for lexical semantic processing. Behavioral results show that individuals who scored low on empathizing ability had more difficulties detecting violations of speaker and message.
At the neuronal level, individuals who empathize to a lesser degree showed a delayed onset of, as well as a smaller, positive ERP effect, which can be related to decision-making processes. We conclude that high-functioning individuals with ASD, who demonstrate low empathizing abilities, do not experience problems in pure linguistic processing, but that they do have difficulties with assigning value to social information in language processing.
  • Wang, L., Bastiaansen, M. C. M., Jensen, O., Hagoort, P., & Yang, Y. (2010). Beta oscillation relates with the Event Related Field during language processing. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: MEG has the advantage of combining high temporal resolution with good spatial resolution in measuring neural activity. The event-related field (ERF) has been extensively explored in psycholinguistic research. For example, the N400m was found to be sensitive to semantic violations (Helenius, 2002). On the other hand, induced oscillatory responses of the EEG and MEG during language comprehension are less commonly investigated. Oscillatory dynamics have been shown to also contain relevant information, which can be measured, amongst others, by time-frequency (TF) analyses of power and/or coherence changes (Bastiaansen & Hagoort, 2006; Weiss et al., 2003). In the present study we explicitly investigate whether there is a (signal-analytic) relationship between MEG oscillatory dynamics (notably power changes) and the N400m. Methods: There were two types of auditory sentences, in which the last words were either semantically congruent (C) or incongruent (IC) with respect to the sentence context. MEG signals were recorded with a 151-sensor CTF Omega system, and MRIs were obtained with a 1.5 T Siemens system. We segmented the MEG data into trials starting 1 s before and ending 2 s after the onset of the critical words. The ERFs were calculated by averaging over trials, separately for the two conditions. The time-frequency representations (TFRs) of the single trials were calculated using a wavelet technique, after which the TFRs were averaged over trials for both conditions. A cluster-based random permutation test (Maris & Oostenveld, 2007) was used to assess the significance of the difference between the two conditions, both for the ERFs and the TFRs. In order to characterize the relationship between beta power (see Results) and the N400m, we performed a linear regression analysis between beta power and N400m for the sensors that showed significant differences in ERFs or TFRs between the two conditions.
Finally, a beamforming approach [Dynamic Imaging of Coherent Sources (DICS)] was applied to identify the sources of the beta power changes. Results: The ERF analysis showed that between approximately 200 ms and 700 ms after the onset of the critical words, the IC condition elicited larger amplitudes than the C condition over bilateral temporal areas, with a clear left-hemisphere preponderance (Fig. 1A). Statistical analysis revealed significant differences over the left temporal area (Fig. 1B). In a similar time window (200-700 ms), a beta power suppression (16-19 Hz) was found only for the IC condition, but not for the C condition (Fig. 2A). The statistical analysis of the beta power difference between the two conditions revealed a significantly lower beta power for the IC than the C condition over left temporal cortex (Fig. 2B). The comparable topographies of the N400m and beta differences suggest a relationship between these two effects. In order to evaluate this relationship, we performed a linear regression between beta power and N400m for both the IC and C conditions, in both the post-stimulus time window (200-700 ms) and the pre-stimulus time window (-600 to -200 ms). In the time window of 200-700 ms, we found a positive linear regression between beta power and N400m for the IC condition (R = .32, p = .03) but not for the C condition (p = .83). For the IC condition, we found that the lower the beta power, the lower the N400m amplitude. In the time window of -600 to -200 ms, the C condition showed a positive linear regression between beta power and N400m (R = .27, p = .06), but the IC condition did not (p = .74). The source modeling analysis allowed us to estimate the generators of the beta suppression for the IC relative to the C condition. The source of the beta suppression (around 18 Hz) within 200-700 ms was identified in the left inferior frontal gyrus (LIFG, BA 47) (Fig. 3).
Conclusions: The ERF difference between the two conditions is consistent with previous MEG studies. However, this is the first time that beta power suppression has been related to the amplitude of the N400m. When the input is highly predictable (C condition), lower beta power in the pre-stimulus interval predicts better performance (a smaller N400m), while the low predictability (IC condition) of the input produced an association between the N400m and the beta power in the post-stimulus interval. Moreover, the generator of the beta suppression was identified in the LIFG, which has been related to semantic unification (Hagoort, 2005). Together with other studies on the role of beta oscillations across a range of cognitive functions (Pfurtscheller, 1996; Weiss, 2005; Hirata, 2007; Bastiaansen, 2009), we propose that beta oscillations generally reflect the engagement of brain networks: a lower beta power indicates a higher engagement for information processing. References: Bastiaansen, M. (2009), 'Oscillatory brain dynamics during language comprehension', Event-Related Dynamics of Brain Oscillations, vol. 159, pp. 182-196. Bastiaansen, M. (2009), 'Syntactic Unification Operations Are Reflected in Oscillatory Dynamics during On-line Sentence Comprehension', Journal of Cognitive Neuroscience, doi: 10.1162/jocn.2009.21283, pp. 1-15. Hagoort, P. (2005), 'On Broca, brain, and binding: a new framework', Trends in Cognitive Sciences, vol. 9, no. 9, pp. 416-423. Helenius, P. (2002), 'Abnormal auditory cortical activation in dyslexia 100 msec after speech onset', Journal of Cognitive Neuroscience, vol. 14, pp. 603-617. Hirata, M. (2007), 'Effects of the emotional connotations in words on the frontal areas — a spatially filtered MEG study', NeuroImage, vol. 35, pp. 420-429. Maris, E. (2007), 'Nonparametric statistical testing of EEG- and MEG-data', Journal of Neuroscience Methods, vol. 164, no. 1, pp. 177-190. Pfurtscheller, G.
(1996), 'Post-movement beta synchronization. A correlate of an idling motor area?', Electroencephalography and Clinical Neurophysiology, vol. 98, pp. 281-293. Weiss, S. (2003), 'The contribution of EEG coherence to the investigation of language', Brain and Language, vol. 85, pp. 325-343. Weiss, S. (2005), 'Increased neuronal communication accompanying sentence comprehension', International Journal of Psychophysiology, vol. 57, pp. 129-141.
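The regression step reported in this abstract, relating beta power to N400m amplitude, is an ordinary least-squares fit of one scalar measure on another. The sketch below illustrates it on synthetic values; the effect size, noise level, and number of observations are invented for illustration and do not reflect the study's data.

```python
# Illustrative linear regression between per-observation beta power
# (suppression relative to baseline) and N400m amplitude, as in the
# analysis described above. All values are synthetic placeholders.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n_obs = 20  # hypothetical number of observations

beta_power = rng.normal(loc=-0.3, scale=0.1, size=n_obs)  # suppression values
n400m = 2.0 * beta_power + rng.normal(scale=0.05, size=n_obs)

# A positive slope means lower beta power goes with a smaller N400m.
fit = linregress(beta_power, n400m)
print(f"R={fit.rvalue:.2f}, p={fit.pvalue:.3f}, slope={fit.slope:.2f}")
```

With real data, `beta_power` and `n400m` would be the per-trial or per-sensor values extracted from the time windows named in the abstract.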
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2010). "Chomsky illusion"? ERP evidence for the influence of information structure on syntactic processing. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.
  • Wang, L., Bastiaansen, M. C. M., Jensen, O., Hagoort, P., & Yang, Y. (2010). Modulation of the beta rhythm during language comprehension. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Event-related potentials and fields have been extensively explored in psycholinguistic research. However, relevant information might also be contained in induced oscillatory brain responses. We used magnetoencephalography (MEG) to explore oscillatory responses elicited by semantically incongruent words in a classical sentence comprehension paradigm. Sentences in which the last word was either semantically congruent or incongruent with respect to the sentence context were presented auditorily. Consistent with previous studies, a stronger N400m component was observed over left temporal areas in response to incongruent compared to congruent sentence endings. At the same time, the analysis of oscillatory activity showed a larger beta power decrease (16-19 Hz) for the incongruent than the congruent condition in the N400m time window (200-700 ms), also over the left temporal area. The relationship between the beta decrease and the N400m was confirmed by a linear regression analysis. Moreover, using a beamforming approach we localized the sources of the beta decrease to the left prefrontal cortex (BA 47). We propose that beta oscillations reflect the engagement of brain networks: lower beta power indicates higher engagement for information processing. When the input is highly predictable (congruent condition), lower beta power in the pre-stimulus interval predicts better performance (a smaller N400m), while for input with low predictability (incongruent condition) the N400m is related to beta power in the post-stimulus interval, indicating the engagement of brain networks for integrating the unexpected information. This 'engagement' hypothesis is also compatible with reported beta effects in other cognitive domains.
  • Willems, R. M., De Boer, M., De Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Although language is an effective means of communication, it is unclear how linguistic and communicative abilities relate to each other. Communicative message generation involves perspective taking, or mentalizing, and some researchers have argued that mentalizing depends on language. In this study, we directly tested the relationship between cerebral structures supporting communicative message generation and language abilities. Healthy participants were scanned with fMRI while they participated in a verbal communication paradigm in which we independently manipulated the communicative intent and linguistic difficulty of message generation. We found that dorsomedial prefrontal cortex, a brain area consistently associated with mentalizing, was sensitive to the communicative intent of utterances, irrespective of linguistic difficulty. In contrast, left inferior frontal cortex, an area known to be involved in language, was sensitive to the linguistic demands of utterances, but not to communicative intent. These findings indicate that communicative and linguistic abilities rely on different neuro-cognitive architectures. We suggest that the generation of utterances with communicative intent relies on our ability to deal with the mental states of other people ("mentalizing"), which seems distinct from language.
  • Zhu, Z., Wang, S., Hagoort, P., Feng, G., Chen, H.-C., & Bastiaansen, M. C. M. (2010). Inferior frontal gyrus is activated during sentence-level semantic unification in both explicit and implicit reading tasks. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.
  • Zhu, Z., Wang, S., Bastiaansen, M. C. M., Petersson, K. M., & Hagoort, P. (2010). Trial-by-trial coupling of concurrent EEG and fMRI identifies BOLD correlates of the N400. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
  • Zhu, Z., Wang, S., Bastiaansen, M. C. M., Petersson, K. M., & Hagoort, P. (2010). Trial-by-trial coupling of concurrent EEG and fMRI identifies BOLD correlates of the N400. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.
  • Hagoort, P. (2009). De contouren van een neurobiologische samenleving. Talk presented at De Neurobiologische Samenleving (Plenaire Conferentie van de Sociaal Wetenschappelijke Raad). Leusden, the Netherlands. 2009-06-12.