Peter Hagoort

Publications

  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2019). Consistency influences altered auditory feedback processing. Quarterly Journal of Experimental Psychology, 72(10), 2371-2379. doi:10.1177/1747021819838939.

    Abstract

    Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that consistency in previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, whereas the others remained unperturbed (the inconsistent condition). The results showed that speakers’ responses were affected by feedback consistency, with stronger speech changes in the consistent condition compared with the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, which are not in line with all current models.
  • Hagoort, P. (Ed.). (2019). Human language: From genes and brains to behavior. Cambridge, MA: MIT Press.
  • Hagoort, P., & Beckmann, C. F. (2019). Key issues and future directions: The neural architecture for language. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 527-532). Cambridge, MA: MIT Press.
  • Hagoort, P. (2019). Introduction. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 1-6). Cambridge, MA: MIT Press.
  • Hagoort, P. (2019). The meaning making mechanism(s) behind the eyes and between the ears. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190301. doi:10.1098/rstb.2019.0301.

    Abstract

    In this contribution, the following four questions are discussed: (i) where is meaning?; (ii) what is meaning?; (iii) what is the meaning of mechanism?; (iv) what are the mechanisms of meaning? I will argue that meanings are in the head. Meanings have multiple facets, but minimally one needs to make a distinction between single word meanings (lexical meaning) and the meanings of multi-word utterances. The latter ones cannot be retrieved from memory, but need to be constructed on the fly. A mechanistic account of the meaning-making mind requires an analysis at both a functional and a neural level, the reason being that these levels are causally interdependent. I will show that an analysis exclusively focusing on patterns of brain activation lacks explanatory power. Finally, I shall present an initial sketch of how the dynamic interaction between temporo-parietal areas and inferior frontal cortex might instantiate the interpretation of linguistic utterances in the context of a multimodal setting and ongoing discourse information.
  • Hagoort, P. (2019). The neurobiology of language beyond single word processing. Science, 366(6461), 55-58. doi:10.1126/science.aax0289.

    Abstract

    In this Review, I propose a multiple-network view for the neurobiological basis of distinctly human language skills. A much more complex picture of interacting brain areas emerges than in the classical neurobiological model of language. This is because using language is more than single-word processing, and much goes on beyond the information given in the acoustic or orthographic tokens that enter primary sensory cortices. This requires the involvement of multiple networks with functionally nonoverlapping contributions.

  • Heilbron, M., Ehinger, B., Hagoort, P., & De Lange, F. P. (2019). Tracking naturalistic linguistic predictions with deep neural language models. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 424-427). doi:10.32470/CCN.2019.1096-0.

    Abstract

    Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being ‘prediction encouraging’, potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word’s predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.
  • Hulten, A., Schoffelen, J.-M., Udden, J., Lam, N. H. L., & Hagoort, P. (2019). How the brain makes sense beyond the processing of single words – An MEG study. NeuroImage, 186, 586-594. doi:10.1016/j.neuroimage.2018.11.035.

    Abstract

    Human language processing involves combinatorial operations that make human communication stand out in the animal kingdom. These operations rely on a dynamic interplay between the inferior frontal and the posterior temporal cortices. Using source reconstructed magnetoencephalography, we tracked language processing in the brain, in order to investigate how individual words are interpreted when part of sentence context. The large sample size in this study (n = 68) allowed us to assess how event-related activity is associated across distinct cortical areas, by means of inter-areal co-modulation within an individual. We showed that, within 500 ms of seeing a word, the word's lexical information has been retrieved and unified with the sentence context. This does not happen in a strictly feed-forward manner, but by means of co-modulation between the left posterior temporal cortex (LPTC) and left inferior frontal cortex (LIFC), for each individual word. The co-modulation of LIFC and LPTC occurs around 400 ms after the onset of each word, across the progression of a sentence. Moreover, these core language areas are supported early on by the attentional network. The results provide a detailed description of the temporal orchestration related to single word processing in the context of ongoing language.

  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2019). No language unification without neural feedback: How awareness affects sentence processing. NeuroImage, 202: 116063. doi:10.1016/j.neuroimage.2019.116063.

    Abstract

    How does the human brain combine a finite number of words to form an infinite variety of sentences? According to the Memory, Unification and Control (MUC) model, sentence processing requires long-range feedback from the left inferior frontal cortex (LIFC) to left posterior temporal cortex (LPTC). Single word processing however may only require feedforward propagation of semantic information from sensory regions to LPTC. Here we tested the claim that long-range feedback is required for sentence processing by reducing visual awareness of words using a masking technique. Masking disrupts feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that masked single words still elicit an N400 ERP effect, a neural signature of semantic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed two experiments in which we measured electroencephalography (EEG) while 40 subjects performed a masked priming task. Words were presented either successively or simultaneously, thereby forming a short sentence that could be congruent or incongruent with a target picture. This sentence condition was compared with a typical single word condition. In the masked condition we only found an N400 effect for single words, whereas in the unmasked condition we observed an N400 effect for both unmasked sentences and single words. Our findings suggest that long-range feedback processing is required for sentence processing, but not for single word processing.
  • Schoffelen, J.-M., Oostenveld, R., Lam, N. H. L., Udden, J., Hulten, A., & Hagoort, P. (2019). A 204-subject multimodal neuroimaging dataset to study language processing. Scientific Data, 6(1): 17. doi:10.1038/s41597-019-0020-y.

    Abstract

    This dataset, colloquially known as the Mother Of Unification Studies (MOUS) dataset, contains multimodal neuroimaging data that has been acquired from 204 healthy human subjects. The neuroimaging protocol consisted of magnetic resonance imaging (MRI) to derive information at high spatial resolution about brain anatomy and structural connections, and functional data during task, and at rest. In addition, magnetoencephalography (MEG) was used to obtain high temporal resolution electrophysiological measurements during task, and at rest. All subjects performed a language task, during which they processed linguistic utterances that either consisted of normal or scrambled sentences. Half of the subjects were reading the stimuli, the other half listened to the stimuli. The resting state measurements consisted of 5 minutes eyes-open for the MEG and 7 minutes eyes-closed for fMRI. The neuroimaging data, as well as the information about the experimental events are shared according to the Brain Imaging Data Structure (BIDS) format. This unprecedented neuroimaging language data collection allows for the investigation of various aspects of the neurobiological correlates of language.
  • Schoot, L., Hagoort, P., & Segaert, K. (2019). Stronger syntactic alignment in the presence of an interlocutor. Frontiers in Psychology, 10: 685. doi:10.3389/fpsyg.2019.00685.

    Abstract

    Speakers are influenced by the linguistic context: hearing one syntactic alternative leads to an increased chance that the speaker will repeat this structure in the subsequent utterance (i.e., syntactic priming, or structural persistence). Top-down influences, such as whether a conversation partner (or, interlocutor) is present, may modulate the degree to which syntactic priming occurs. In the current study, we indeed show that the magnitude of syntactic alignment increases when speakers are interacting with an interlocutor as opposed to doing the experiment alone. The structural persistence effect for passive sentences is stronger in the presence of an interlocutor than when no interlocutor is present (i.e., when the participant is primed by a recording). We did not find evidence, however, that a speaker’s syntactic priming magnitude is influenced by the degree of their conversation partner’s priming magnitude. Together, these results support a mediated account of syntactic priming, in which syntactic choices are not only affected by preceding linguistic input, but also by top-down influences, such as the speakers’ communicative intent.
  • Sharoh, D., Van Mourik, T., Bains, L. J., Segaert, K., Weber, K., Hagoort, P., & Norris, D. (2019). Laminar specific fMRI reveals directed interactions in distributed networks during language processing. Proceedings of the National Academy of Sciences of the United States of America, 116(42), 21185-21190. doi:10.1073/pnas.1907858116.

    Abstract

    Interactions between top-down and bottom-up information streams are integral to brain function but challenging to measure noninvasively. Laminar resolution, functional MRI (lfMRI) is sensitive to depth-dependent properties of the blood oxygen level-dependent (BOLD) response, which can be potentially related to top-down and bottom-up signal contributions. In this work, we used lfMRI to dissociate the top-down and bottom-up signal contributions to the left occipitotemporal sulcus (LOTS) during word reading. We further demonstrate that laminar resolution measurements could be used to identify condition-specific distributed networks on the basis of whole-brain connectivity patterns specific to the depth-dependent BOLD signal. The networks corresponded to top-down and bottom-up signal pathways targeting the LOTS during word reading. We show that reading increased the top-down BOLD signal observed in the deep layers of the LOTS and that this signal uniquely related to the BOLD response in other language-critical regions. These results demonstrate that lfMRI can reveal important patterns of activation that are obscured at standard resolution. In addition to differences in activation strength as a function of depth, we also show meaningful differences in the interaction between signals originating from different depths both within a region and with the rest of the brain. We thus show that lfMRI allows the noninvasive measurement of directed interaction between brain regions and is capable of resolving different connectivity patterns at submillimeter resolution, something previously considered to be exclusively in the domain of invasive recordings.
  • Udden, J., Hulten, A., Bendt, K., Mineroff, Z., Kucera, K. S., Vino, A., Fedorenko, E., Hagoort, P., & Fisher, S. E. (2019). Towards robust functional neuroimaging genetics of cognition. Journal of Neuroscience, 39(44), 8778-8787. doi:10.1523/JNEUROSCI.0888-19.2019.

    Abstract

    A commonly held assumption in cognitive neuroscience is that, because measures of human brain function are closer to underlying biology than distal indices of behavior/cognition, they hold more promise for uncovering genetic pathways. Supporting this view is an influential fMRI-based study of sentence reading/listening by Pinel et al. (2012), who reported that common DNA variants in specific candidate genes were associated with altered neural activation in language-related regions of healthy individuals that carried them. In particular, different single-nucleotide polymorphisms (SNPs) of FOXP2 correlated with variation in task-based activation in left inferior frontal and precentral gyri, whereas a SNP at the KIAA0319/TTRAP/THEM2 locus was associated with variable functional asymmetry of the superior temporal sulcus. Here, we directly test each claim using a closely matched neuroimaging genetics approach in independent cohorts comprising 427 participants, four times larger than the original study of 94 participants. Despite demonstrating power to detect associations with substantially smaller effect sizes than those of the original report, we do not replicate any of the reported associations. Moreover, formal Bayesian analyses reveal substantial to strong evidence in support of the null hypothesis (no effect). We highlight key aspects of the original investigation, common to functional neuroimaging genetics studies, which could have yielded elevated false-positive rates. Genetic accounts of individual differences in cognitive functional neuroimaging are likely to be as complex as behavioral/cognitive tests, involving many common genetic variants, each of tiny effect. Reliable identification of true biological signals requires large sample sizes, power calculations, and validation in independent cohorts with equivalent paradigms.

    SIGNIFICANCE STATEMENT A pervasive idea in neuroscience is that neuroimaging-based measures of brain function, being closer to underlying neurobiology, are more amenable for uncovering links to genetics. This is a core assumption of prominent studies that associate common DNA variants with altered activations in task-based fMRI, despite using samples (10–100 people) that lack power for detecting the tiny effect sizes typical of genetically complex traits. Here, we test central findings from one of the most influential prior studies. Using matching paradigms and substantially larger samples, coupled to power calculations and formal Bayesian statistics, our data strongly refute the original findings. We demonstrate that neuroimaging genetics with task-based fMRI should be subject to the same rigorous standards as studies of other complex traits.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic unification related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level of semantic unification load, respectively. By employing the EEG-fMRI integrated analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
  • Asaridou, S. S., Hagoort, P., & McQueen, J. M. (2015). Effects of early bilingual experience with a tone and a non-tone language on speech-music. PLoS One, 10(12): e0144225. doi:10.1371/journal.pone.0144225.

    Abstract

    We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

  • Baggio, G., van Lambalgen, M., & Hagoort, P. (2015). Logic as Marr's computational level: Four case studies. Topics in Cognitive Science, 7, 287-298. doi:10.1111/tops.12125.

    Abstract

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition.
  • Bašnáková, J., Van Berkum, J. J. A., Weber, K., & Hagoort, P. (2015). A job interview in the MRI scanner: How does indirectness affect addressees and overhearers? Neuropsychologia, 76, 79-91. doi:10.1016/j.neuropsychologia.2015.03.030.

    Abstract

    In using language, people not only exchange information, but also navigate their social world – for example, they can express themselves indirectly to avoid losing face. In this functional magnetic resonance imaging study, we investigated the neural correlates of interpreting face-saving indirect replies, in a situation where participants only overheard the replies as part of a conversation between two other people, as well as in a situation where the participants were directly addressed themselves. We created a fictional job interview context where indirect replies serve as a natural communicative strategy to attenuate one’s shortcomings, and asked fMRI participants to either pose scripted questions and receive answers from three putative job candidates (addressee condition) or to listen to someone else interview the same candidates (overhearer condition). In both cases, the need to evaluate the candidate ensured that participants had an active interest in comprehending the replies. Relative to direct replies, face-saving indirect replies increased activation in medial prefrontal cortex, bilateral temporo-parietal junction (TPJ), bilateral inferior frontal gyrus and bilateral middle temporal gyrus, in active overhearers and active addressees alike, with similar effect size, and comparable to findings obtained in an earlier passive listening study (Bašnáková et al., 2013). In contrast, indirectness effects in bilateral anterior insula and pregenual ACC, two regions implicated in emotional salience and empathy, were reliably stronger in addressees than in active overhearers. Our findings indicate that understanding face-saving indirect language requires additional cognitive perspective-taking and other discourse-relevant cognitive processing, to a comparable extent in active overhearers and addressees. Furthermore, they indicate that face-saving indirect language draws upon affective systems more in addressees than in overhearers, presumably because the addressee is the one being managed by a face-saving reply. In all, face-saving indirectness provides a window on the cognitive as well as affect-related neural systems involved in human communication.
  • Bastiaansen, M. C. M., & Hagoort, P. (2015). Frequency-based segregation of syntactic and semantic unification during online sentence level language comprehension. Journal of Cognitive Neuroscience, 27(11), 2095-2107. doi:10.1162/jocn_a_00829.

    Abstract

    During sentence level language comprehension, semantic and syntactic unification are functionally distinct operations. Nevertheless, both recruit roughly the same brain areas (spatially overlapping networks in the left frontotemporal cortex) and happen at the same time (in the first few hundred milliseconds after word onset). We tested the hypothesis that semantic and syntactic unification are segregated by means of neuronal synchronization of the functionally relevant networks in different frequency ranges: gamma (40 Hz and up) for semantic unification and lower beta (10–20 Hz) for syntactic unification. EEG power changes were quantified as participants read either correct sentences, syntactically correct though meaningless sentences (syntactic prose), or sentences that did not contain any syntactic structure (random word lists). Other sentences contained either a semantic anomaly or a syntactic violation at a critical word in the sentence. Larger EEG gamma-band power was observed for semantically coherent than for semantically anomalous sentences. Similarly, beta-band power was larger for syntactically correct sentences than for incorrect ones. These results confirm the existence of a functional dissociation in EEG oscillatory dynamics during sentence level language comprehension that is compatible with the notion of a frequency-based segregation of syntactic and semantic unification.
  • Francken, J. C., Meijs, E. L., Ridderinkhof, O. M., Hagoort, P., de Lange, F. P., & van Gaal, S. (2015). Manipulating word awareness dissociates feed-forward from feedback models of language-perception interactions. Neuroscience of Consciousness, 1: niv003. doi:10.1093/nc/niv003.

    Abstract

    Previous studies suggest that linguistic material can modulate visual perception, but it is unclear at which level of processing these interactions occur. Here we aim to dissociate between two competing models of language–perception interactions: a feed-forward and a feedback model. We capitalized on the fact that the models make different predictions on the role of feedback. We presented unmasked (aware) or masked (unaware) words implying motion (e.g. “rise,” “fall”), directly preceding an upward or downward visual motion stimulus. Crucially, masking leaves intact feed-forward information processing from low- to high-level regions, whereas it abolishes subsequent feedback. Under this condition, participants remained faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. This suggests that language–perception interactions are driven by the feed-forward convergence of linguistic and perceptual information at higher-level conceptual and decision stages.
  • Francken, J. C., Meijs, E. L., Hagoort, P., van Gaal, S., & de Lange, F. P. (2015). Exploring the automaticity of language-perception interactions: Effects of attention and awareness. Scientific Reports, 5: 17725. doi:10.1038/srep17725.

    Abstract

    Previous studies have shown that language can modulate visual perception, by biasing and/or enhancing perceptual performance. However, it is still debated where in the brain visual and linguistic information are integrated, and whether the effects of language on perception are automatic and persist even in the absence of awareness of the linguistic material. Here, we aimed to explore the automaticity of language-perception interactions and the neural loci of these interactions in an fMRI study. Participants engaged in a visual motion discrimination task (upward or downward moving dots). Before each trial, a word prime was briefly presented that implied upward or downward motion (e.g., “rise”, “fall”). These word primes strongly influenced behavior: congruent motion words sped up reaction times and improved performance relative to incongruent motion words. Neural congruency effects were only observed in the left middle temporal gyrus, showing higher activity for congruent compared to incongruent conditions. This suggests that higher-level conceptual areas rather than sensory areas are the locus of language-perception interactions. When motion words were rendered unaware by means of masking, they still affected visual motion perception, suggesting that language-perception interactions may rely on automatic feed-forward integration of perceptual and semantic material in language areas of the brain.
  • Francken, J. C., Kok, P., Hagoort, P., & De Lange, F. P. (2015). The behavioral and neural effects of language on motion perception. Journal of Cognitive Neuroscience, 27(1), 175-184. doi:10.1162/jocn_a_00682.

    Abstract

    Perception does not function as an isolated module but is tightly linked with other cognitive functions. Several studies have demonstrated an influence of language on motion perception, but it remains debated at which level of processing this modulation takes place. Some studies argue for an interaction in perceptual areas, but it is also possible that the interaction is mediated by "language areas" that integrate linguistic and visual information. Here, we investigated whether language-perception interactions were specific to the language-dominant left hemisphere by comparing the effects of language on visual material presented in the right (RVF) and left visual fields (LVF). Furthermore, we determined the neural locus of the interaction using fMRI. Participants performed a visual motion detection task. On each trial, the visual motion stimulus was presented in either the LVF or in the RVF, preceded by a centrally presented word (e.g., "rise"). The word could be congruent, incongruent, or neutral with regard to the direction of the visual motion stimulus that was presented subsequently. Participants were faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. Interestingly, the speed benefit was present only for motion stimuli that were presented in the RVF. We observed a neural counterpart of the behavioral facilitation effects in the left middle temporal gyrus, an area involved in semantic processing of verbal material. Together, our results suggest that semantic information about motion retrieved in language regions may automatically modulate perceptual decisions about motion.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing the link between speech perception and production through individual differences. In Proceedings of the 18th International Congress of Phonetic Sciences. Glasgow: the University of Glasgow.

    Abstract

    This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interaction.
  • Franken, M. K., Hagoort, P., & Acheson, D. J. (2015). Modulations of the auditory M100 in an Imitation Task. Brain and Language, 142, 18-23. doi:10.1016/j.bandl.2015.01.001.

    Abstract

    Models of speech production explain event-related suppression of the auditory cortical response as reflecting a comparison between auditory predictions and feedback. The present MEG study was designed to test two predictions from this framework: 1) whether the reduced auditory response varies as a function of the mismatch between prediction and feedback; 2) whether individual variation in this response is predictive of speech-motor adaptation. Participants alternated between online imitation and listening tasks. In the imitation task, participants began each trial producing the same vowel (/e/) and subsequently listened to and imitated auditorily presented vowels varying in acoustic distance from /e/. Results replicated suppression, with a smaller M100 during speaking than listening. Although we did not find unequivocal support for the first prediction, participants with less M100 suppression were better at the imitation task. These results are consistent with the enhancement of M100 serving as an error signal to drive subsequent speech-motor adaptation.
  • Guadalupe, T., Zwiers, M. P., Wittfeld, K., Teumer, A., Vasquez, A. A., Hoogman, M., Hagoort, P., Fernandez, G., Buitelaar, J., van Bokhoven, H., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2015). Asymmetry within and around the human planum temporale is sexually dimorphic and influenced by genes involved in steroid hormone receptor activity. Cortex, 62, 41-55. doi:10.1016/j.cortex.2014.07.015.

    Abstract

    The genetic determinants of cerebral asymmetries are unknown. Sex differences in asymmetry of the planum temporale, which overlaps Wernicke’s classical language area, have been inconsistently reported. Meta-analysis of previous studies has suggested that publication bias established this sex difference in the literature. Using probabilistic definitions of cortical regions we screened over the cerebral cortex for sexual dimorphisms of asymmetry in 2337 healthy subjects, and found the planum temporale to show the strongest sex-linked asymmetry of all regions, which was supported by two further datasets, and also by analysis with the Freesurfer package that performs automated parcellation of cerebral cortical regions. We performed a genome-wide association scan meta-analysis of planum temporale asymmetry in a pooled sample of 3095 subjects, followed by a candidate-driven approach which measured a significant enrichment of association in genes of the 'steroid hormone receptor activity' and 'steroid metabolic process' pathways. Variants in the genes and pathways identified may affect the role of the planum temporale in language cognition.
  • Hagoort, P. (2015). Het talige brein. In A. Aleman, & H. E. Hulshoff Pol (Eds.), Beeldvorming van het brein: Imaging voor psychiaters en psychologen (pp. 169-176). Utrecht: De Tijdstroom.
  • Hagoort, P. (2015). Spiegelneuronen. In J. Brockmann (Ed.), Wetenschappelijk onkruid: 179 hardnekkige ideeën die vooruitgang blokkeren (pp. 455-457). Amsterdam: Maven Publishing.
  • Holler, J., Kokal, I., Toni, I., Hagoort, P., Kelly, S. D., & Ozyurek, A. (2015). Eye’m talking to you: Speakers’ gaze direction modulates co-speech gesture processing in the right MTG. Social Cognitive & Affective Neuroscience, 10, 255-261. doi:10.1093/scan/nsu047.

    Abstract

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture.
    Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and language syntax interact in Broca’s Area: An fMRI study. PLoS One, 10(11): e0141069. doi:10.1371/journal.pone.0141069.

    Abstract

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
  • Lai, V. T., Willems, R. M., & Hagoort, P. (2015). Feel between the Lines: Implied emotion from combinatorial semantics. Journal of Cognitive Neuroscience, 27(8), 1528-1541. doi:10.1162/jocn_a_00798.

    Abstract

    This study investigated the brain regions for the comprehension of implied emotion in sentences. Participants read negative sentences without negative words, for example, “The boy fell asleep and never woke up again,” and their neutral counterparts “The boy stood up and grabbed his bag.” This kind of negative sentence allows us to examine implied emotion derived at the sentence level, without associative emotion coming from word retrieval. We found that implied emotion in sentences, relative to neutral sentences, led to activation in some emotion-related areas, including the medial prefrontal cortex, the amygdala, and the insula, as well as certain language-related areas, including the inferior frontal gyrus, which has been implicated in combinatorial processing. These results suggest that the emotional network involved in implied emotion is intricately related to the network for combinatorial processing in language, supporting the view that sentence meaning is more than simply concatenating the meanings of its lexical building blocks.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.

    Abstract

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
  • Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.

    Abstract

    A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes: Université de Nantes.

    Abstract

    Comprehension of pointing gestures is fundamental to human communication. However, the neural mechanisms that subserve the integration of pointing gestures and speech in visual contexts in comprehension are unclear. Here we present the results of an fMRI study in which participants watched images of an actor pointing at an object while they listened to her referential speech. The use of a mismatch paradigm revealed that the semantic unification of pointing gesture and speech in a triadic context recruits left inferior frontal gyrus. Complementing previous findings, this suggests that left inferior frontal gyrus semantically integrates information across modalities and semiotic domains.
  • Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2015). Emotional context modulates embodied metaphor comprehension. Neuropsychologia, 78, 108-114. doi:10.1016/j.neuropsychologia.2015.10.003.

    Abstract

    Emotions are often expressed metaphorically, and both emotion and metaphor are ways through which abstract meaning can be grounded in language. Here we investigate specifically whether motion-related verbs when used metaphorically are differentially sensitive to a preceding emotional context, as compared to when they are used in a literal manner. Participants read stories that ended with ambiguous action/motion sentences (e.g., he got it), in which the action/motion could be interpreted metaphorically (he understood the idea) or literally (he caught the ball) depending on the preceding story. Orthogonal to the metaphorical manipulation, the stories were high or low in emotional content. The results showed that emotional context modulated the neural response in visual motion areas to the metaphorical interpretation of the sentences, but not to their literal interpretations. In addition, literal interpretations of the target sentences led to stronger activation in the visual motion areas as compared to metaphorical readings of the sentences. We interpret our results as suggesting that emotional context specifically modulates mental simulation during metaphor processing.
  • Simanova, I., Van Gerven, M. A., Oostenveld, R., & Hagoort, P. (2015). Predicting the semantic category of internally generated words from neuromagnetic recordings. Journal of Cognitive Neuroscience, 27(1), 35-45. doi:10.1162/jocn_a_00690.

    Abstract

    In this study, we explore the possibility to predict the semantic category of words from brain signals in a free word generation task. Participants produced single words from different semantic categories in a modified semantic fluency task. A Bayesian logistic regression classifier was trained to predict the semantic category of words from single-trial MEG data. Significant classification accuracies were achieved using sensor-level MEG time series at the time interval of conceptual preparation. Semantic category prediction was also possible using source-reconstructed time series, based on minimum norm estimates of cortical activity. Brain regions that contributed most to classification on the source level were identified. These were the left inferior frontal gyrus, left middle frontal gyrus, and left posterior middle temporal gyrus. Additionally, the temporal dynamics of brain activity underlying the semantic preparation during word generation was explored. These results provide important insights about central aspects of language production.
  • Xiang, H., Van Leeuwen, T. M., Dediu, D., Roberts, L., Norris, D. G., & Hagoort, P. (2015). L2-proficiency-dependent laterality shift in structural connectivity of brain language pathways. Brain Connectivity, 5(6), 349-361. doi:10.1089/brain.2013.0199.

    Abstract

    Diffusion tensor imaging (DTI) and a longitudinal language learning approach were applied to investigate the relationship between the achieved second language (L2) proficiency during L2 learning and the reorganization of structural connectivity between core language areas. Language proficiency tests and DTI scans were obtained from German students before and after they completed an intensive 6-week course of the Dutch language. In the initial learning stage, with increasing L2 proficiency, the hemispheric dominance of the BA6-temporal pathway (mainly along the arcuate fasciculus) shifted from the left to the right hemisphere. With further increased proficiency, however, lateralization dominance was again found in the left BA6-temporal pathway. This result is consistent with reports in the literature that imply a stronger involvement of the right hemisphere in L2-processing especially for less proficient L2-speakers. This is the first time that an L2-proficiency-dependent laterality shift in structural connectivity of language pathways during L2 acquisition has been observed to shift from left to right, and back to left hemisphere dominance with increasing L2-proficiency. We additionally find that changes in fractional anisotropy values after the course are related to the time elapsed between the two scans. The results suggest that structural connectivity in (at least part of) the perisylvian language network may be subject to fast dynamic changes following language learning.
