Peter Hagoort

Publications

  • Arana, S., Marquand, A., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2020). Sensory modality-independent activation of the brain network for language. The Journal of Neuroscience, 40(14), 2914-2924. doi:10.1523/JNEUROSCI.2271-19.2020.

    Abstract

    The meaning of a sentence can be understood whether it is presented in written or spoken form. Therefore, it is highly probable that the brain processes supporting language comprehension are at least partly independent of sensory modality. To identify where and when in the brain language processing is independent of sensory modality, we directly compared neuromagnetic brain signals of 200 human subjects (102 males) either reading or listening to sentences. We used multiset canonical correlation analysis to align individual subject data in a way that boosts those aspects of the signal that are common to all, allowing us to capture word-by-word signal variations that are consistent across subjects and at a fine temporal scale. Quantifying this consistency in activation across both reading and listening tasks revealed a mostly left-hemispheric cortical network. Areas showing consistent activity patterns include not only areas previously implicated in higher-level language processing, such as left prefrontal, superior and middle temporal areas and the anterior temporal lobe, but also parts of the control network as well as subcentral and more posterior temporal-parietal areas. Activity in this supramodal sentence-processing network starts in temporal areas and rapidly spreads to the other regions involved. The findings not only indicate the involvement of a large network of brain areas in supramodal language processing, but also show that the linguistic information contained in the unfolding sentences modulates brain activity in a word-specific manner across subjects.
  • Casasanto, D., Casasanto, L. S., Gijssels, T., & Hagoort, P. (2020). The Reverse Chameleon Effect: Negative social consequences of anatomical mimicry. Frontiers in Psychology, 11: 1876. doi:10.3389/fpsyg.2020.01876.

    Abstract

    Bodily mimicry often leads the mimickee to feel more positively about the mimicker. Yet, little is known about the causes of mimicry’s social effects. When people mimic each other’s bodily movements face to face, they can either adopt a mirrorwise perspective (moving in the same absolute direction) or an anatomical perspective (moving in the same direction relative to their own bodies). Mirrorwise mimicry maximizes visuo-spatial similarity between the mimicker and mimickee, whereas anatomical mimicry maximizes the similarity in the states of their motor systems. To compare the social consequences of visuo-spatial and motoric similarity, we asked participants to converse with an embodied virtual agent (VIRTUO), who mimicked their head movements either mirrorwise, anatomically, or not at all. Compared to participants who were not mimicked, those who were mimicked mirrorwise tended to rate VIRTUO more positively, but those who were mimicked anatomically rated him more negatively. During face-to-face conversation, mirrorwise and anatomical mimicry have opposite social consequences. The results suggest that visuo-spatial similarity between mimicker and mimickee, not similarity in motor system activity, gives rise to the positive social effects of bodily mimicry.
  • Fitz, H., Uhlmann, M., Van den Broek, D., Duarte, R., Hagoort, P., & Petersson, K. M. (2020). Neuronal spike-rate adaptation supports working memory in language processing. Proceedings of the National Academy of Sciences of the United States of America, 117(34), 20881-20889. doi:10.1073/pnas.2000222117.

    Abstract

    Language processing involves the ability to store and integrate pieces of information in working memory over short periods of time. According to the dominant view, information is maintained through sustained, elevated neural activity. Other work has argued that short-term synaptic facilitation can serve as a substrate of memory. Here, we propose an account where memory is supported by intrinsic plasticity that downregulates neuronal firing rates. Single-neuron responses are dependent on experience, and we show through simulations that these adaptive changes in excitability provide memory on timescales ranging from milliseconds to seconds. On this account, spiking activity writes information into coupled dynamic variables that control adaptation and move at slower timescales than the membrane potential. From these variables, information is continuously read back into the active membrane state for processing. This neuronal memory mechanism does not rely on persistent activity, excitatory feedback, or synaptic plasticity for storage. Instead, information is maintained in adaptive conductances that reduce firing rates and can be accessed directly without cued retrieval. Memory span is systematically related to both the time constant of adaptation and baseline levels of neuronal excitability. Interference effects within memory arise when adaptation is long-lasting. We demonstrate that this mechanism is sensitive to context and serial order, which makes it suitable for temporal integration in sequence processing within the language domain. We also show that it enables the binding of linguistic features over time within dynamic memory registers. This work provides a step towards a computational neurobiology of language.
  • Hagoort, P. (2020). Taal. In O. Van den Heuvel, Y. Van der Werf, B. Schmand, & B. Sabbe (Eds.), Leerboek neurowetenschappen voor de klinische psychiatrie (pp. 234-239). Amsterdam: Boom Uitgevers.
  • Heidlmayr, K., Weber, K., Takashima, A., & Hagoort, P. (2020). No title, no theme: The joined neural space between speakers and listeners during production and comprehension of multi-sentence discourse. Cortex, 130, 111-126. doi:10.1016/j.cortex.2020.04.035.

    Abstract

    Speakers and listeners usually interact in larger discourses than single words or even single sentences. The goal of the present study was to identify the neural bases reflecting how the mental representation of the situation denoted in a multi-sentence discourse (situation model) is constructed and shared between speakers and listeners. An fMRI study using a variant of the ambiguous text paradigm was designed. Speakers (n=15) produced ambiguous texts in the scanner and listeners (n=27) subsequently listened to these texts in different states of ambiguity: preceded by a highly informative title, an intermediately informative title, or no title at all. Conventional BOLD activation analyses in listeners, as well as inter-subject correlation analyses between the speakers’ and the listeners’ hemodynamic time courses, were performed. Critically, only the processing of disambiguated, coherent discourse with an intelligible situation model representation involved (shared) activation in bilateral lateral parietal and medial prefrontal regions. This shared spatiotemporal pattern of brain activation between the speaker and the listener suggests that the process of memory retrieval in medial prefrontal regions and the binding of retrieved information in the lateral parietal cortex constitute a core mechanism underlying the communication of complex conceptual representations.

  • Heilbron, M., Richter, D., Ekman, M., Hagoort, P., & De Lange, F. P. (2020). Word contexts enhance the neural representation of individual letters in early visual cortex. Nature Communications, 11: 321. doi:10.1038/s41467-019-13996-4.

    Abstract

    Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already at the earliest visual regions.

  • Hoeksema, N., Wiesmann, M., Kiliaan, A., Hagoort, P., & Vernes, S. C. (2020). Bats and the comparative neurobiology of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 165-167). Nijmegen: The Evolution of Language Conferences.
  • Kösem, A., Bosker, H. R., Jensen, O., Hagoort, P., & Riecke, L. (2020). Biasing the perception of spoken words with transcranial alternating current stimulation. Journal of Cognitive Neuroscience, 32(8), 1428-1437. doi:10.1162/jocn_a_01579.

    Abstract

    Recent neuroimaging evidence suggests that the frequency of entrained oscillations in auditory cortices influences the perceived duration of speech segments, impacting word perception (Kösem et al., 2018). We further tested the causal influence of neural entrainment frequency during speech processing by manipulating entrainment with continuous transcranial alternating current stimulation (tACS) at distinct oscillatory frequencies (3 Hz and 5.5 Hz) above the auditory cortices. Dutch participants listened to speech and were asked to report their percept of a target Dutch word, which contained a vowel with an ambiguous duration. Target words were presented either in isolation (Experiment 1) or at the end of spoken sentences (Experiment 2). We predicted that the tACS frequency would influence neural entrainment and therewith how speech is perceptually sampled, leading to a perceptual over- or underestimation of the vowel’s duration. Whereas results from Experiment 1 did not confirm this prediction, results from Experiment 2 suggested a small effect of tACS frequency on target word perception: Faster tACS led to more long-vowel word percepts, in line with the previous neuroimaging findings. Importantly, the difference in word perception induced by the different tACS frequencies was significantly larger in Experiment 2 than in Experiment 1, suggesting that the impact of tACS depends on the sensory context. tACS may have a stronger effect on spoken word perception when words are presented in continuous speech than when they are presented in isolation, potentially because prior (stimulus-induced) entrainment of brain oscillations might be a prerequisite for tACS to be effective.

  • Preisig, B., Sjerps, M. J., Hervais-Adelman, A., Kösem, A., Hagoort, P., & Riecke, L. (2020). Bilateral gamma/delta transcranial alternating current stimulation affects interhemispheric speech sound integration. Journal of Cognitive Neuroscience, 32(7), 1242-1250. doi:10.1162/jocn_a_01498.

    Abstract

    Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.
  • Takashima, A., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2020). Speaking in the brain: The interaction between words and syntax in sentence production. Journal of Cognitive Neuroscience, 32(8), 1466-1483. doi:10.1162/jocn_a_01563.

    Abstract

    This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences that differed in verb argument structures (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudoverbs). The experiment consisted of 30 mini-blocks of six sentences each. Each mini-block started with an example of the type of sentence to be produced in that block. On each trial in the mini-blocks, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to sentences with pseudoverbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus, analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left posterior middle temporal gyrus/angular gyrus area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs leads to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical–syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been mainly based on single-word production, comprehension, and language processing in aphasia.
  • Tan, Y., & Hagoort, P. (2020). Catecholaminergic modulation of semantic processing in sentence comprehension. Cerebral Cortex, 30(12), 6426-6443. doi:10.1093/cercor/bhaa204.

    Abstract

    Catecholamine (CA) function has been widely implicated in cognitive functions that are tied to the prefrontal cortex and striatal areas. The present study investigated the effects of methylphenidate, which is a CA agonist, on the electroencephalogram (EEG) response related to semantic processing using a double-blind, placebo-controlled, randomized, crossover, within-subject design. Forty-eight healthy participants read semantically congruent or incongruent sentences after receiving 20-mg methylphenidate or a placebo while their brain activity was monitored with EEG. To probe whether the catecholaminergic modulation is task-dependent, in one condition participants had to focus on comprehending the sentences, while in the other condition, they only had to attend to the font size of the sentence. The results demonstrate that methylphenidate has a task-dependent effect on semantic processing. Compared to placebo, when semantic processing was task-irrelevant, methylphenidate enhanced the detection of semantic incongruence as indexed by a larger N400 amplitude in the incongruent sentences; when semantic processing was task-relevant, methylphenidate induced a larger N400 amplitude in the semantically congruent condition, which was followed by a larger late positive complex effect. These results suggest that CA-related neurotransmitters influence language processing, possibly through the projections between the prefrontal cortex and the striatum, which contain many CA receptors.
