Peter Hagoort

Presentations

  • Arana, S., Rommers, L., Hagoort, P., Snijders, T. M., & Kösem, A. (2016). The role of entrained oscillations during foreign language listening. Poster presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC), Nijmegen, The Netherlands.
  • Belavina Kuerten, A., Mota, M., Segaert, K., & Hagoort, P. (2016). Syntactic priming effects in dyslexic children: A study in Brazilian Portuguese. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Dyslexia is a learning disorder caused primarily by a phonological processing deficit. So far, few studies have examined whether dyslexia deficits extend to syntactic processing. We investigated how dyslexic children process syntactic structures. In a self-paced reading syntactic priming paradigm, the passive voice was repeated in mini-blocks of five sentences. These were mixed with an equal number of filler mini-blocks (actives, intransitives); the verb was repeated within all mini-blocks. The data of 20 dyslexic children (mean age = 12.8 years), native speakers of Brazilian Portuguese, were compared with those of 25 non-dyslexic children (mean age = 10.4 years). A repeated-measures ANOVA on reading times for the verb revealed a significant main effect of sentence repetition (p<.001) and a group × sentence repetition interaction (p<.001). Dyslexics demonstrated priming effects between all consecutive passive voice repetitions (all p<.05), whereas reading times for controls differed only between the first and second passive (p<.001). For active sentences, dyslexics showed priming effects only between the first and second sentences (p<.05), while controls did not show any significant effect, suggesting that the effects for passives are not solely due to the verb being repeated, but at least in part due to the repeated syntactic structure. These findings thus reveal syntactic processing differences between dyslexic and non-dyslexic children.
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    In certain situations, human listeners have more difficulty in understanding speech in a multi-talker environment than in the presence of non-intelligible noise. The costs of speech-in-speech masking have been attributed to informational masking, i.e. to the competing processing of the target and the distractor speech’s information. It remains unclear what kind of information is competing, as intelligible speech and unintelligible speech-like signals (e.g. reversed, noise-vocoded, and foreign speech) differ both in linguistic content and in acoustic information. Thus, intelligible speech could be a stronger distractor than unintelligible speech because it presents closer acoustic information to the target speech, or because it carries competing linguistic information. In this study, we intended to isolate the linguistic component of speech-in-speech masking and we tested its influence on the comprehension of target speech. To do so, 24 participants performed a dichotic listening task in which the interfering stimuli consisted of 4-band noise-vocoded sentences that could become intelligible through training. The experiment included three steps: first, the participants were instructed to report the clear target speech from a mixture of one clear speech channel and one unintelligible noise-vocoded speech channel; second, they were trained on the interfering noise-vocoded sentences so that they became intelligible; third, they performed the dichotic listening task again. Crucially, before and after training, the distractor speech had the same acoustic features but not the same linguistic information. We thus predicted that the distracting noise-vocoded signal would interfere more with target speech comprehension after training than before training. To control for practice/fatigue effects, we used additional 2-band noise-vocoded sentences, which participants were not trained on, as interfering signals in the dichotic listening tasks. 
We expected that performance on these trials would not change after training, or would change less than that on trials with trained 4-band noise-vocoded sentences. Performance was measured under three SNR conditions: 0, -3, and -6 dB. The behavioral results are consistent with our predictions. The 4-band noise-vocoded signal interfered more with the comprehension of target speech after training (i.e. when it was intelligible) compared to before training (i.e. when it was unintelligible), but only at SNR -3dB. Crucially, the comprehension of the target speech did not change after training when the interfering signals consisted of unintelligible 2-band noise-vocoded speech sounds, ruling out a fatigue effect. In line with previous studies, the present results show that intelligible distractors interfere more with the processing of target speech. These findings further suggest that speech-in-speech interference originates, to a certain extent, from the parallel processing of competing linguistic content. A magnetoencephalography study with the same design is currently being performed, to specifically investigate the neural origins of informational masking.
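    Noise-vocoding, the manipulation used to create the interfering signals above, splits speech into frequency bands and replaces each band's fine structure with noise modulated by that band's amplitude envelope, so intelligibility depends on the number of bands. A minimal sketch of the general technique (not the authors' stimulus code; the FFT-based band filtering, logarithmic band edges, and 10 ms envelope smoothing are illustrative assumptions):

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0, seed=0):
    """Replace each frequency band's fine structure with noise
    modulated by that band's smoothed amplitude envelope."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    # Logarithmically spaced band edges between f_lo and f_hi.
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    win = max(1, int(0.01 * fs))            # ~10 ms smoothing window
    kernel = np.ones(win) / win
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)             # band-pass the speech
        env = np.convolve(np.abs(band), kernel, mode="same")  # envelope
        carrier = np.fft.irfft(noise_spec * mask, n)    # band-limited noise
        out += env * carrier                            # noise carries envelope
    return out
```

    With `n_bands=4` this corresponds to the trainable 4-band condition described above; `n_bands=2` gives a coarser, harder-to-learn signal like the untrained control condition.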
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the 8th Speech in Noise Workshop (SpiN), Groningen, The Netherlands.
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2016). A spiking recurrent network for semantic processing. Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Integrating sentence meaning over time requires memory ranging from milliseconds (words) to seconds (sentences) and minutes (discourse). How do transient events like action potentials in the human language system support memory at these different temporal scales? Here we investigate the nature of processing memory in a neurobiologically motivated model of sentence comprehension. The model was a recurrent, sparsely connected network of spiking neurons. Synaptic weights were created randomly and there was no adaptation or learning. As input the network received word sequences generated from construction grammar templates and their syntactic alternations (e.g., active/passive transitives, transfer datives, caused motion). The language environment had various features such as tense, aspect, noun/verb number agreement, and pronouns which created positional variation in the input. Similar to natural speech, word durations varied between 50 ms and 0.5 s of real, physical time depending on their length. The model's task was to incrementally interpret these word sequences in terms of semantic roles. There were 8 target roles (e.g., Agent, Patient, Recipient) and the language generated roughly 1.2 million distinct utterances from which a sequence of 10,000 words was randomly selected and filtered through the network. A set of readout neurons was then calibrated by means of logistic regression to decode the internal network dynamics onto the target semantic roles. In order to accomplish the role assignment task, network states had to encode and maintain past information from multiple cues that could occur several words apart. To probe the circuit's memory capacity, we compared models where network connectivity, the shape of synaptic currents, and properties of neuronal adaptation were systematically manipulated.
We found that task-relevant memory could be derived from a mechanism of neuronal spike-rate adaptation, modelled as a conductance that hyperpolarized the membrane following a spike and relaxed to baseline exponentially with a fixed time-constant. By acting directly on the membrane potential it provided processing memory that allowed the system to successfully interpret its sentence input. Near optimal performance was also observed when an exponential decay model of post-synaptic currents was added into the circuit, with time-constants approximating excitatory NMDA and inhibitory GABA-B receptor dynamics. Thus, the information flow was extended over time, creating memory characteristics comparable to spike-rate adaptation. Recurrent connectivity, in contrast, only played a limited role in maintaining information; an acyclic version of the recurrent circuit achieved similar accuracy. This indicates that random recurrent connectivity at the modelled spatial scale did not contribute additional processing memory to the task. Taken together, these results suggest that memory for language might be provided by activity-silent dynamic processes rather than the active replay of past input as in storage-and-retrieval models of working memory. Furthermore, memory in biological networks can take multiple forms on a continuum of time-scales. Therefore, the development of neurobiologically realistic, causal models will be critical for our understanding of the role of memory in language processing.
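    The adaptation mechanism described above can be illustrated with a single leaky integrate-and-fire neuron in which each spike increments a hyperpolarizing conductance that relaxes to baseline exponentially, so sustained input yields progressively longer inter-spike intervals. A hedged sketch of the general idea (parameter values, Euler integration, and the potassium reversal potential are illustrative choices, not the model's actual settings):

```python
import numpy as np

def adaptive_lif(input_current, dt=1e-3, tau_a=0.5, g_inc=2e-9,
                 g_leak=10e-9, c_m=200e-12, v_rest=-70e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, e_k=-80e-3):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    conductance g_a that decays exponentially with time constant tau_a.
    Returns the time-step indices at which spikes occurred."""
    v, g_a = v_rest, 0.0
    spikes = []
    for i, current in enumerate(input_current):
        # The adaptation conductance pulls v toward the hyperpolarizing E_K.
        dv = (-g_leak * (v - v_rest) - g_a * (v - e_k) + current) / c_m
        v += dt * dv
        g_a -= dt * g_a / tau_a      # exponential relaxation to baseline
        if v >= v_thresh:
            spikes.append(i)
            v = v_reset
            g_a += g_inc             # spike-triggered increment
    return spikes
```

    Under a constant suprathreshold input, the accumulating conductance stretches successive inter-spike intervals, which is the spike-rate adaptation that supplied task-relevant processing memory in the model.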
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • Franken, M. K., Schoffelen, J.-M., McQueen, J. M., Acheson, D. J., Hagoort, P., & Eisner, F. (2016). Neural correlates of auditory feedback processing during speech production. Poster presented at New Sounds 2016: 8th International Conference on Second-Language Speech, Aarhus, Denmark.

    Abstract

    An important aspect of L2 speech learning is the interaction between speech production and perception. One way to study this interaction is to provide speakers with altered auditory feedback to investigate how unexpected auditory feedback affects subsequent speech production. Although it is generally well established that speakers on average compensate for auditory feedback perturbations, even when unaware of the manipulation, the neural correlates of responses to perturbed auditory feedback are not well understood. In the present study, we provided speakers with auditory feedback that was intermittently pitch-shifted, while we measured the speakers' neural activity using magneto-encephalography (MEG). Participants were instructed to vocalize the Dutch vowel /e/ while they tried to match the pitch of a short tone. During vocalization, participants received auditory feedback through headphones. In half of the trials, the pitch in the feedback signal was shifted by -25 cents, starting at a jittered delay after speech onset and lasting for 500 ms. Trials with perturbed feedback and control trials (with normal feedback) were in random order. Post-experiment questionnaires showed that none of the participants was aware of the pitch manipulation. Behaviorally, the results show that participants on average compensated for the auditory feedback by shifting the pitch of their speech in the opposite (upward) direction. This suggests that even though participants were not aware of the pitch shift, they automatically compensated for the unexpected feedback signal. The MEG results show a right-lateralized response to both onset and offset of the pitch perturbation during speaking. We suggest this response relates to detection of the mismatch between the predicted and perceived feedback signals, which could subsequently drive behavioral adjustments.
These results are in line with recent models of speech motor control and provide further insights into the neural correlates of speech production and speech feedback processing.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above the participants’ baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time-points in the control trials. A cluster-based permutation test showed that the event-related field responses differed between the perturbation and the control condition.
This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input, but instead reflect speech production processes. We suggest the observed ERF responses in sensorimotor cortex are an index of the mismatch between the self-generated forward-model prediction of auditory input and the incoming auditory signal.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above the participants’ baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time-points in the control trials. A cluster-based permutation test showed that the event-related field responses differed between the perturbation and the control condition.
This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input, but instead reflect speech production processes. We suggest the observed ERF responses in sensorimotor cortex are an index of the mismatch between the self-generated forward-model prediction of auditory input and the incoming auditory signal.
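    The cluster-based permutation test mentioned above is a standard way in MEG analysis to handle multiple comparisons over time: per-timepoint statistics above a threshold are grouped into contiguous clusters, and the largest cluster mass is compared against a null distribution built by permuting condition labels. A simplified one-dimensional sketch for paired data with sign-flip permutations (the threshold, the |t|-based cluster definition, and the subjects × time layout are illustrative assumptions; real analyses cluster over sensors and time):

```python
import numpy as np

def cluster_perm_test(cond_a, cond_b, n_perm=1000, thresh=2.0, seed=0):
    """Paired cluster-based permutation test on (subjects x time) arrays.
    Returns the observed max cluster mass and its permutation p-value."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b                      # paired differences

    def max_cluster_mass(d):
        # One-sample t-statistic at each timepoint.
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
        best = run = 0.0
        for tv in np.abs(t):
            run = run + tv if tv > thresh else 0.0   # sum supra-threshold runs
            best = max(best, run)
        return best

    observed = max_cluster_mass(diff)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly flip the sign of each subject's difference (paired null).
        flips = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        null[i] = max_cluster_mass(diff * flips)
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```

    Because inference is on cluster mass rather than on individual timepoints, the test stays sensitive to sustained effects like the post-perturbation ERF differences reported here while controlling the family-wise error rate.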
  • Hagoort, P. (2016). Beyond the core networks of language. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.

    Abstract

    Speakers and listeners do more than exchange propositional content. They try to get things done with their utterances. For speakers this requires planning of utterances with knowledge about the listener in mind, whereas listeners need to make inferences that go beyond simulating sensorimotor aspects of propositional content. For example, the statement "It is hot in here" will usually not be answered with a statement of the kind "Yes, indeed it is 32 degrees Celsius", but rather with the answer "I will open the window", since the listener infers the speaker's intention behind her statement. I will discuss a series of studies that identify the network of brain regions involved in audience design and inferring speaker meaning. The same holds for indirect replies that require conversational implicatures, as in A: "Did you like my talk?" to which B replies: "It is hard to give a good presentation." I will show that in these cases the core language network needs to be extended with brain systems providing the necessary inferential machinery.
  • Hagoort, P. (2016). De magie van het talige brein. Talk presented at the Akademie van Kunsten. Amsterdam, The Netherlands. 2016-01.
  • Hagoort, P. (2016). Dutch science on the move. Talk presented at the Donders Institute for Brain, Cognition and Behaviour. Nijmegen, The Netherlands. 2016-06.
  • Hagoort, P. (2016). Cognitive enhancement: A few observations and remarks. Talk presented at the LUX. Nijmegen, The Netherlands. 2016-02.
  • Hagoort, P. (2016). Language from an embrained perspective: It is hard to give a good presentation. Talk presented at the FENS-Hertie Winter School on Neurobiology of language and communication. Obergurgl, Austria. 2016-01-03 - 2016-01-08.
  • Hagoort, P. (2016). Healthy Brain. Talk presented at the Meeting Ministry of Economic Affairs. Papendal, The Netherlands. 2016-09.
  • Hagoort, P. (2016). Healthy brain initiative. Talk presented at the Radboud University. Nijmegen, the Netherlands. 2016-06.
  • Hagoort, P. (2016). Het talige brein. Talk presented at Dyslexie Nederland. Amsterdam, The Netherlands. 2016-11-12.
  • Hagoort, P. (2016). Het talige brein. Talk presented at the Studiedag Regionaal Instituut Dyslexie (RID). Arnhem, the Netherlands. 2016-11-19.
  • Hagoort, P. (2016). Neuroanatomy of language [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Hagoort, P. (2016). The neurobiology of morphological processing. Talk presented at the MPI Workshop Morphology in the Parallel Architecture. Nijmegen, The Netherlands. 2016-03-18.
  • Hagoort, P. (2016). Wetenschap is emotie. Talk presented at the opening InScience Filmfestival. Nijmegen, The Netherlands. 2016-11-02.
  • Hagoort, P. (2016). The toolkit of cognitive neuroscience. Talk presented at the FENS-Hertie Winter School on Neurobiology of language and communication. Obergurgl, Austria. 2016-01-03 - 2016-01-08.
  • Hagoort, P. (2016). Towards team science. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2016). How social opinion influences syntactic processing - an investigation using Virtual Reality. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Adapting your grammatical preferences to match those of your interlocutor, a phenomenon known as structural priming, can be influenced by the social opinion you have of your interlocutor. However, the direction and reliability of this effect is unclear, as different studies have reported seemingly contrary results. When investigating something as abstract as social opinion, there are numerous differences between the studies that could be causing the differing results. We have operationalized social opinion as the ratings of favorability for a wide range of different avatars in a virtual reality study. This way we can accurately determine how the strength of the structural priming effect changes with differing social opinions. Our results show an inverted U-shaped curve in passive structure repetition as a function of favorability: the participants showed the largest priming effects for the avatar with average favorability ratings, with a decrease when interacting with the least- or most-favorable avatars. This result suggests that the relationship between social opinion and priming magnitude may not be a linear one, contrary to what the literature has been assuming. Instead, there is a 'happy medium' which evokes the highest priming effect, and on either side of this ideal is a decrease in priming.
  • Heyselaar, E., Segaert, K., Walvoort, S., Kessels, R., & Hagoort, P. (2016). The role of procedural memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing 17 patients with Korsakoff's amnesia on an active-passive syntactic priming task and comparing their performance to that of controls matched in age, education and premorbid intelligence. Patients with Korsakoff's amnesia display deficits in all subdomains of declarative memory, yet their implicit learning remains intact, making them an ideal patient group to use in this study. In line with the hypothesis that syntactic priming relies on procedural memory, the patient group showed strong priming tendencies (12.6% passive structure repetition). Our control group did not show a priming tendency, presumably due to cognitive interference between declarative and non-declarative memory systems. To verify the absence of the effect in the controls, we ran an independent group of 54 participants on the same paradigm, which also showed no priming effect. The results are further discussed in relation to amnesia, aging and compensatory mechanisms.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2016). The role of procedural memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing 17 patients with Korsakoff’s amnesia on an active-passive syntactic priming task and comparing their performance to that of controls matched in age, education and premorbid intelligence. Patients with Korsakoff's amnesia display deficits in all subdomains of declarative memory, yet their implicit learning remains intact, making them an ideal patient group to use in this study. We used the traffic-light design for the syntactic priming task: the actors in the prime trial photos were colour-coded and the participants were instructed to name the 'green' actor before the 'red' actor in the picture. This way we can control which syntactic structure the participant uses to describe the photo. For target trials, the photos were grey-scale so there was no bias towards one structure over another. This set-up allows us to ensure the primes are properly encoded. In addition to the priming task, we also measured declarative memory, implicit learning ability, and verbal IQ from all participants. Memory tests supported the claim that our 17 patients did have a severely impaired declarative memory system, yet a functional implicit/procedural one. The control group showed no deficit in any of the control measurements. In line with the hypothesis that syntactic priming relies on procedural memory, the patient group showed strong priming tendencies (12.6% passive structure repetition). Unexpectedly, our healthy control group did not show a priming tendency.
In order to verify the absence of a syntactic priming effect in the healthy controls, we ran an independent group of 54 participants with the exact same paradigm. The results replicated the earlier findings such that there was no priming effect compared to baseline. This lack of priming ability in the healthy older population could be due to cognitive interference between declarative and non-declarative memory systems, which increases as we get older (mean age of the control group is 62 years).
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Visual attention influences language processing. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Research into the interaction between attention and language has mainly focused on how language influences attention. But how does attention influence language? Considering we are constantly bombarded with attention-grabbing stimuli unrelated to the conversation we are conducting, this is certainly an interesting topic of investigation. In this study we aim to uncover how limiting attentional resources influences language behaviour. We focus on syntactic priming: a task which captures how participants adapt their syntactic choices to their partner. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax attentional resources. We thus measured participants' ability to process syntax while their attention is not, slightly, or overly taxed. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences, but no effect when conducted with target sentences. Our results illustrate how, during the prime phase of the syntactic priming task, attention differentially affects syntactic processing, whereas during the target phase there is no effect of attention on language behaviour. We explain these results in terms of the implicit learning necessary to prime and how different levels of attention taxation can either impair or enhance the way language is encoded.
  • Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment reflects temporal predictions guiding speech comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Speech segmentation requires flexible mechanisms to remain robust to features such as speech rate and pronunciation. Recent hypotheses suggest that low-frequency neural oscillations entrain to ongoing syllabic and phrasal rates, and that neural entrainment provides a speech-rate invariant means to discretize linguistic tokens from the acoustic signal. How this mechanism functionally operates remains unclear. Here, we test the hypothesis that neural entrainment reflects temporal predictive mechanisms. It implies that neural entrainment is built on the dynamics of past speech information: the brain would internalize the rhythm of preceding speech to parse the ongoing acoustic signal at optimal time points. A direct prediction is that ongoing neural oscillatory activity should match the rate of preceding speech even if the stimulation changes, for instance when the speech rate suddenly increases or decreases. Crucially, the persistence of neural entrainment to past speech rate should modulate speech perception. We performed an MEG experiment in which native Dutch speakers listened to sentences with varying speech rates. The beginning of the sentence (carrier window) was either presented at a fast or a slow speech rate, while the last three words (target window) were displayed at an intermediate rate across trials. Participants had to report the perception of the last word of the sentence, which was ambiguous with regards to its vowel duration (short vowel /ɑ/ – long vowel /aː/ contrast). MEG data was analyzed in source space using beamformer methods. Consistent with previous behavioral reports, the perception of the ambiguous target word was influenced by the past speech rate; participants reported more /aː/ percepts after a fast speech rate, and more /ɑ/ after a slow speech rate. During the carrier window, neural oscillations efficiently tracked the dynamics of the speech envelope. 
During the target window, we observed oscillatory activity that corresponded in frequency to the preceding speech rate. Traces of neural entrainment to the past speech rate were significantly observed in medial prefrontal areas. Right superior temporal cortex also showed persisting oscillatory activity which correlated with the observed perceptual biases: participants whose perception was more influenced by the manipulation in speech rate also showed stronger remaining neural oscillatory patterns. The results show that neural entrainment lasts after rhythmic stimulation. The findings further provide empirical support for oscillatory models of speech processing, suggesting that neural oscillations actively encode temporal predictions for speech comprehension.
  • Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment to speech rhythms reflects temporal predictions and influences word comprehension. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
  • Lockwood, G., Drijvers, L., Hagoort, P., & Dingemanse, M. (2016). In search of the kiki-bouba effect. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    The kiki-bouba effect, where people map round shapes onto round sounds (such as [b] and [o]) and spiky shapes onto “spiky” sounds (such as [i] and [k]), is the most famous example of sound symbolism. Many behavioural variations have been reported since Köhler’s (1929) original experiments. These studies examine orthography (Cuskley, Simner, & Kirby, 2015), literacy (Bremner et al., 2013), and developmental disorders (Drijvers, Zaadnoordijk, & Dingemanse, 2015; Occelli, Esposito, Venuti, Arduino, & Zampini, 2013). Some studies have suggested that the cross-modal associations between linguistic sound and physical form in the kiki-bouba effect are quasi-synaesthetic (Maurer, Pathman, & Mondloch, 2006; Ramachandran & Hubbard, 2001). However, there is a surprising lack of neuroimaging data in the literature that explains how these cross-modal associations occur (with the exceptions of Kovic et al. (2010) and Asano et al. (2015)). We presented 24 participants with randomly generated spiky or round figures and 16 synthesised, reduplicated CVCV (vowels: [i] and [o], consonants: [f], [v], [t], [d], [s], [z], [k], and [g]) nonwords based on Cuskley et al. (2015). This resulted in 16 nonwords across four conditions: full match, vowel match, consonant match, and full mismatch. Participants were asked to rate on a scale of 1 to 7 how well the nonword fit the shape it was presented with. EEG was recorded throughout, with epochs timelocked to the auditory onset of the nonword. There was a significant behavioural effect of condition (p<0.0001). Bonferroni t-tests showed that participants rated full match nonwords more highly than full mismatch nonwords. However, this behavioural effect was not reflected in the ERP waveforms. One possible reason for the absence of an ERP effect is that it may jitter over a broad latency range. Oscillatory effects are currently being analysed, since these are less dependent on precise time-locking to the triggering events.
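    As a hedged illustration of the behavioural analysis described above (Bonferroni-corrected pairwise t-tests on condition ratings), the following Python sketch uses simulated placeholder ratings rather than the study's data; the condition names and the number of pairwise comparisons are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 24

# Simulated 1-7 ratings per condition (placeholder data, not the study's).
ratings = {
    "full_match":    rng.integers(4, 8, n_subjects),
    "full_mismatch": rng.integers(1, 5, n_subjects),
}

# Paired t-test between two conditions, Bonferroni-corrected for the
# number of pairwise comparisons among four conditions (4 choose 2 = 6).
t, p = stats.ttest_rel(ratings["full_match"], ratings["full_mismatch"])
p_bonferroni = min(p * 6, 1.0)
print(f"t = {t:.2f}, corrected p = {p_bonferroni:.4f}")
```

    The Bonferroni step simply multiplies each uncorrected p-value by the number of comparisons, capped at 1.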
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). Synthesized size-sound sound symbolism. Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-10 - 2016-08-13.

    Abstract

    Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition.
  • Schoot, L., Stolk, A., Hagoort, P., Garrod, S., Segaert, K., & Menenti, L. (2016). Finding your way in the zoo: How situation model alignment affects interpersonal neural coupling. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    INTRODUCTION: We investigated how speaker-listener alignment at the level of the situation model is reflected in inter-subject correlations in temporal and spatial patterns of brain activity, also known as between-brain neural coupling (Stephens et al., 2010). We manipulated the complexity of the situation models that needed to be communicated (simple vs complex situation model) to investigate whether this affects neural coupling between speaker and listener. Furthermore, we investigated whether the degree to which alignment was successful was positively related to the degree of between-brain coupling. METHOD: We measured neural coupling (using fMRI) between speakers describing abstract zoo maps, and listeners interpreting those descriptions. Each speaker described both a ‘simple’ map, a 6x6 grid including five animal locations, and a ‘complex’ map, an 8x8 grid including 7 animal locations, from memory, and with the order of map description randomized across speakers. Audio-recordings of the speakers’ utterances were then replayed to the listeners, who had to reconstruct the zoo maps on the basis of their speakers’ descriptions. On the group level, we used a GLM approach to model between-brain neural coupling as a function of condition (simple vs complex map). Communicative success, i.e. map reproduction accuracy, was added as a covariate. RESULTS: Whole brain analyses revealed a positive relationship between communicative success and the strength of speaker-listener neural coupling in the left inferior parietal cortex. That is, the more successful listeners were in reconstructing the map based on what their partner described, the stronger the correlation between that speaker and listener's BOLD signals in that area. Furthermore, within the left inferior parietal cortex, pairs in the complex situation model condition showed stronger between-brain neural coupling than pairs in the simple situation model condition. 
DISCUSSION: This is the first two-brain study to explore the effects of the complexity of the communicated situation model and the degree of communicative success on (language-driven) between-brain neural coupling. Interestingly, our effects were located in the inferior parietal cortex, previously associated with visuospatial imagery. This process likely plays a role in our task, in which the communicated situation models had a strong visuospatial component. Given that coupling increased the more successfully situation models were aligned (i.e., with map reproduction accuracy), it was surprising that we found stronger coupling in the complex than in the simple situation model condition. We plan ROI analyses in primary auditory, core language, and discourse processing regions. The present findings open the way for exploring the interaction between situation models and linguistic computations during communication.
  • Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Maybe syntactic alignment is not affected by social goals?. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Although it has been suggested that linguistic alignment can be influenced by speakers' relationship with their listener, previous studies provide inconsistent results. We tested whether speakers' desire to be liked affects syntactic alignment, and simultaneously assessed whether alignment affects perceived likeability. To that end, primed participants (PPs) were primed by another naive participant (Evaluator). PP and Evaluator took turns describing photographs with active/passive sentences. Unknown to the PP, we controlled the Evaluator's syntax by having them read out sentences. PPs' desire to be liked was manipulated by assigning pairs to a Control (secret evaluation by the Evaluator), Evaluation (PPs were aware of the evaluation), or Directed Evaluation (PPs knew about the evaluation and were instructed to make a positive impression) condition. PPs showed significant syntactic alignment (more passives produced after passive primes). However, there was no interaction with condition: PPs did not align more in the (Directed) Evaluation conditions than in the Control condition. Our results thus do not support the conclusion that speakers' desire to be liked affects syntactic alignment. Furthermore, there was no reliable relationship between syntactic alignment and how likeable PPs appeared to their Evaluator: there was a negative effect in the Control and Evaluation conditions, but no relationship in the Directed Evaluation condition.
  • Sharoh, D., van Mourik, T., Bains, L. J., Segaert, K., Weber, K., Hagoort, P., & Norris, D. G. (2016). Investigation of depth-dependent BOLD during language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Neocortex is known to be histologically organized with respect to depth, and neuronal connections across cortical layers form part of the brain's functional organization[1]. Efferent (outgoing) and afferent (incoming) inter-regional connections originate and terminate at different depths, and this structure relates to the internal/external origin of neuronal activity. Specifically, efferent inter-regional connections are associated with internally directed, top-down activity; afferent inter-regional connections are associated with bottom-up activity originating from external stimulation. The contribution of top-down and bottom-up neuronal activity to the BOLD signal can therefore perhaps be inferred from depth-related fluctuations in BOLD. By dissociating top-down from bottom-up effects in fMRI, investigators could observe the relative contribution of internally and externally generated activity to the BOLD signal, and potentially test hypotheses regarding the directionality of BOLD connectivity. Previous investigation of depth-dependent BOLD has focused on human visual cortex[2]. In the present work, we designed an experiment to serve as a proof of principle that (1) depth-dependent BOLD can be measured in higher cortical areas during a language processing task, and (2) that the relative contributions of discrete depths to the total BOLD signal vary as a function of experimental condition. Data were collected on the Siemens 7T scanner at the Hahn Institute in Essen, Germany. Submillimeter (0.8 mm³), T1-weighted data were acquired using MP2RAGE, along with near whole-brain, submillimeter (0.9 × 0.9 × 0.94 mm³, 112 slices) 3D-EPI task data. The field of view fully covered bilateral temporal and fusiform regions, but excluded superior brain areas on the order of several centimeters. 
Participants were presented with an event-related paradigm involving the presentation of words, pseudowords and nonwords in visual and auditory modalities. Only the visual modality is discussed here. Cortical segmentation was performed using FreeSurfer's surface pipeline. We parcellated the gray matter volume into discrete depths, and the analysis of depth-dependent BOLD was performed with the Laminar Analysis Toolbox (van Mourik). Further analysis was performed using FreeSurfer, AFNI and in-house MATLAB code. Regions included in the depth-dependent analysis were determined by first-level analysis. We have presently collected data from 10 participants; four were excluded due to equipment malfunction. In the first-level analysis (volume registration, smoothing, GLM, and significance testing), we observe fusiform activation for the Realword>Nonword and Pseudoword>Nonword contrasts. These contrasts additionally show activation along the middle temporal gyrus. The depth-dependent analysis was performed on fusiform clusters generated during the first-level analysis. These clusters appeared to show depth-dependent signal differences as a function of experimental condition. We suspect these differences may be related to layer-specific activation and reflect the relative contributions of top-down and bottom-up activity to the observed signal. These are preliminary results, part of an ongoing effort to establish novel, depth-dependent analysis techniques in higher cortical areas and within the language domain. Future analyses will investigate the nature of the depth-dependent differences and the connectivity profiles of depth-dependent variation among distal cortical regions. [1] Douglas, R. J., & Martin, K. A. C. (2004). Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27, 419-551. [2] Kok, P., et al. (2016). Selective activation of the deep layers of the human primary visual cortex by top-down feedback. Current Biology, 26, 371-376.
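    The depth parcellation step described above can be illustrated with a toy Python sketch (hypothetical arrays only; this is not the Laminar Analysis Toolbox API): voxels are binned by relative cortical depth and a BOLD effect estimate is averaged per bin to obtain a depth profile.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 1000

# Hypothetical inputs: relative cortical depth per voxel (0 = pial surface,
# 1 = white-matter boundary) and a simulated BOLD effect per voxel that
# increases with depth (illustrative, not real data).
depth = rng.uniform(0, 1, n_voxels)
bold = 1.0 + 0.5 * depth + 0.1 * rng.standard_normal(n_voxels)

# Parcellate the gray matter into discrete depth bins and average the
# BOLD effect within each bin, yielding one estimate per depth.
n_bins = 6
bins = np.minimum((depth * n_bins).astype(int), n_bins - 1)
profile = np.array([bold[bins == b].mean() for b in range(n_bins)])
print(profile)  # depth profile: one mean BOLD estimate per bin
```

    Comparing such profiles across experimental conditions is, in spirit, what a depth-dependent analysis of condition effects amounts to.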
  • Tan, Y., Acheson, D. J., & Hagoort, P. (2016). Moving beyond single words: Dissociating levels of linguistic representation in short-term memory (STM). Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    This study assessed the role of semantic, phonological, and grammatical levels of representation in short-term list recall through a 2 (meaningfulness) × 2 (phonological similarity) × 2 (grammaticality) manipulation. Dutch subjects (Experiments 1-2), English subjects (Experiments 3-4), and seven aphasic patients (Experiment 5) were required to recall lists consisting of adjective-noun word-pairs. Within each list, meaningfulness was manipulated by pairing adjectives and nouns in a meaningful or non-meaningful way; phonological similarity was manipulated through the degree of phonological overlap between words; grammaticality was manipulated through the order of the adjective and noun within each word pair in English (e.g., “salty meat” vs. “meat salty”) and through morphological agreement in Dutch. Overall, subjects showed better recall for words in the meaningful, phonologically-dissimilar, and grammatical conditions. Moreover, by relating these main effects to subjects' phonological and semantic STM capacity, we found that subjects with better phonological STM were less affected by the meaningfulness manipulation, while subjects with better semantic STM were less affected by the phonological manipulations. These results demonstrate that there are multiple routes to grouping information in STM via the combinatorial constraints afforded by language, and that subjects may benefit from additional cues when memory load is high at certain levels.
  • Udden, J., Hulten, A., Schoffelen, J.-M., Lam, N., Kempen, G., Petersson, K. M., & Hagoort, P. (2016). Dynamics of supramodal unification processes during sentence comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    It is generally assumed that structure building processes in the spoken and written modalities are subserved by modality-independent lexical, morphological, grammatical, and conceptual processes. We present a large-scale neuroimaging study (N=204) on whether the unification of sentence structure is supramodal in this sense, testing if observations replicate across written and spoken sentence materials. The activity in the unification network should increase when it is presented with a challenging sentence structure, irrespective of the input modality. We build on the well-established findings that multiple non-local dependencies, overlapping in time, are challenging and that language users disprefer left- over right-branching sentence structures in written and spoken language, at least in the context of mainly right-branching languages such as English and Dutch. We thus focused our study with Dutch participants on a left-branching processing complexity measure. Supramodal effects of left-branching complexity were observed in a left-lateralized perisylvian network. The left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching processing complexity. The left anterior middle temporal gyrus (LaMTG) and left inferior parietal lobe (LIPL) were also significant, although less specifically. The LaMTG was increasingly active also for sentences with increasing right-branching processing complexity. A direct comparison between left- and right-branching processing complexity yielded activity in an LIFG ROI for left > right-branching complexity, while the right > left contrast showed no activation. Using a linear contrast testing for increases in the left-branching complexity effect over the sentence, we found significant activity in LIFG and LpMTG. 
In other words, the activity in these regions increased from sentence onset to end, in parallel with the increase of the left-branching complexity measure. No similar increase was observed in LIPL. Thus, the observed functional segregation during sentence processing of LaMTG and LIPL vs. LIFG and LpMTG is consistent with our observation of differential activation changes in sensitivity to left- vs. right-branching structure. While LIFG, LpMTG, LaMTG and LIPL all contribute to the supramodal unification processes, the results suggest that these regions differ in their respective contributions to the subprocesses of unification. Our results speak to the high processing costs of (1) simultaneous unification and (2) maintenance of constituents that are not yet attached to the already unified part of the sentence. Sentences with high left- (compared to right-) branching complexity impose an added load on unification. We show that this added load leads to an increased BOLD response in left perisylvian regions. The results are relevant for understanding the neural underpinnings of the processing difficulty linked to multiple, overlapping non-local dependencies. In conclusion, we used the left- and right branching complexity measures to index this processing difficulty and showed that the unification network operates with similar spatiotemporal dynamics over the course of the sentence, during unification of both written and spoken sentences.
  • Van den Broek, D., Uhlmann, M., Fitz, H., Hagoort, P., & Petersson, K. M. (2016). Spiking neural networks for semantic processing. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Weber, K., Meyer, A. S., & Hagoort, P. (2016). The acquisition of verb-argument and verb-noun category biases in a novel word learning task. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    We show that language users readily learn the probabilities of novel lexical cues to syntactic information (verbs biasing towards a prepositional object dative vs. double-object dative and words biasing towards a verb vs. noun reading) and use these biases in a subsequent production task. In a one-hour exposure phase participants read 12 novel lexical items, embedded in 30 sentence contexts each, in their native language. The items were either strongly (100%) biased towards one grammatical frame or syntactic category assignment or unbiased (50%). The next day participants produced sentences with the newly learned lexical items. They were given the sentence beginning up to the novel lexical item. Their output showed that they were highly sensitive to the biases introduced in the exposure phase.
    Given this rapid learning and use of novel lexical cues, this paradigm opens up new avenues to test sentence processing theories. Thus, with close control on the biases participants are acquiring, competition between different frames or category assignments can be investigated using reaction times or neuroimaging methods.
    Generally, these results show that language users adapt to the statistics of the linguistic input, even to subtle lexically-driven cues to syntactic information.
  • Basnakova, J., Weber, K., Petersson, K. M., Hagoort, P., & Van Berkum, J. J. A. (2010). Understanding speaker meaning: Neural correlates of pragmatic inferencing in language comprehension. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: Natural communication is not only literal, but to a large extent also inferential. For example, sometimes people say "It is hard to give a good presentation" to actually mean "Your talk was a mess!", and listeners need to infer the speaker’s hidden message. In spite of the pervasiveness of this phenomenon in everyday communication, and even though the hidden meaning is often what it’s all about, very little is known about how the brain supports the comprehension of indirect language. What are the neural systems involved in the inferential process, and how are they different from those involved in word- and sentence-level meaning processing? We investigated the neural correlates of this so-called pragmatic inferencing in an fMRI study involving natural spoken dialogue. Methods: As a test case, we focused on the inferences needed to understand indirect replies. 18 native listeners of Dutch listened to dialogues ending in a question-answer (QA) pair. The final and critical utterance, e.g., "It is hard to give a good presentation", had different meanings depending on the dialogue context and the immediately preceding question: (1) Direct reply: Q: "How is it to give a good presentation?" A: "It is hard to give a good presentation" (2) Indirect reply, neutral: Q: "Will you give a presentation at the conference?" (rather than a poster) A: "It is hard to give a good presentation" (3) Indirect reply, face-saving: Q: "Did you like my presentation?" A: "It is hard to give a good presentation" While one of the indirect conditions was neutral, the other involved a socio-emotional aspect, as the reason for indirectness was to 'save one’s face' (as in excuses or polite refusals). Participants were asked to pay attention to the dialogues and, to ensure this, occasionally received a comprehension question (on filler items only). No other task demands were imposed. 
Results: Relative to direct replies in exchanges like (1), the indirect replies in exchanges like (2) and (3) activated brain structures associated with theory of mind and inferencing: right angular gyrus (TPJ), right DM prefrontal / frontal cortex (SMA, ACC). Both types of indirect replies also bilaterally activated the insula, an area known to be involved in empathy and affective processing. Moreover, both types of indirect replies recruited bilateral inferior frontal gyrus, thought to play a role in situation model updating. The comparison between neutral (2) and face-saving (3) indirect replies revealed that the presumed affective load of the face-saving replies activated just one additional area: right inferior frontal gyrus; we did not see any activation in classic affect-related areas. Importantly, we used the same critical sentences in all conditions. Our results can thus not be explained by lexico-semantic or other (e.g. syntactic, word frequency) factors. Conclusions: To extend neurocognitive research on meaning in language beyond the level of straightforward literal utterances, we investigated the neural correlates of pragmatic inferencing in an fMRI study involving indirect replies in natural spoken dialogue. Our findings reveal that the areas used to infer the intended meaning of an implicit message are partly different from the classic language network. Furthermore, the identity of the areas involved is consistent with the idea that inferring hidden meanings requires taking the speaker’s perspective. This confirms the importance of perspective taking in language comprehension, even in a situation where the listener is not the one addressed. Also, as the areas recruited by indirect replies generally do not light up in standard fMRI sentence comprehension paradigms, our study testifies to the importance of studying language understanding in richer contexts in which we can tap aspects of pragmatic processing, beyond the literal code.
  • Bastiaansen, M. C. M., & Hagoort, P. (2010). Frequency-based segregation of syntactic and semantic unification?. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: During language comprehension, word-level information has to be integrated (unified) into an overall message-level representation. Theoretical accounts (e.g. Jackendoff, 2007; see also Hagoort, 2005) propose that unification operations occur in parallel at the phonological, syntactic and semantic levels. Meta-analysis of fMRI studies (Bookheimer, 2002) shows that largely overlapping areas in left inferior frontal gyrus (LIFG) are activated during the different types of unification operations. This raises the question of how the brain functionally segregates these different unification operations. Previously, we have established that semantic unification modulates oscillatory EEG activity in the gamma frequency range (Hagoort, Hald, Bastiaansen, & Petersson, 2004; Hald, Bastiaansen, & Hagoort, 2005). More recently, we have shown that syntactic unification modulates MEG activity in the lower beta frequencies (13-18 Hz). Here we report a fully within-subjects replication of these findings. Methods: We recorded the EEG (64 channels, filtered from 0.1 - 100 Hz) of 30 subjects while they read sentences presented in serial visual presentation mode. Sentences were either correct (COR), contained a semantic violation (SEM), or a syntactic (grammatical gender agreement) violation (SYN). Two additional conditions were constructed on the basis of COR sentences by (1) replacing all the nouns, verbs and adjectives with semantically unrelated ones that were matched for length and frequency, making the sentences semantically uninterpretable (global semantic violation, GSEM), and (2) randomly re-assigning the word order of the COR sentences, so as to remove overall syntactic structure from the sentences (global syntactic violation, GSYN). Here we only report the results of analyses on the COR, GSEM and GSYN conditions. 
EEG epochs from 1s preceding sentence onset to 6s after sentence onset (corresponding to the first 10 words in each sentence) were extracted from the EEG recordings, and epochs with artifacts were removed. A multitaper-based time-frequency (TF) analysis of power changes (Mitra & Pesaran, 1999) was performed, separately for a low-frequency window (1-30 Hz) and a high-frequency window (25-100 Hz). Significant differences in the TF representations between any two conditions were established using non-parametric random permutation analysis (Maris & Oostenveld, 2007). Results: Semantic unification: gamma. Figure 1 presents the comparison between the TF responses of the semantically intact condition (COR) and those of the semantically incorrect ones (GSEM, but also GSYN, since the absence of syntactic structure makes the sentence semantically uninterpretable as well). Both the COR-GSEM and the COR-GSYN contrasts show significantly larger power for the semantically correct sentences in a frequency range around 40 Hz (as well as some less consistent differences in higher frequencies). No differences were observed between GSEM and GSYN in the frequency range 25-100 Hz. Syntactic unification: beta. Figure 2 presents the comparison between the TF responses of the syntactically correct conditions (COR and GSEM) and the incorrect one (GSYN). Both the COR-GSYN and the GSEM-GSYN contrasts show larger power in the 13-18 Hz frequency range for the syntactically correct sentences. No significant differences were observed between COR and GSEM in the frequency range 1-30 Hz. Conclusions: During the comprehension of correct sentences, both low beta power (13-18 Hz) and gamma power (here around 40 Hz) slowly increase as the sentence unfolds. When a sentence is devoid of syntactic structure, the beta increase is absent. When a sentence is devoid of semantically coherent structure, the gamma increase is absent. 
Together the data show a fully within-subjects confirmation of previously obtained results from separate experiments (for review, see Bastiaansen & Hagoort, 2006). This suggests that neuronal synchronization in LIFG at gamma frequencies is related to semantic unification, whereas synchronization at beta frequencies is related to syntactic unification. Thus, our data are consistent with the notion of functional segregation through frequency-coding during unification operations in language comprehension. References: Bastiaansen, M. (2006), 'Oscillatory neuronal dynamics during language comprehension', Prog Brain Res, vol. 159, pp. 179-196. Bookheimer, S. (2002), 'Functional MRI of language: new approaches to understanding the cortical organization of semantic processing', Annu Rev Neurosci, vol. 25, pp. 151-188. Hagoort, P. (2005), 'On Broca, brain, and binding: a new framework', Trends Cogn Sci, vol. 9, no. 9, pp. 416-423. Hagoort, P. (2004), 'Integration of word meaning and world knowledge in language comprehension', Science, vol. 304, no. 5669, pp. 438-441. Hald, L. (2005), 'EEG theta and gamma responses to semantic violations in online sentence processing', Brain & Language, vol. 96, no. 1, pp. 90-105. Jackendoff, R. (2007), 'A Parallel Architecture perspective on language processing', Brain Research, vol. 1146, pp. 2-22. Maris, E. (2007), 'Nonparametric statistical testing of EEG- and MEG-data', J Neurosci Methods, vol. 164, no. 1, pp. 177-190. Mitra, P. (1999), 'Analysis of dynamic brain imaging data', Biophys. J., vol. 76, no. 2, pp. 691-708.
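    As a hedged sketch of the multitaper power estimation underlying the TF analysis above (the DPSS-taper approach of Mitra & Pesaran), the following Python example estimates the power spectrum of a synthetic 40 Hz oscillation; the sampling rate, taper bandwidth, and test signal are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_power(signal, fs, nw=3.0):
    """Power spectrum averaged over DPSS tapers (multitaper estimate)."""
    n = len(signal)
    # 2*NW - 1 tapers is the conventional choice.
    tapers = dpss(n, NW=nw, Kmax=int(2 * nw) - 1)
    spectra = np.abs(np.fft.rfft(tapers * signal, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0)

# Example: 1 s of a noisy 40 Hz oscillation sampled at 500 Hz
# (a gamma-band proxy for illustration).
fs = 500
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.default_rng(1).standard_normal(len(t))
freqs, power = multitaper_power(x, fs)
print(freqs[np.argmax(power)])  # peak near 40 Hz
```

    Averaging spectra across orthogonal tapers trades some frequency resolution (here roughly ±NW Hz for a 1 s window) for a lower-variance power estimate, which is why the approach suits noisy single-trial EEG data.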
  • Bastiaansen, M. C. M., & Hagoort, P. (2010). Frequency-based segregation of syntactic and semantic unification?. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    During language comprehension, word-level information has to be integrated (unified) into an overall message-level representation. Unification operations occur in parallel at the phonological, syntactic and semantic levels, and meta-analyses of fMRI studies show that largely overlapping areas in left inferior frontal gyrus (LIFG) are activated during different unification operations. How does the brain functionally segregate these different operations? Previously we established that semantic unification modulates oscillatory EEG activity in the gamma frequency range, and that syntactic unification modulates MEG activity in the beta range. We propose that there is functional segregation of syntactic and semantic unification in LIFG based on frequency-coding. We report a within-subjects replication of the previous findings. Subjects read visually presented sentences that were either correct (COR), semantically incorrect (created by replacing the nouns, verbs and adjectives of the COR sentences with semantically unrelated ones), or semantically and syntactically incorrect (created by randomizing the word order of the COR sentences). Time-frequency analysis of power was performed on EEG epochs corresponding to entire sentences. The COR-GSEM and the COR-GSYN contrasts show larger power for the semantically correct sentences in a frequency range around 40 Hz. The COR-GSYN and the GSEM-GSYN contrasts show larger power in the 13-18 Hz frequency range for the syntactically correct sentences. In sum, during the comprehension of correct sentences, both low beta power (13-18 Hz) and gamma power (here around 40 Hz) increase. When a sentence is devoid of syntactic structure, the beta increase is absent; when there is no semantic structure, the gamma increase is absent. Thus, our data are consistent with the notion of functional segregation through frequency-coding during unification operations.
  • Folia, V., Hagoort, P., & Petersson, K. M. (2010). Broca's region: Implicit sequence learning and natural syntax processing. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    In an event-related fMRI study, we examined the overlap between the implicit processing of structured sequences, generated by a simple right-linear artificial unification grammar, and natural syntax-related variability in the same subjects. Research investigating rule learning of potential linguistic relevance through artificial syntax often uses performance feedback and/or explicit instruction concerning the underlying rules. It is assumed that this approach ensures the right type of 'rule-following' because the rules are either explicitly provided to the subjects or explicitly discovered by the subjects during trial-and-error learning with feedback. In this work, we use a novel implicit preference classification task based on the structural mere exposure effect. Under conditions that in important respects are similar to those of natural language development (i.e., no explicit learning or teaching instruction, and no performance feedback), 32 subjects were exposed for 5 days to grammatical sequences during an immediate short-term memory task. On day 5, a preference classification test was administered, in which new sequences were presented. In addition, natural language data were acquired in the same subjects. Implicit preference classification was sensitive enough to show robust behavioral and fMRI effects. Preference classification of structured sequences significantly activated Broca's region (BA 44/45), which was further activated by artificial syntactic violations. The effects related to artificial syntax in BA 44/45 were identical when we masked these with activity related to natural syntax processing. Moreover, the medial temporal lobe was deactivated during artificial syntax processing, consistent with the view that implicit processing does not rely on declarative memory mechanisms supported by the medial temporal lobe.
In summary, we show that implicit acquisition of structured sequence knowledge results in the engagement of Broca's region during structured sequence processing. We conclude that Broca's region is a generic on-line sequence processor integrating information, in an incremental and recursive manner, independent of whether the sequences processed are structured by a natural or an artificial syntax.
  • Franke, B., Rijpkema, M., Arias Vasquez, A., Veltman, J. A., Brunner, H. G., Hagoort, P., & Fernandez, G. (2010). Genome-wide association study of regional brain volume suggests involvement of known psychiatric candidate genes, identifies new candidates for psychiatric disorders and points to potential modes of their action. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Though most psychiatric disorders are highly heritable, it has been hard to identify the genetic risk factors involved, which are most likely of small individual effect size. A possible way to aid identification of risk genes is the use of intermediate phenotypes. These are supposed to be closer to the biological substrate(s) of the disorder than psychiatric diagnoses, and therefore less genetically complex. Intermediate phenotypes can be defined, e.g., at the level of brain function and of regional brain structure. Both are highly heritable, and regional brain structure is linked to brain function. Within the Brain Imaging Genetics (BIG) study at the Radboud University Nijmegen (Medical Centre) we performed a genome-wide association study (GWAS) in 1000 of the currently 1400 healthy study participants. For all BIG participants, structural MRI brain images were available. Gray and white matter volumes were determined by brain segmentation using SPM software. FSL-FIRST was used to assess volumes of specific brain structures. Genotyping was performed on Affymetrix 6.0 arrays. Results implicate known candidates from earlier GWAS and candidate-gene studies of mental disorders in the regulation of regional brain structure. For example, polymorphisms in CDH13, featuring among the top findings of GWAS in disorders including ADHD, addiction and schizophrenia, were found associated with amygdala volume. The ADHD candidate gene SNAP25 was found associated with total brain volume. In conclusion, the use of intermediate phenotypes based on (subcortical) brain volumes may shed more light on pathways from genes to diseases, but can also be expected to facilitate gene identification in psychiatric disorders.
  • Hagoort, P. (2010). Beyond Broca, brain, and binding. Talk presented at Symposium Marta Kutas. Nijmegen. 2010-05-19 - 2010-05-20.
  • Hagoort, P. (2010). Beyond the language given: Language processing from an embrained perspective. Talk presented at SISSA colloquium. Trieste, Italy. 2010-12-13.
  • Hagoort, P. (2010). Breintaal. Talk presented at Club of Spinoza Prize winners. Rijnsburg, The Netherlands. 2010-12-01.
  • Hagoort, P. (2010). De talige netwerken in ons brein. Talk presented at the Wetenschappelijke Vergadering en Algemene Ledenvergadering van de Nederlandse Vereniging voor Neurologie (NVN). Amsterdam, The Netherlands. 2010-11-04 - 2010-11-04.
  • Hagoort, P. (2010). Communication beyond the language given. Talk presented at International Neuropsychological Symposium. Ischia, Italy. 2010-06-22 - 2010-06-26.
  • Hagoort, P. (2010). [Organizing committee and session chair]. Second Annual Neurobiology of Language Meeting [NCL 2010]. San Diego, CA, 2010-11-11 - 2010-11-12.
  • Hagoort, P. (2010). In gesprek met ons brein. Talk presented at Paradisolezingen 2010. Amsterdam. 2010-03-28.
  • Hagoort, P. (2011). Language processing: A disembodied perspective [Keynote lecture]. Talk presented at The Workshop Embodied & Situated Language Processing [ESLP 2010]. Bielefeld, Germany. 2011-08-25 - 2011-08-27.
  • Hagoort, P. (2010). The science of human nature. Talk presented at Anthos Conference. Noordwijk, The Netherlands. 2010-01-08.
  • Hagoort, P., Segaert, K., Weber, K. M., De Lange, F. P., & Petersson, K. M. (2010). The suppression of repetition enhancement: A review. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Repetition suppression is generally accepted as the neural correlate of behavioural priming and is often used to selectively identify the neuronal representations associated with a stimulus. However, this does not explain the large number of repetition enhancement effects observed under very similar conditions. Based on a review of a large set of studies, we propose several variables biasing repetition effects towards enhancement instead of suppression. On the one hand, there are stimulus variables which influence the direction of repetition effects: visibility (e.g., in the case of degraded stimuli, perceptual learning occurs); novelty (e.g., in the case of unfamiliar stimuli, a novel network formation process occurs); and timing intervals (e.g., repetition effects are sensitive to stimulus onset asynchronies). On the other hand, repetition effects are not solely automatic processes, triggered by particular types or sequences of stimuli. The brain is continuously and actively filtering, attending to and interpreting the information provided by our senses. Consequently, internal state variables like attention, expectation and explicit memory modulate repetition effects towards enhancement versus suppression. Current models of repetition suppression, i.e. the accumulation, fatigue and sharpening models, have so far left out top-down factors and cannot, or can only partially, account for repetition enhancement effects. Instead, we propose that models which incorporate both bottom-up stimulus factors and top-down cognitive factors are called for in order to better understand repetition effects. A good candidate is the predictive coding model, in which sensory evidence is interpreted according to subjective biases and statistical accounts of past encounters.
  • Hagoort, P. (2010). The modular ghost in the recurrent connection machine: Where is the modular mind in a brain full of recurrent connectivity?. Talk presented at The Modularity of Mind: Revisions and Prospects. Heinrich-Heine University Düsseldorf, Germany. 2010-10-29.
  • Händel, B., Van Leeuwen, T. M., Jensen, O., & Hagoort, P. (2010). Lateralization of alpha oscillations in grapheme-color synaesthetes suggests altered color processing. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    In grapheme-color synaesthesia, the percept of a particular grapheme causes additional experiences of color. To investigate this interesting integration of modalities, brain activity was recorded from 7 synaesthetes and matched controls using magnetoencephalography. Subjects had to report the color change of one of two letters presented left and right of a fixation cross. One of the letters was neutral (eliciting no color percept); the other one could either be neutral, colored, or elicit synaesthesia (in synaesthetes). Additionally, the side of color change was validly or invalidly cued. As expected, in both subject groups 10 Hz alpha oscillations decreased contralateral to the attended side, leading to an alpha lateralization. Additionally, controls as well as synaesthetes showed a stronger alpha reduction if the attended letter was colored, indicating that color increased the attentional allocation. Interestingly, synaesthetes showed the same effect of alpha decrease for synaesthetic color. While color on the attended side reduced alpha power in controls and synaesthetes, color on the unattended side only reduced alpha power in synaesthetes. Psychophysical measures also indicated changed processing of unattended color stimuli in synaesthetes. Only controls profited from the cue when attending the noncolor stimulus. Synaesthetes, however, performed worse if the noncolor stimulus was validly rather than invalidly cued. This means that synaesthetes performed better on the colored stimulus despite an invalid attentional cue. Changed alpha power lateralization and psychophysics due to unattended colorful input indicate that synaesthetes are more affected by color than controls. This might be due to increased attentional demand.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Dynamics of early word learning in nine-month-olds: An ERP study. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    What happens in the brain when infants are learning the meaning of words? Only a few studies (Torkildsen et al., 2008; Friedrich & Friederici, 2008) have addressed this question, and they focused only on novel word learning, not on the acquisition of infants' first words. From behavioral research we know that 12-month-olds can recognize novel exemplars of early typical word categories, but only after training them from nine months on (Schafer, 2005). What happens in the brain during such training? With event-related potentials, we studied the effect of training context on word comprehension. We manipulated the type/token ratio of the training context (one versus six exemplars). 24 normally developing Dutch nine-month-olds (± 14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15-month-olds. All trials consisted of a high-resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training-test block contrasted two words that did not share initial phonemes or semantic class. The training phase started with six trials of one category, followed by six trials of the second category. Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appears to be larger if the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture-word mismatch paradigm. Here, we observed a similar N400 as Mills et al. (2005) did for 13-month-olds. German 12-month-olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to an immaturity of the N400 mechanism. The N400 was present in Dutch 9-month-olds, even though some parents judged their child not to understand most of the words. There was no interaction with training type, suggesting that type/token ratio does not affect infants' word recognition of novel exemplars.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word learning in nine-month-olds: Dynamics of picture-word priming. Talk presented at 8th Sepex conference / 1st Joint conference of the EPS and SEPEX. Granada, Spain. 2010-04.

    Abstract

    How do infants learn words? Most studies focus on novel word learning to address this question. Only a few studies concentrate on the stage when infants learn their first words. Schafer (2005) showed that 12-month-olds can recognize novel exemplars of early typical word categories, but only after training them from nine months on. What happens in the brain during such training? With event-related potentials, we studied the effect of training context on word comprehension. 24 normally developing Dutch nine-month-olds (± 14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15-month-olds. All trials consisted of a high-resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training-test block contrasted two words that did not share initial phonemes or semantic class. The training phase started with six trials of one category, followed by six trials of the second category. We manipulated the type/token ratio of the training context (one versus six exemplars). Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appears to be larger if the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture-word mismatch paradigm. Here, we observed a similar N400 as Mills et al. (2005) did for 13-month-olds. German 12-month-olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to an immaturity of the N400 mechanism. The N400 was present in Dutch 9-month-olds, even though some parents judged their child not to understand most of the words. There was no interaction with training type, suggesting that type/token ratio does not affect infants' word recognition of novel exemplars.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability and later language development: Insights from ERPs. Talk presented at Child Language Seminar 2010. London. 2010-06-24 - 2010-06-26.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability is related to later word processing skill. Poster presented at XVIIIth Biennial International Conference on Infant Studies, Baltimore, MD.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2010). From reference to sense: An fMRI adaptation study on semantic encoding in language production. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Speaking is a complex, multilevel process, in which the first step is to compute the message that can be syntactically and phonologically encoded. Computing the message requires constructing a mental representation of what we want to express (the reference). This reference is then mapped onto linguistic concepts stored in memory, by which the meaning of the utterance (the sense) is constructed. We used fMRI adaptation to investigate brain areas sensitive to reference and sense in overt speech. By independently manipulating repetition of reference and sense across subsequently produced sentences in a picture description task, we distinguished sets of regions sensitive to these two steps in speaking. Encoding reference involved the bilateral inferior parietal lobes (BA 39) and right inferior frontal gyrus (BA 45), suggesting a role in constructing a non-linguistic mental representation. Left middle frontal gyrus (BA 6), bilateral superior parietal lobes and bilateral posterior temporal gyri (BA 37) were sensitive to both sense and reference processing. These regions thus seem to support semantic encoding, the process of mapping reference onto sense. Left inferior frontal gyrus (BA 45), left middle frontal gyrus (BA 44) and left angular gyrus (BA 39) showed adaptation to sense, and therefore appear sensitive to the output of semantic encoding. These results reveal the neural architecture for the first steps in producing an utterance. In addition, they show the feasibility of studying overt speech at a detailed level of analysis in fMRI studies.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2010). From reference to sense: An fMRI adaptation study on semantic encoding in language production. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Speaking is a complex, multilevel process, in which the first step is to compute the message that can be syntactically and phonologically encoded. Computing the message requires constructing a mental representation of what we want to express (the reference). This referent is mapped onto linguistic concepts stored in memory, by which the meaning of the utterance (the sense) is constructed. So far, one study targeted semantic encoding in sentence production (Menenti, Segaert & Hagoort, submitted) and none dissected this process further. We used fMRI adaptation to investigate brain areas sensitive to reference and sense in overt speech. fMRI adaptation is a phenomenon whereby repeating a stimulus property changes the BOLD-response in regions sensitive to that property. By independently manipulating repetition of reference and sense across subsequently produced sentences in a picture description task we distinguished sets of areas sensitive to these steps in semantic encoding in speaking. Methods: In a picture description paradigm, the described situation (the reference) and the linguistic semantic structure (the sense) of subsequently produced sentences were independently repeated across trials. Participants described pictures depicting events involving transitive verbs such as hit, kiss, greet, and two actors colored in different colors with sentences such as ‘The red man greets the green woman’. In our factorial design, the same situation involving the same actors could subsequently be described by two different sentences (repeated reference, novel sense) or the same sentence could subsequently be used to describe two different situations (novel reference, repeated sense). For reference, we controlled for the repetition of actors. For sense, we controlled for the repetition of individual words. See figure 1 for design and stimuli. 
To correct for increased movement and susceptibility artifacts due to speech, we scanned using 3T-fMRI parallel-acquired inhomogeneity-desensitized fMRI (Poser, Versluis, Hoogduin et al. 2006). Five images were acquired per TR and combined based on local T2* (Buur, Poser and Norris 2009). Results: The behavioral data (response onset, response duration and total time to complete the responses) showed effects of both sense and reference. In the fMRI analyses we looked for areas sensitive to only sense, only reference, or showing a conjunction of both factors. Encoding reference involved the bilateral inferior parietal lobes (BA 39), which showed repetition suppression, and right inferior frontal gyrus (BA 45), which showed repetition enhancement. Left inferior frontal gyrus (BA 45) showed suppression to repetition of sense, while left middle frontal gyrus (BA 44) and left angular gyrus (BA 39) showed enhancement. Left middle frontal gyrus (BA 6), bilateral superior parietal lobes and bilateral posterior temporal gyri (BA 37) showed repetition suppression to both sense and reference processing (conjunction analysis with conjunction null). See figure 2 for the results (p<.05 FWE corrected for multiple comparisons at cluster-level, maps thresholded at p<.001 uncorrected at voxel-level). Conclusions: The input to semantic encoding is the construction of a referent, a mental representation that the utterance is about. The bilateral temporo-parietal junctions are involved in this process, as they show sensitivity to repetition of reference but not sense. The right inferior frontal gyrus shows enhancement and may therefore be involved in constructing a more comprehensive model spanning several utterances. Semantic encoding itself requires mapping of the reference onto the sense. This involves large parts of the language network: bilateral posterior temporal lobes and upper left inferior frontal gyrus were sensitive to both reference and sense. Finally, sense recruits left inferior frontal gyrus (BA 45). 
This area is sensitive to syntactic encoding (Bookheimer 2002), the next step in speaking. These results reveal the neural architecture for the first steps in producing an utterance. In addition, they show the feasibility of studying overt speech at a detailed level of analysis in fMRI studies. References: Bookheimer, S. (2002), 'Functional MRI of language: new approaches to understanding the cortical organization of semantic processing', Annual Review of Neuroscience, vol. 25, pp. 151-188. Buur, P. (2009), 'A dual echo approach to removing motion artefacts in fMRI time series', NMR in Biomedicine, vol. 22, no. 5, pp. 551-560. Menenti, L. (submitted), 'The neuronal infrastructure of speaking'. Poser, B. (2006), 'BOLD contrast sensitivity enhancement and artifact reduction with multiecho EPI: parallel-acquired inhomogeneity desensitized fMRI', Magnetic Resonance in Medicine, vol. 55, pp. 1227-1235.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: Identification of the neural signature of a concept is a key challenge in cognitive neuroscience. In recent years, a number of studies have demonstrated the possibility to decode conceptual information from spatial patterns in functional MRI data (Hauk et al., 2008; Shinkareva et al., 2008). An important unresolved question is whether similar decoding performance can be attained using electrophysiological measurements. The development of EEG-based concept decoding algorithms is interesting from an applications perspective, because the high temporal resolution of the EEG allows pattern recognition in real-time. In this study we investigate the possibility to identify conceptual representations from event-related EEG on the basis of the presentation of an object in three different modalities: an object's written name, its spoken name and its line drawing. Methods: Twenty-four native Dutch speakers participated in the study. They were presented with concepts from three semantic categories: two relevant categories (animals, tools) and a task category. There were four concepts per category; all concepts were presented in three modalities: auditory, visual (line drawings) and textual (written Dutch words). Each item was repeated 80 times (relevant) or 16 times (task) in each modality. The text and picture stimuli were presented for 300 ms. The interval between stimuli had a random duration between 1000-1200 ms. Participants were instructed to respond upon appearance of items from the task category. Continuous EEG was registered using a 64-channel system. The data were divided into epochs of one second starting 300 ms before stimulus onset. We used the time domain representation of the signal as input to the classifier (linear support vector machine; Vapnik, 2000). The classifier was trained to identify which of two semantic categories (animal or tool) was presented to the subject. 
Performance of the classifier was computed as the proportion of correctly classified trials. Significance of the classification outcome was computed using a binomial test (Burges, 1998). In the first analysis we classified the semantic category of stimuli from the entire dataset, with trials of all modalities equally presented. In the second analysis we classified trials within each modality separately. In the third analysis we compared classification performance for the real categories with the classification performance for pseudo-categories, to investigate the role of perceptual features of presented objects without transparent contribution of conceptual information. The pseudo-categories were composed by arranging all the concepts into classes randomly, in such a way that each class contained exemplars of both categories. Results: In the first analysis we assessed the ability to discriminate patterns of EEG signals referring to the representation of animals versus tools across the three tested modalities. Significant accuracy was achieved for nineteen out of twenty subjects. The highest classification accuracy achieved across modalities was 0.69, with a mean value of 0.61 over all 20 subjects. To check whether the performance of the classifier was consistent during the experimental session, we visualized the correctness of the classifier's decisions over the time-course of the session. Fig. 1 shows that the classifier identified the trials corresponding to the picture blocks more accurately than the trials of the text and audio blocks. To further assess the modality-specific classification performance, we trained and tested the classifiers within each of the individual modalities separately (Fig. 2). For pictures, the highest classification accuracy reached over all subjects was 0.92, and classification was significant (p<0.001) for all 20 subjects with a mean value of 0.80. 
The classifier for the auditory modality performed significantly better than chance (p<0.001 and p<0.01) in 15 out of 20 subjects, with a mean value of 0.60. The classifier for the orthographic modality performed significantly better than chance in 5 out of 20 subjects, with a mean value of 0.56. Comparison of the classification performance for real and pseudo-categories revealed a high impact of conceptually driven activity on the classifier's performance (Fig. 3). Mean accuracies of pseudo-category classification over all subjects were 0.56 for pictures, 0.56 for audio, and 0.55 for text. Significant (p<0.005) differences from the real-category results were found for all pseudo-categories in the picture modality, for eight out of ten pseudo-categories in the auditory modality, and for one out of ten pseudo-categories in the orthographic modality. Conclusions: The results show that stable neural patterns induced by the presentation of stimuli of different categories can be identified from EEG. High classification performances were achieved for all subjects. The visual modality appeared to be much easier to classify than the other modalities. This indicates the existence of category-specific patterns in visual recognition of objects (Kiefer, 2001; Liu et al., 2009). Currently we are working towards interpreting the patterns found during classification using Bayesian logistic regression. A considerable reduction of performance was found when using pseudo-categories instead of the real categories. This indicates that the classifier has identified neural activity at the level of conceptual representations. Our results could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept encoding in the context of brain-computer interface applications. References: Burges, C. 
(1998), 'A tutorial on support vector machines for pattern recognition', Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167. Hauk, O. (2008), 'Imagery or meaning? Evidence for a semantic origin of category-specific brain activity in metabolic imaging', European Journal of Neuroscience, vol. 27, no. 7, pp. 1856-66. Kiefer, M. (2001), 'Perceptual and semantic sources of category-specific effects: Event-related potentials during picture and word categorization', Memory and Cognition, vol. 29, no. 1, pp. 100-16. Liu, H. (2009), 'Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex', Neuron, vol. 62, no. 2, pp. 281-90. Shinkareva, S. (2008), 'Using fMRI brain activation to identify cognitive states associated with perception of tools and dwellings', PLoS One, vol. 3, no. 1, pp. e1394.
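    The classify-then-test logic described in this abstract (accuracy as proportion of correctly classified trials, significance via a binomial test against chance) can be sketched as follows. This is an illustrative stand-in, not the study's pipeline: the data are synthetic, the feature dimensions and effect sizes are invented, and a simple mean-difference linear discriminant replaces the linear support vector machine used in the study.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

# Synthetic stand-in for flattened time-domain EEG features: two
# categories differ by a small mean shift on a subset of features.
n_per_class, n_feat = 80, 100
shift = np.zeros(n_feat)
shift[:20] = 0.5
X = np.vstack([rng.normal(0, 1, (n_per_class, n_feat)) + shift,
               rng.normal(0, 1, (n_per_class, n_feat)) - shift])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Random split into training and test trials
idx = rng.permutation(len(y))
train, test = idx[:120], idx[120:]

# Linear discriminant: project onto the difference of the class means
# (a simple substitute for the linear SVM)
w = X[train][y[train] == 0].mean(0) - X[train][y[train] == 1].mean(0)
proj = X[train] @ w
b = (proj[y[train] == 0].mean() + proj[y[train] == 1].mean()) / 2
pred = (X[test] @ w < b).astype(int)
acc = float((pred == y[test]).mean())

# Exact one-sided binomial test of accuracy against chance (p = 0.5)
n, k = len(test), int((pred == y[test]).sum())
p_val = sum(comb(n, i) for i in range(k, n + 1)) / 2.0 ** n

print(f"accuracy = {acc:.2f}, p = {p_val:.2g}")
```

    The pseudo-category control in the abstract corresponds to rerunning the same procedure after randomly reassigning class labels so that each class mixes both categories; decoding accuracy should then drop toward chance.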
  • van Leeuwen, T. M., Den Ouden, H. E., & Hagoort, P. (2010). Bottom-up versus top-down: Effective connectivity reflects individual differences in grapheme-color synesthesia. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    In grapheme-color synesthesia, letters elicit a color. Neural theories propose that synesthesia is due to changes in connectivity between sensory areas. However, no studies on functional connectivity in synesthesia have been published to date. Here, we applied psycho-physiological interactions (PPI) and dynamic causal modeling (DCM) in fMRI to assess connectivity patterns in synesthesia. We tested whether synesthesia is mediated by bottom-up, feedforward connections from grapheme areas directly to perceptual color area V4, or by top-down feedback connections from the parietal cortex to V4. We took individual differences between synesthetes into account: 'projector' synesthetes experience their synesthetic color in a spatial location, while 'associators' only have a strong association of the color with the grapheme. We included 19 grapheme-color synesthetes (14 projectors, 5 associators) and located group effects of synesthesia in left superior parietal lobule (SPL) and right color area V4. With PPI, taking SPL as a seed region, we found an increase in functional coupling with visual areas (including V4) for the synesthesia condition. With PPI, however, we cannot determine the direction of this functional coupling. Based on the GLM results, we specified 2 DCMs to test whether a bottom-up or a top-down model would provide a better explanation for synesthetic experiences. Bayesian Model Selection showed that overall, neither model was much more likely than the other (exceedance probability of 0.589). However, when the models were divided according to projector or associator group, BMS showed that the bottom-up, feedforward model had an exceedance probability of 0.98 for the projectors: it was strongly preferred for this group. The top-down, feedback model was preferred for the associator group (exceedance probability = 0.96). To our knowledge, we are the first to report empirical evidence of changes in functional and effective connectivity in synesthesia. Whether bottom-up or top-down mechanisms underlie synesthetic experiences has been a long-standing debate; that different connectivity patterns can explain differential experiences of synesthesia may greatly improve our insight into the neural mechanisms of the phenomenon.
  • Van den Brink, D., Van Berkum, J. J. A., Buitelaar, J., & Hagoort, P. (2010). Empathy matters for social language processing: ERP evidence from individuals with and without autism spectrum disorder. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: When a 6-year-old girl claims that she cannot sleep without her teddy bear, hardly anybody will look surprised. However, when an adult man says the same thing, this is bound to raise some eyebrows. Besides linguistic content, the voice also carries information about a person's identity relevant for communication, such as idiosyncratic features related to the gender and age of the speaker (Campanella 2007). A previous ERP study investigated inter-individual differences in the cognitive processes that mediate the integration of social information in a linguistic context (Van den Brink submitted). Individuals with an empathizing-driven cognitive style showed larger ERP effects to mismatching information about the speaker than individuals who empathize to a lesser degree. The present ERP study tested individuals with Autism Spectrum Disorder (ASD) to investigate verbal social information processing in a clinical population that is impaired in social interaction. Methods: Participants. The ERP experiment was conducted with 20 Dutch adult males clinically diagnosed with ASD (verbal IQ > 100), 22 healthy men and 12 healthy women. Materials. Experimental materials consisted of 160 Dutch sentences with a lexical content that either did or did not fit probabilistic inferences about the speaker's sex, age, and socio-economic status, as could be inferred from the speaker's voice. Translated examples of speaker identity (SI) incongruent utterances are "Before I leave I always check whether my make-up is still in place", in a male voice, "Every evening I drink some wine before I go to sleep" in a young child's voice, and "I have a large tattoo on my back" spoken in an 'upper-class' accent. In addition, participants heard 48 sentences containing classic lexical semantic (LS) anomalies, which are pure linguistic violations known to elicit an N400, matched with semantically congruent sentences (e.g., "You wash your hands with horse and water" vs. 
"You wash your hands with soap and water"). Procedure. Participants listened to 352 sentences, spoken by 21 different people. They were asked to indicate after each sentence how odd they thought the sentence was, using a 5-point scale ranging from "perfectly normal" to "extremely odd". Participants filled out Dutch translations of the Autism Quotient and Empathy Quotient questionnaires (AQ: Baron-Cohen 2001; EQ: Baron-Cohen 2004). EEG recording. EEG was recorded from 28 electrodes referenced to the left mastoid. Electrode impedances were below 5 kOhm. Signals were recorded using a 200 Hz low-pass filter, a time constant of 10 s, and a 500 Hz sampling frequency. After off-line re-referencing of the EEG signals to the mean of the left and right mastoids, they were filtered with a 30 Hz low-pass filter. Segments ranging from 200 ms before to 1500 ms after the acoustic onset of the critical word were baseline-corrected. Segments containing artifacts were rejected (12.7%). Results: Behavioral results. EQ scores differed significantly between groups (p < .001), with average scores of 22.1 for ASD, 40.6 for men, and 52.1 for women. Statistical analysis of the rating data (see Figure 1) consisted of ANOVAs with the within-subject factors Manipulation (LS, SI) and Congruity (congruent, incongruent), and the between-subject factor Group (ASD, men, women). A significant interaction between Manipulation and Group (p < .01) indicated that the participant groups rated the items differently. For the LS items, a main effect of Congruity (p < .001), but no interaction of Congruity by Group (F < 1), was obtained. For the SI items, a main effect of Congruity (p < .001), as well as an interaction of Congruity by Group (p < .01), was found. The ASD group rated the SI violations as less odd than the male and female participant groups (2.9 versus 3.4 and 3.7, respectively). 
In addition, significant positive correlations with EQ score were found for both the SI effect size (see Figure 2) and the ratings of SI violations (both p < .01). ERP results. Figure 3 displays the ERP waveforms for the three participant groups. Mean amplitude values in the N400 and Late Positive Component latency ranges (300-600 and 700-1000 ms) from 7 centro-parietal electrodes did not reveal a Congruity by Group interaction. However, a significant correlation was found between the size of the SI effect in the N400 latency window and EQ score (p < .01), with individuals who scored high on the EQ showing a larger positive effect. Participants were subdivided into three groups based on EQ score: low empathizers (M = 20; 16 ASD, 2 men), medium empathizers (M = 37; 4 ASD, 12 men, 2 women), and high empathizers (M = 53; 8 men, 10 women). See Figure 4 for the SI difference waveforms for the three EQ groups. Individuals who empathize to a larger degree showed an earlier and significantly larger positive effect (p < .05), related to decision making, than low empathizers (i.e., mostly individuals with ASD). Conclusions: Our results clearly show that empathy matters for verbal social information processing, but not for lexical semantic processing. Behavioral results reveal that individuals who scored low on the EQ had more difficulty detecting violations of speaker and message. At the neuronal level, individuals who empathize to a lesser degree showed a delayed onset of, as well as a smaller, positive ERP effect, which has been related to decision-making processes (Nieuwenhuis 2005). We conclude that high-functioning individuals with ASD, who demonstrate low empathizing abilities, do not experience problems in pure linguistic processing, as indexed by the behavioral and electrophysiological results for the lexical semantic manipulation. 
However, differences in onset latency, as well as size of the late positive effect in the speaker identity manipulation, suggest that they do have difficulties with assigning value to social information in language processing. References: Baron-Cohen, S. (2001), 'The Autism spectrum Quotient (AQ): Evidence from Asperger Syndrome/High Functioning Autism, males and females, scientists and mathematicians', Journal of Autism and Developmental Disorders, vol. 31, pp. 5-17. Baron-Cohen, S. (2004), 'The Empathy Quotient: An investigation of adults with Asperger Syndrome or High Functioning Autism, and normal sex differences', Journal of Autism and Developmental Disorders, vol. 34, pp. 163-175. Campanella, S. (2007), 'Integrating face and voice in person perception', Trends in Cognitive Sciences, vol. 11, no. 12, pp. 535-543. Nieuwenhuis, S. (2005), 'Decision making, the P3, and the locus coeruleus-norepinephrine system', Psychological Bulletin, vol. 131, no. 4, pp. 510-532. Van den Brink, D. (submitted), 'Empathy matters: ERP evidence for inter-individual differences in social language processing'.
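The EEG pipeline described in the Methods (off-line re-referencing to the mastoid mean, baseline correction of epoched segments, and rejection of artifact-contaminated segments) can be sketched on a toy epochs array. The function, channel indices, and rejection threshold below are illustrative assumptions, not the study's actual processing code:

```python
import numpy as np

def preprocess_epochs(epochs, times, mastoids=(0, 1), baseline=(-0.2, 0.0),
                      reject_uv=100.0):
    """Re-reference epochs to the mean of two mastoid channels, subtract the
    pre-stimulus baseline, and drop epochs exceeding an amplitude threshold.

    epochs : (n_trials, n_channels, n_samples) array in microvolts
    times  : (n_samples,) array in seconds, 0 = critical-word onset
    """
    # Re-reference to the mean of the left and right mastoid channels.
    ref = epochs[:, list(mastoids), :].mean(axis=1, keepdims=True)
    epochs = epochs - ref
    # Subtract the mean of the pre-stimulus baseline window per channel.
    bmask = (times >= baseline[0]) & (times < baseline[1])
    epochs = epochs - epochs[:, :, bmask].mean(axis=2, keepdims=True)
    # Reject epochs whose peak absolute amplitude exceeds the threshold.
    keep = np.abs(epochs).max(axis=(1, 2)) < reject_uv
    return epochs[keep], keep
```

In practice, each rejection step would be tuned per dataset; the abstract reports that 12.7% of segments were excluded by this kind of artifact screening.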
  • Van den Brink, D., Van Berkum, J. J. A., Buitelaar, J., & Hagoort, P. (2010). Empathy matters for social language processing: ERP evidence from individuals with and without autism spectrum disorder. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    When a young girl claims that she cannot sleep without her teddy bear, hardly anybody will look surprised. However, when an adult man says the same thing, this is bound to raise some eyebrows. A previous ERP study revealed that individual differences in empathizing affect the integration of this type of extra-linguistic, social, information in a linguistic context. The present ERP study tested individuals with autism spectrum disorder (ASD) to investigate verbal social information processing in a clinical population that is impaired in social interaction. Twenty adult males diagnosed with ASD (verbal IQ > 100), 22 healthy men and 12 healthy women participated. Experimental materials consisted of sentences with a lexical content that either did or did not fit probabilistic inferences about the speaker's sex, age, and socio-economic status, as could be inferred from the speaker's voice. Examples of speaker identity incongruent utterances are "Before I leave I always check whether my make-up is still in place", in a male voice, "Every evening I drink some wine before I go to sleep" in a young child's voice, and "I have a large tattoo on my back" spoken in an "upper-class" accent. In addition, we included a pure linguistic, lexical semantic manipulation (e.g., "You wash your hands with soap/horse and water"). Participants indicated after each spoken sentence, using a five-point scale, how odd they thought the sentence was, while their EEG was recorded. They also filled out a questionnaire on their empathizing ability. Our results reveal that empathy matters for verbal social information processing, but not for lexical semantic processing. Behavioral results show that individuals who scored low on empathizing ability had more difficulty detecting violations of speaker and message. 
At the neuronal level, individuals who empathize to a lesser degree showed a delayed onset of, as well as a smaller, positive ERP effect, which can be related to decision-making processes. We conclude that high-functioning individuals with ASD, who demonstrate low empathizing abilities, do not experience problems in pure linguistic processing, but that they do have difficulties with assigning value to social information in language processing.
  • Wang, L., Bastiaansen, M. C. M., Jensen, O., Hagoort, P., & Yang, Y. (2010). Beta oscillation relates with the Event Related Field during language processing. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction: MEG has the advantage of both high temporal and high spatial resolution in measuring neural activity. The event-related field (ERF) has been extensively explored in psycholinguistic research. For example, the N400m was found to be sensitive to semantic violations (Helenius, 2002). On the other hand, induced oscillatory responses of the EEG and MEG during language comprehension are less commonly investigated. Oscillatory dynamics have been shown to also contain relevant information, which can be measured, amongst others, by time-frequency (TF) analyses of power and/or coherence changes (Bastiaansen & Hagoort, 2006; Weiss et al., 2003). In the present study we explicitly investigate whether there is a (signal-analytic) relationship between MEG oscillatory dynamics (notably power changes) and the N400m. Methods: There were two types of auditory sentences, in which the last words were either semantically congruent (C) or incongruent (IC) with respect to the sentence context. MEG signals were recorded with a 151-sensor CTF Omega System, and MRIs were obtained with a 1.5 T Siemens system. We segmented the MEG data into trials starting 1 s before and ending 2 s after the onset of the critical words. The ERFs were calculated by averaging over trials, separately for the two conditions. The time-frequency representations (TFRs) of the single trials were calculated using a wavelet technique, after which the TFRs were averaged over trials for both conditions. A cluster-based random permutation test (Maris & Oostenveld, 2007) was used to assess the significance of the difference between the two conditions, both for the ERFs and the TFRs. In order to characterize the relationship between beta power (see Results) and the N400m, we performed a linear regression analysis between beta power and N400m for the sensors that showed significant differences in ERFs or TFRs between the two conditions. 
Finally, a beamforming approach [Dynamic Imaging of Coherent Sources (DICS)] was applied to identify the sources of the beta power changes. Results: The ERF analysis showed that approximately between 200 ms and 700 ms after the onset of the critical words, the IC condition elicited larger amplitudes than the C condition over bilateral temporal areas, with a clear left-hemisphere preponderance (Fig. 1A). Statistical analysis revealed significant differences over the left temporal area (Fig. 1B). In a similar time window (200-700 ms), a beta power suppression (16-19 Hz) was found only for the IC condition, but not for the C condition (Fig. 2A). The statistical analysis of the beta power difference between the two conditions revealed a significantly lower beta power for the IC than the C condition over left temporal cortex (Fig. 2B). The comparable topographies of the N400m and beta differences suggest a relationship between these two effects. In order to evaluate this relationship, we performed a linear regression between beta power and N400m for both the IC and C conditions, in both the post-stimulus time window (200-700 ms) and the pre-stimulus time window (-600 to -200 ms). In the time window of 200-700 ms, we found a positive linear regression between beta power and N400m for the IC condition (R = .32, p = .03) but not for the C condition (p = .83). For the IC condition, we found that the lower the beta power, the lower the N400m amplitude. In the time window of -600 to -200 ms, the C condition showed a positive linear regression between beta power and N400m (R = .27, p = .06), but the IC condition did not (p = .74). The source modeling analysis allowed us to estimate the generators of the beta suppression for the IC relative to the C condition. The source of the beta suppression (around 18 Hz) within 200-700 ms was identified in the left inferior frontal gyrus (LIFG, BA 47) (Fig. 3). 
Conclusions: The ERF difference between the two conditions is consistent with previous MEG studies. However, this is the first time that the beta power suppression has been related to the amplitude of the N400m. When the input is highly predictable (C condition), a lower beta power in the pre-stimulus interval predicts a better performance (smaller N400m), while the low predictability (IC condition) of the input produced an association between the N400m and the beta power in the post-stimulus interval. Moreover, the generator of the beta suppression was identified in the LIFG, which has been related to semantic unification (Hagoort, 2005). Together with other studies on the role of beta oscillations across a range of cognitive functions (Pfurtscheller, 1996; Weiss, 2005; Hirata, 2007; Bastiaansen, 2009), we propose that beta oscillations generally reflect the engagement of brain networks: a lower beta power indicates a higher engagement for information processing. References: Bastiaansen, M. (2009), 'Oscillatory brain dynamics during language comprehension', Event-Related Dynamics of Brain Oscillations, vol. 159, pp. 182-196. Bastiaansen, M. (2009), 'Syntactic Unification Operations Are Reflected in Oscillatory Dynamics during On-line Sentence Comprehension', Journal of Cognitive Neuroscience, doi: 10.1162/jocn.2009.21283, pp. 1-15. Hagoort, P. (2005), 'On Broca, brain, and binding: a new framework', Trends in Cognitive Sciences, vol. 9, no. 9, pp. 416-423. Helenius, P. (2002), 'Abnormal auditory cortical activation in dyslexia 100 msec after speech onset', Journal of Cognitive Neuroscience, vol. 14, pp. 603-617. Hirata, M. (2007), 'Effects of the emotional connotations in words on the frontal areas — a spatially filtered MEG study', NeuroImage, vol. 35, pp. 420-429. Maris, E. (2007), 'Nonparametric statistical testing of EEG- and MEG-data', Journal of Neuroscience Methods, vol. 164, no. 1, pp. 177-190. Pfurtscheller, G. 
(1996), 'Post-movement beta synchronization. A correlate of an idling motor area?', Electroencephalography and Clinical Neurophysiology, vol. 98, pp. 281–293. Weiss, S. (2003), 'The contribution of EEG coherence to the investigation of language', Brain and language, vol. 85, pp. 325-343. Weiss, S. (2005), 'Increased neuronal communication accompanying sentence comprehension', International Journal of Psychophysiology, vol. 57, pp. 129-141.
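The R values reported above are Pearson correlation coefficients from the sensor-wise regression of N400m amplitude on beta power. A minimal sketch on synthetic numbers (hypothetical values, not the study's data) shows the statistic behind the reported positive relationship:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-sensor measures for one condition: relative beta power
# change and N400m effect amplitude, with a built-in positive relationship.
beta_power = rng.normal(-1.0, 0.5, size=40)
n400m = 20.0 * beta_power + rng.normal(0.0, 5.0, size=40)

# Pearson correlation, the R reported for the regression analyses.
r = np.corrcoef(beta_power, n400m)[0, 1]
print(f"R = {r:.2f}")
# A positive R mirrors the reported pattern: the lower the beta power,
# the lower the N400m amplitude.
```

A significance test on such an R (the reported p values) would additionally use the t distribution with n - 2 degrees of freedom, or a permutation test as in the cluster-based statistics cited in the abstract.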
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2010). "Chomsky illusion"? ERP evidence for the influence of information structure on syntactic processing. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.
  • Wang, L., Bastiaansen, M. C. M., Jensen, O., Hagoort, P., & Yang, Y. (2010). Modulation of the beta rhythm during language comprehension. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Event-related potentials and fields have been extensively explored in psycholinguistic research. However, relevant information might also be contained in induced oscillatory brain responses. We used magnetoencephalography (MEG) to explore oscillatory responses elicited by semantically incongruent words in a classical sentence comprehension paradigm. Sentences in which the last word was either semantically congruent or incongruent with respect to the sentence context were presented auditorily. Consistent with previous studies, a stronger N400m component was observed over left temporal areas in response to incongruent compared to congruent sentence endings. At the same time, the analysis of oscillatory activity showed a larger beta power decrease (16-19 Hz) for the incongruent than the congruent condition in the N400m time window (200-700 ms), also over the left temporal area. The relationship between the beta decrease and the N400m was confirmed by a linear regression analysis. Moreover, using a beamforming approach we localized the sources of the beta decrease to the left prefrontal cortex (BA47). We propose that beta oscillations reflect the engagement of brain networks: a lower beta power indicates a higher engagement for information processing. When the input is highly predictable (congruent condition), a lower beta power in the pre-stimulus interval predicts a better performance (smaller N400m), while a low predictability (incongruent condition) of the input shows a relationship between the N400m and the beta power in the post-stimulus interval, which indicates the engagement of brain networks for integrating the unexpected information. This 'engagement' hypothesis is also compatible with reported beta effects in other cognitive domains.
  • Willems, R. M., De Boer, M., De Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    Although language is an effective means of communication, it is unclear how linguistic and communicative abilities relate to each other. Communicative message generation involves perspective taking, or mentalizing, and some researchers have argued that mentalizing depends on language. In this study, we directly tested the relationship between cerebral structures supporting communicative message generation and language abilities. Healthy participants were scanned with fMRI while they took part in a verbal communication paradigm in which we independently manipulated the communicative intent and linguistic difficulty of message generation. We found that the dorsomedial prefrontal cortex, a brain area consistently associated with mentalizing, was sensitive to the communicative intent of utterances, irrespective of linguistic difficulty. In contrast, the left inferior frontal cortex, an area known to be involved in language, was sensitive to the linguistic demands of utterances, but not to communicative intent. These findings indicate that communicative and linguistic abilities rely on different neuro-cognitive architectures. We suggest that the generation of utterances with communicative intent relies on our ability to deal with the mental states of other people ("mentalizing"), which seems distinct from language.
  • Zhu, Z., Wang, S., Hagoort, P., Feng, G., Chen, H.-C., & Bastiaansen, M. C. M. (2010). Inferior frontal gyrus is activated during sentence-level semantic unification in both explicit and implicit reading tasks. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.
  • Zhu, Z., Wang, S., Bastiaansen, M. C. M., Petersson, K. M., & Hagoort, P. (2010). Trial-by-trial coupling of concurrent EEG and fMRI identifies BOLD correlates of the N400. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
  • Zhu, Z., Wang, S., Bastiaansen, M. C. M., Petersson, K. M., & Hagoort, P. (2010). Trial-by-trial coupling of concurrent EEG and fMRI identifies BOLD correlates of the N400. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.
