Peter Hagoort

Presentations

  • Arana, S., Rommers, J., Hagoort, P., Snijders, T. M., & Kösem, A. (2016). The role of entrained oscillations during foreign language listening. Poster presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC), Nijmegen, The Netherlands.
  • Belavina Kuerten, A., Mota, M., Segaert, K., & Hagoort, P. (2016). Syntactic priming effects in dyslexic children: A study in Brazilian Portuguese. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Dyslexia is a learning disorder caused primarily by a phonological processing deficit. So far, few studies have examined whether the deficits in dyslexia extend to syntactic processing. We investigated how dyslexic children process syntactic structures. In a self-paced reading syntactic priming paradigm, the passive voice was repeated in mini-blocks of five sentences. These were mixed with an equal number of filler mini-blocks (actives, intransitives); the verb was repeated within all mini-blocks. The data of 20 dyslexic children (mean age = 12.8 years), native speakers of Brazilian Portuguese, were compared to those of 25 non-dyslexic children (mean age = 10.4 years). A repeated-measures ANOVA on reading times for the verb revealed a significant effect of sentence repetition (p<.001) and a group by sentence repetition interaction (p<.001). Dyslexics demonstrated priming effects between all consecutive passive voice repetitions (all p<.05), whereas reading times for controls differed only between the first and second passive (p<.001). For active sentences, dyslexics showed priming effects only between the first and second sentences (p<.05), while controls did not show any significant effect, suggesting that the effects for passives are not solely due to the verb being repeated, but at least in part due to the repeated syntactic structure. These findings thus reveal syntactic processing differences between dyslexic and non-dyslexic children.
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    In certain situations, human listeners have more difficulty in understanding speech in a multi-talker environment than in the presence of non-intelligible noise. The costs of speech-in-speech masking have been attributed to informational masking, i.e. to the competing processing of the target and the distractor speech’s information. It remains unclear what kind of information is competing, as intelligible speech and unintelligible speech-like signals (e.g. reversed, noise-vocoded, and foreign speech) differ both in linguistic content and in acoustic information. Thus, intelligible speech could be a stronger distractor than unintelligible speech either because it is acoustically closer to the target speech or because it carries competing linguistic information. In this study, we sought to isolate the linguistic component of speech-in-speech masking and to test its influence on the comprehension of target speech. To do so, 24 participants performed a dichotic listening task in which the interfering stimuli consisted of 4-band noise-vocoded sentences that could become intelligible through training. The experiment included three steps: first, the participants were instructed to report the clear target speech from a mixture of one clear speech channel and one unintelligible noise-vocoded speech channel; second, they were trained on the interfering noise-vocoded sentences so that these became intelligible; third, they performed the dichotic listening task again. Crucially, before and after training, the distractor speech had the same acoustic features but not the same linguistic information. We thus predicted that the distracting noise-vocoded signal would interfere more with target speech comprehension after training than before training. To control for practice/fatigue effects, we used additional 2-band noise-vocoded sentences, on which participants were not trained, as interfering signals in the dichotic listening tasks.
We expected that performance on these trials would not change after training, or would change less than that on trials with trained 4-band noise-vocoded sentences. Performance was measured under three SNR conditions: 0, -3, and -6 dB. The behavioral results are consistent with our predictions. The 4-band noise-vocoded signal interfered more with the comprehension of target speech after training (i.e. when it was intelligible) compared to before training (i.e. when it was unintelligible), but only at SNR -3dB. Crucially, the comprehension of the target speech did not change after training when the interfering signals consisted of unintelligible 2-band noise-vocoded speech sounds, ruling out a fatigue effect. In line with previous studies, the present results show that intelligible distractors interfere more with the processing of target speech. These findings further suggest that speech-in-speech interference originates, to a certain extent, from the parallel processing of competing linguistic content. A magnetoencephalography study with the same design is currently being performed, to specifically investigate the neural origins of informational masking.
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the 8th Speech in Noise Workshop (SpiN), Groningen, The Netherlands.
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2016). A spiking recurrent network for semantic processing. Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Integrating sentence meaning over time requires memory ranging from milliseconds (words) to seconds (sentences) and minutes (discourse). How do transient events like action potentials in the human language system support memory at these different temporal scales? Here we investigate the nature of processing memory in a neurobiologically motivated model of sentence comprehension. The model was a recurrent, sparsely connected network of spiking neurons. Synaptic weights were created randomly and there was no adaptation or learning. As input the network received word sequences generated from construction grammar templates and their syntactic alternations (e.g., active/passive transitives, transfer datives, caused motion). The language environment had various features such as tense, aspect, noun/verb number agreement, and pronouns, which created positional variation in the input. Similar to natural speech, word durations varied between 50 ms and 0.5 s of real, physical time depending on their length. The model's task was to incrementally interpret these word sequences in terms of semantic roles. There were 8 target roles (e.g., Agent, Patient, Recipient) and the language generated roughly 1.2 million distinct utterances, from which a sequence of 10,000 words was randomly selected and filtered through the network. A set of readout neurons was then calibrated by means of logistic regression to decode the internal network dynamics onto the target semantic roles. In order to accomplish the role assignment task, network states had to encode and maintain past information from multiple cues that could occur several words apart. To probe the circuit's memory capacity, we compared models in which network connectivity, the shape of synaptic currents, and properties of neuronal adaptation were systematically manipulated.
We found that task-relevant memory could be derived from a mechanism of neuronal spike-rate adaptation, modelled as a conductance that hyperpolarized the membrane following a spike and relaxed to baseline exponentially with a fixed time-constant. By acting directly on the membrane potential it provided processing memory that allowed the system to successfully interpret its sentence input. Near optimal performance was also observed when an exponential decay model of post-synaptic currents was added into the circuit, with time-constants approximating excitatory NMDA and inhibitory GABA-B receptor dynamics. Thus, the information flow was extended over time, creating memory characteristics comparable to spike-rate adaptation. Recurrent connectivity, in contrast, only played a limited role in maintaining information; an acyclic version of the recurrent circuit achieved similar accuracy. This indicates that random recurrent connectivity at the modelled spatial scale did not contribute additional processing memory to the task. Taken together, these results suggest that memory for language might be provided by activity-silent dynamic processes rather than the active replay of past input as in storage-and-retrieval models of working memory. Furthermore, memory in biological networks can take multiple forms on a continuum of time-scales. Therefore, the development of neurobiologically realistic, causal models will be critical for our understanding of the role of memory in language processing.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • Franken, M. K., Schoffelen, J.-M., McQueen, J. M., Acheson, D. J., Hagoort, P., & Eisner, F. (2016). Neural correlates of auditory feedback processing during speech production. Poster presented at New Sounds 2016: 8th International Conference on Second-Language Speech, Aarhus, Denmark.

    Abstract

    An important aspect of L2 speech learning is the interaction between speech production and perception. One way to study this interaction is to provide speakers with altered auditory feedback to investigate how unexpected auditory feedback affects subsequent speech production. Although it is generally well established that speakers on average compensate for auditory feedback perturbations, even when unaware of the manipulation, the neural correlates of responses to perturbed auditory feedback are not well understood. In the present study, we provided speakers with auditory feedback that was intermittently pitch-shifted, while we measured the speaker’s neural activity using magneto-encephalography (MEG). Participants were instructed to vocalize the Dutch vowel /e/ while they tried to match the pitch of a short tone. During vocalization, participants received auditory feedback through headphones. In half of the trials, the pitch in the feedback signal was shifted by -25 cents, starting at a jittered delay after speech onset and lasting for 500ms. Trials with perturbed feedback and control trials (with normal feedback) were in random order. Post-experiment questionnaires showed that none of the participants was aware of the pitch manipulation. Behaviorally, the results show that participants on average compensated for the auditory feedback by shifting the pitch of their speech in the opposite (upward) direction. This suggests that even though participants were not aware of the pitch shift, they automatically compensate for the unexpected feedback signal. The MEG results show a right-lateralized response to both onset and offset of the pitch perturbation during speaking. We suggest this response relates to detection of the mismatch between the predicted and perceived feedback signals, which could subsequently drive behavioral adjustments. 
These results are in line with recent models of speech motor control and provide further insights into the neural correlates of speech production and speech feedback processing.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above the participants’ baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time-points in the control trials. A cluster-based permutation test showed that the event-related field responses differed between the perturbation and the control condition.
This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input but instead reflect speech production processes. We suggest that the observed ERF responses in sensorimotor cortex index the mismatch between the self-generated forward-model prediction of auditory input and the incoming auditory signal.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above the participants’ baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time-points in the control trials. A cluster-based permutation test showed that the event-related field responses differed between the perturbation and the control condition.
This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input but instead reflect speech production processes. We suggest that the observed ERF responses in sensorimotor cortex index the mismatch between the self-generated forward-model prediction of auditory input and the incoming auditory signal.
  • Hagoort, P. (2016). Beyond the core networks of language. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.

    Abstract

    Speakers and listeners do more than exchange propositional content. They try to get things done with their utterances. For speakers this requires planning utterances with knowledge about the listener in mind, whereas listeners need to make inferences that go beyond simulating sensorimotor aspects of propositional content. For example, the statement "It is hot in here" will usually not be answered with a statement of the kind "Yes, indeed it is 32 degrees Celsius", but rather with the answer "I will open the window", since the listener infers the speaker's intention behind her statement. I will discuss a series of studies that identify the network of brain regions involved in audience design and in inferring speaker meaning. The same holds for indirect replies that require conversational implicatures, as in A: "Did you like my talk?" to which B replies: "It is hard to give a good presentation." I will show that in these cases the core language network needs to be extended with brain systems providing the necessary inferential machinery.
  • Hagoort, P. (2016). De magie van het talige brein. Talk presented at the Akademie van Kunsten. Amsterdam, The Netherlands. 2016-01.
  • Hagoort, P. (2016). Dutch science on the move. Talk presented at the Donders Institute for Brain, Cognition and Behaviour. Nijmegen, The Netherlands. 2016-06.
  • Hagoort, P. (2016). Cognitive enhancement: A few observations and remarks. Talk presented at the LUX. Nijmegen, The Netherlands. 2016-02.
  • Hagoort, P. (2016). Language from an embrained perspective: It is hard to give a good presentation. Talk presented at the FENS-Hertie Winter School on Neurobiology of language and communication. Obergurgl, Austria. 2016-01-03 - 2016-01-08.
  • Hagoort, P. (2016). Healthy Brain. Talk presented at the Meeting Ministry of Economic Affairs. Papendal, The Netherlands. 2016-09.
  • Hagoort, P. (2016). Healthy brain initiative. Talk presented at the Radboud University. Nijmegen, the Netherlands. 2016-06.
  • Hagoort, P. (2016). Het talige brein. Talk presented at Dyslexie Nederland. Amsterdam, The Netherlands. 2016-11-12.
  • Hagoort, P. (2016). Het talige brein. Talk presented at the Studiedag Regionaal Instituut Dyslexie (RID). Arnhem, the Netherlands. 2016-11-19.
  • Hagoort, P. (2016). Neuroanatomy of language [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Hagoort, P. (2016). The neurobiology of morphological processing. Talk presented at the MPI Workshop Morphology in the Parallel Architecture. Nijmegen, The Netherlands. 2016-03-18.
  • Hagoort, P. (2016). Wetenschap is emotie. Talk presented at the opening InScience Filmfestival. Nijmegen, The Netherlands. 2016-11-02.
  • Hagoort, P. (2016). The toolkit of cognitive neuroscience. Talk presented at the FENS-Hertie Winter School on Neurobiology of language and communication. Obergurgl, Austria. 2016-01-03 - 2016-01-08.
  • Hagoort, P. (2016). Towards team science. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2016). How social opinion influences syntactic processing - an investigation using Virtual Reality. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Adapting your grammatical preferences to match those of your interlocutor, a phenomenon known as structural priming, can be influenced by the social opinion you have of your interlocutor. However, the direction and reliability of this effect is unclear, as different studies have reported seemingly contrary results. When investigating something as abstract as social opinion, there are numerous differences between the studies that could be causing the differing results. We operationalized social opinion as ratings of favorability for a wide range of different avatars in a virtual reality study. This way we can accurately determine how the strength of the structural priming effect changes with differing social opinions. Our results show an inverted U-shaped curve in passive structure repetition as a function of favorability: participants showed the largest priming effects for the avatar with average favorability ratings, with a decrease when interacting with the least- or most-favorable avatars. This result suggests that the relationship between social opinion and priming magnitude may not be a linear one, contrary to what the literature has been assuming. Instead, there is a 'happy medium' that evokes the largest priming effect, with decreased priming on either side of this ideal.
  • Heyselaar, E., Segaert, K., Walvoort, S., Kessels, R., & Hagoort, P. (2016). The role of procedural memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing 17 patients with Korsakoff's amnesia on an active-passive syntactic priming task and comparing their performance to that of controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's amnesia display deficits in all subdomains of declarative memory, yet their implicit learning remains intact, making them an ideal patient group for this study. In line with the hypothesis that syntactic priming relies on procedural memory, the patient group showed strong priming tendencies (12.6% passive structure repetition). Our control group did not show a priming tendency, presumably due to cognitive interference between declarative and non-declarative memory systems. To verify the absence of the effect in the controls, we ran an independent group of 54 participants on the same paradigm, who also showed no priming effect. The results are further discussed in relation to amnesia, aging, and compensatory mechanisms.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2016). The role of procedural memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances, is not as widely investigated. We aim to shed light on this issue by assessing 17 patients with Korsakoff’s amnesia on an active-passive syntactic priming task and compare their performance to controls matched in age, education and premorbid intelligence. Patients with Korsakoff's amnesia display deficits in all subdomains of declarative memory, yet their implicit learning remains intact, making them an ideal patient group to use in this study. We used the traffic-light design for the syntactic priming task: the actors in the prime trial photos were colour-coded and the participants were instructed to name the 'green' actor before the 'red' actor in the picture. This way we can control which syntactic structure the participant uses to describe the photo. For target trials, the photos were grey-scale so there was no bias towards one structure over another. This set-up allows us to ensure the primes are properly encoded. In addition to the priming task, we also measured declarative memory, implicit learning ability, and verbal IQ from all participants. Memory tests supported the claim that our 17 patients did have a severely impaired declarative memory system, yet a functional implicit/procedural one. The control group showed no deficit in any of the control measurements. In line with the hypothesis that syntactic priming relies on procedural memory, the patient group showed strong priming tendencies (12.6% passive structure repetition). Unexpectedly, our healthy control group did not show a priming tendency. 
In order to verify the absence of a syntactic priming effect in the healthy controls, we ran an independent group of 54 participants with the exact same paradigm. The results replicated the earlier findings such that there was no priming effect compared to baseline. This lack of priming ability in the healthy older population could be due to cognitive interference between declarative and non-declarative memory systems, which increases as we get older (mean age of the control group is 62 years).
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Visual attention influences language processing. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Research into the interaction between attention and language has mainly focused on how language influences attention. But how does attention influence language? Considering we are constantly bombarded with attention-grabbing stimuli unrelated to the conversation we are conducting, this is certainly an interesting topic of investigation. In this study we aim to uncover how limiting attentional resources influences language behaviour. We focus on syntactic priming: a task which captures how participants adapt their syntactic choices to their partner. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax attentional resources. We thus measured participants' ability to process syntax while their attention is not-, slightly-, or overly-taxed. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences, but no effect when conducted with target sentences. Our results illustrate how, during the prime phase of the syntactic priming task, attention differentially affects syntactic processing, whereas during the target phase there is no effect of attention on language behaviour. We explain these results in terms of the implicit learning necessary to prime and how different levels of attention taxation can either impair or enhance the way language is encoded.
  • Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment reflects temporal predictions guiding speech comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Speech segmentation requires flexible mechanisms to remain robust to features such as speech rate and pronunciation. Recent hypotheses suggest that low-frequency neural oscillations entrain to ongoing syllabic and phrasal rates, and that neural entrainment provides a speech-rate invariant means to discretize linguistic tokens from the acoustic signal. How this mechanism functionally operates remains unclear. Here, we test the hypothesis that neural entrainment reflects temporal predictive mechanisms. It implies that neural entrainment is built on the dynamics of past speech information: the brain would internalize the rhythm of preceding speech to parse the ongoing acoustic signal at optimal time points. A direct prediction is that ongoing neural oscillatory activity should match the rate of preceding speech even if the stimulation changes, for instance when the speech rate suddenly increases or decreases. Crucially, the persistence of neural entrainment to past speech rate should modulate speech perception. We performed an MEG experiment in which native Dutch speakers listened to sentences with varying speech rates. The beginning of the sentence (carrier window) was either presented at a fast or a slow speech rate, while the last three words (target window) were displayed at an intermediate rate across trials. Participants had to report the perception of the last word of the sentence, which was ambiguous with regards to its vowel duration (short vowel /ɑ/ – long vowel /aː/ contrast). MEG data was analyzed in source space using beamformer methods. Consistent with previous behavioral reports, the perception of the ambiguous target word was influenced by the past speech rate; participants reported more /aː/ percepts after a fast speech rate, and more /ɑ/ after a slow speech rate. During the carrier window, neural oscillations efficiently tracked the dynamics of the speech envelope. 
During the target window, we observed oscillatory activity that corresponded in frequency to the preceding speech rate. Significant traces of neural entrainment to the past speech rate were observed in medial prefrontal areas. Right superior temporal cortex also showed persisting oscillatory activity which correlated with the observed perceptual biases: participants whose perception was more influenced by the manipulation of speech rate also showed stronger remaining neural oscillatory patterns. The results show that neural entrainment persists after rhythmic stimulation has ended. The findings further provide empirical support for oscillatory models of speech processing, suggesting that neural oscillations actively encode temporal predictions for speech comprehension.
  • Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment to speech rhythms reflects temporal predictions and influences word comprehension. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
  • Lockwood, G., Drijvers, L., Hagoort, P., & Dingemanse, M. (2016). In search of the kiki-bouba effect. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    The kiki-bouba effect, where people map round shapes onto round sounds (such as [b] and [o]) and spiky shapes onto “spiky” sounds (such as [i] and [k]), is the most famous example of sound symbolism. Many behavioural variations have been reported since Köhler’s (1929) original experiments. These studies examine orthography (Cuskley, Simner, & Kirby, 2015), literacy (Bremner et al., 2013), and developmental disorders (Drijvers, Zaadnoordijk, & Dingemanse, 2015; Occelli, Esposito, Venuti, Arduino, & Zampini, 2013). Some studies have suggested that the cross-modal associations between linguistic sound and physical form in the kiki-bouba effect are quasi-synaesthetic (Maurer, Pathman, & Mondloch, 2006; Ramachandran & Hubbard, 2001). However, there is a surprising lack of neuroimaging data in the literature explaining how these cross-modal associations occur (with the exceptions of Kovic et al. (2010) and Asano et al. (2015)). We presented 24 participants with randomly generated spiky or round figures and 16 synthesised, reduplicated CVCV (vowels: [i] and [o], consonants: [f], [v], [t], [d], [s], [z], [k], and [g]) nonwords based on Cuskley et al. (2015). This resulted in 16 nonwords across four conditions: full match, vowel match, consonant match, and full mismatch. Participants were asked to rate on a scale of 1 to 7 how well the nonword fit the shape it was presented with. EEG was recorded throughout, with epochs timelocked to the auditory onset of the nonword. There were significant behavioural effects of condition (p<0.0001). Bonferroni t-tests showed that participants rated full match nonwords more highly than full mismatch nonwords. However, this behavioural effect was not reflected in the ERP waveforms. One possible reason for the absence of an ERP effect is that it may jitter over a broad latency range. Oscillatory effects are currently being analysed, since these are less dependent on precise time-locking to the triggering events.
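Oscillatory analyses of the kind mentioned above are commonly implemented as single-trial time-frequency decompositions. The sketch below recovers 10 Hz power from a synthetic trial with a complex Morlet wavelet; the sampling rate, wavelet parameters, and burst timing are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: single-trial oscillatory power via complex Morlet wavelet
# convolution. Data are synthetic; all parameters are illustrative.
import numpy as np

fs = 500                                  # sampling rate (Hz), assumed
t = np.arange(-0.2, 1.0, 1 / fs)
rng = np.random.default_rng(2)

# One synthetic "EEG" trial: a 10 Hz burst from 0.2-0.6 s plus noise
trial = rng.normal(scale=0.5, size=t.size)
burst = (t > 0.2) & (t < 0.6)
trial[burst] += np.sin(2 * np.pi * 10 * t[burst])

def morlet_power(signal, freq, fs, n_cycles=7):
    """Power over time at `freq` via complex Morlet convolution."""
    sigma = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma**2))
    wavelet /= np.sum(np.abs(wavelet))    # unit-gain normalization
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

power = morlet_power(trial, freq=10, fs=fs)
print("burst vs. baseline 10 Hz power:",
      power[burst].mean(), power[t < 0].mean())
```

Because power is computed per trial before averaging, an effect that jitters in latency across trials can survive, whereas it would cancel in the time-locked ERP average.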
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). Synthesized size-sound sound symbolism. Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-10 - 2016-08-13.

    Abstract

    Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition.
  • Schoot, L., Stolk, A., Hagoort, P., Garrod, S., Segaert, K., & Menenti, L. (2016). Finding your way in the zoo: How situation model alignment affects interpersonal neural coupling. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    INTRODUCTION: We investigated how speaker-listener alignment at the level of the situation model is reflected in inter-subject correlations in temporal and spatial patterns of brain activity, also known as between-brain neural coupling (Stephens et al., 2010). We manipulated the complexity of the situation models that needed to be communicated (simple vs complex situation model) to investigate whether this affects neural coupling between speaker and listener. Furthermore, we investigated whether the degree to which alignment was successful was positively related to the degree of between-brain coupling. METHOD: We measured neural coupling (using fMRI) between speakers describing abstract zoo maps, and listeners interpreting those descriptions. Each speaker described both a ‘simple’ map (a 6x6 grid including five animal locations) and a ‘complex’ map (an 8x8 grid including seven animal locations) from memory, with the order of map description randomized across speakers. Audio-recordings of the speakers’ utterances were then replayed to the listeners, who had to reconstruct the zoo maps on the basis of their speakers’ descriptions. On the group level, we used a GLM approach to model between-brain neural coupling as a function of condition (simple vs complex map). Communicative success, i.e. map reproduction accuracy, was added as a covariate. RESULTS: Whole brain analyses revealed a positive relationship between communicative success and the strength of speaker-listener neural coupling in the left inferior parietal cortex. That is, the more successful listeners were in reconstructing the map based on what their partner described, the stronger the correlation between that speaker and listener's BOLD signals in that area. Furthermore, within the left inferior parietal cortex, pairs in the complex situation model condition showed stronger between-brain neural coupling than pairs in the simple situation model condition.
DISCUSSION: This is the first two-brain study to explore the effects of the complexity of the communicated situation model and the degree of communicative success on (language-driven) between-brain neural coupling. Interestingly, our effects were located in the inferior parietal cortex, previously associated with visuospatial imagery. This process likely plays a role in our task, in which the communicated situation models had a strong visuospatial component. Given that there was more coupling the more successfully situation models were aligned (i.e., higher map reproduction accuracy), it was surprising that we found stronger coupling in the complex than in the simple situation model condition. We plan ROI analyses in primary auditory, core language, and discourse processing regions. The present findings open the way for exploring the interaction between situation models and linguistic computations during communication.
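At its core, between-brain neural coupling in the spirit of Stephens et al. (2010) correlates a speaker's regional BOLD time series with the listener's, typically allowing for a listener lag. Below is a toy sketch on synthetic time series; the lag, noise levels, and scan count are invented for illustration and are not the study's model.

```python
# Toy sketch of speaker-listener "neural coupling": correlate a
# speaker's BOLD time series with a lagged copy of the listener's.
# All quantities (lag, noise, scan count) are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_scans = 200
shared = rng.normal(size=n_scans)                         # dialogue-driven signal
speaker = shared + rng.normal(size=n_scans)               # speaker BOLD + noise
listener = np.roll(shared, 3) + rng.normal(size=n_scans)  # listener lags by 3 TRs

# Coupling as the speaker-listener correlation at each candidate lag
coupling = [np.corrcoef(speaker[:n_scans - lag], listener[lag:])[0, 1]
            for lag in range(6)]
best_lag = int(np.argmax(coupling))
print(f"peak coupling r={coupling[best_lag]:.2f} at lag {best_lag}")
```

In a real analysis the per-pair coupling values would then enter a group-level GLM with condition and communicative success as regressors, as described in the abstract.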
  • Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Maybe syntactic alignment is not affected by social goals? Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Although it has been suggested that linguistic alignment can be influenced by speakers' relationship with their listener, previous studies provide inconsistent results. We tested whether speakers' desire to be liked affects syntactic alignment, and simultaneously assessed whether alignment affects perceived likeability. To this end, primed participants (PPs) were primed by another naive participant (the Evaluator). PP and Evaluator took turns describing photographs with active/passive sentences. Unknown to the PP, we controlled the Evaluator's syntax by having them read out sentences. PPs' desire to be liked was manipulated by assigning pairs to a Control (secret evaluation by the Evaluator), Evaluation (PPs were aware of evaluation), or Directed Evaluation (PPs knew about the evaluation and were instructed to make a positive impression) condition. PPs showed significant syntactic alignment (more passives produced after passive primes). However, there was no interaction with condition: PPs did not align more in the (Directed) Evaluation than in the Control condition. Our results thus do not support the conclusion that speakers' desire to be liked affects syntactic alignment. Furthermore, there was no reliable relationship between syntactic alignment and how likeable PPs appeared to their Evaluator: there was a negative effect in the Control and Evaluation conditions, but no relationship in the Directed Evaluation condition.
  • Sharoh, D., van Mourik, T., Bains, L. J., Segaert, K., Weber, K., Hagoort, P., & Norris, D. G. (2016). Investigation of depth-dependent BOLD during language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Neocortex is known to be histologically organized with respect to depth, and neuronal connections across cortical layers form part of the brain's functional organization [1]. Efferent (outgoing) and afferent (incoming) inter-regional connections are found to originate and terminate at different depths, and this structure relates to the internal/external origin of neuronal activity. Specifically, efferent inter-regional connections are associated with internally directed, top-down activity; afferent inter-regional connections are associated with bottom-up activity originating from external stimulation. The contribution of top-down and bottom-up neuronal activity to the BOLD signal can perhaps be inferred from depth-related fluctuations in BOLD. By dissociating top-down from bottom-up effects in fMRI, investigators could observe the relative contribution of internally and externally generated activity to the BOLD signal, and potentially test hypotheses regarding the directionality of BOLD connectivity. Previous investigation of depth-dependent BOLD has focused on human visual cortex [2]. In the present work, we have designed an experiment to serve as a proof of principle that (1) depth-dependent BOLD can be measured in higher cortical areas during a language processing task, and (2) the relative contributions of discrete depths to the total BOLD signal vary as a function of experimental condition. Data were collected on the Siemens 7T scanner at the Hahn Institute in Essen, Germany. Submillimeter (0.8 mm³) T1-weighted data were acquired using MP2RAGE, along with near whole-brain, submillimeter (0.9 × 0.9 × 0.943 mm, 112 slices) 3D-EPI task data. The field of view fully covered bilateral temporal and fusiform regions, but excluded superior brain areas on the order of several centimeters.
Participants were presented with an event-related paradigm involving the presentation of words, pseudowords and nonwords in visual and auditory modalities. Only the visual modality is discussed here. Cortical segmentation was performed using FreeSurfer's surface pipeline. We parcellated the gray matter volume into discrete depths, and the analysis of depth-dependent BOLD was performed with the Laminar Analysis Toolbox (van Mourik). Further analysis was performed using FreeSurfer, AFNI and in-house MATLAB code. Regions included in the depth-dependent analysis were determined by first-level analysis. We have presently collected data from 10 participants; 4 were excluded due to equipment malfunction. In the first-level analysis (volume registration, smoothing, GLM, and significance testing), we observe fusiform activation for Realword>Nonword and Pseudoword>Nonword contrasts. These contrasts additionally show activation along the middle temporal gyrus. The depth-dependent analysis was performed on fusiform clusters generated during the first-level analysis. These clusters appeared to show depth-dependent signal differences as a function of experimental condition. We suspect these differences may be related to layer-specific activation and reflect the relative contribution of top-down and bottom-up activity in the observed signal. These are preliminary results, part of an ongoing effort to establish novel, depth-dependent analysis techniques in higher cortical areas and within the language domain. Future analysis will investigate the nature of the depth-dependent differences and the connectivity profiles of depth-dependent variation among distal cortical regions. [1] Douglas, R. J., & Martin, K. A. C. (2004). Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27, 419-551. [2] Kok, P., et al. (2016). Selective activation of the deep layers of the human primary visual cortex by top-down feedback. Current Biology, 26, 371-376.
  • Tan, Y., Acheson, D. J., & Hagoort, P. (2016). Moving beyond single words: Dissociating levels of linguistic representation in short-term memory (STM). Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    This study assessed the role of semantic, phonological, and grammatical levels of representation in short-term list recall through a 2 (meaningfulness) × 2 (phonological similarity) × 2 (grammaticality) manipulation. Dutch subjects (Experiments 1-2), English subjects (Experiments 3-4), and seven aphasic patients (Experiment 5) were required to recall lists consisting of adjective-noun word pairs. Within each list, meaningfulness was manipulated by pairing adjectives and nouns in a meaningful or non-meaningful way; phonological similarity was manipulated through the degree of phonological overlap between words; grammaticality was manipulated through the order of the adjective and noun within each word pair in English (e.g., “salty meat” vs. “meat salty”) and through morphological agreement in Dutch. Overall, subjects showed better recall for words in the meaningful, phonologically dissimilar, and grammatical conditions. Moreover, by relating these main effects to subjects' phonological and semantic STM capacity, we found that subjects with better phonological STM were less affected by the meaningfulness manipulation, while subjects with better semantic STM were less affected by the phonological manipulations. These results demonstrate that there are multiple routes to grouping information in STM via the combinatorial constraints afforded by language, and that subjects might benefit from additional cues when memory load is high at certain level(s).
  • Udden, J., Hulten, A., Schoffelen, J.-M., Lam, N., Kempen, G., Petersson, K. M., & Hagoort, P. (2016). Dynamics of supramodal unification processes during sentence comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    It is generally assumed that structure building processes in the spoken and written modalities are subserved by modality-independent lexical, morphological, grammatical, and conceptual processes. We present a large-scale neuroimaging study (N=204) on whether the unification of sentence structure is supramodal in this sense, testing if observations replicate across written and spoken sentence materials. The activity in the unification network should increase when it is presented with a challenging sentence structure, irrespective of the input modality. We build on the well-established findings that multiple non-local dependencies, overlapping in time, are challenging and that language users disprefer left- over right-branching sentence structures in written and spoken language, at least in the context of mainly right-branching languages such as English and Dutch. We thus focused our study with Dutch participants on a left-branching processing complexity measure. Supramodal effects of left-branching complexity were observed in a left-lateralized perisylvian network. The left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching processing complexity. The left anterior middle temporal gyrus (LaMTG) and left inferior parietal lobe (LIPL) were also significant, although less specifically. The LaMTG was increasingly active also for sentences with increasing right-branching processing complexity. A direct comparison between left- and right-branching processing complexity yielded activity in an LIFG ROI for left > right-branching complexity, while the right > left contrast showed no activation. Using a linear contrast testing for increases in the left-branching complexity effect over the sentence, we found significant activity in LIFG and LpMTG. 
In other words, the activity in these regions increased from sentence onset to end, in parallel with the increase of the left-branching complexity measure. No similar increase was observed in LIPL. Thus, the observed functional segregation of LaMTG and LIPL vs. LIFG and LpMTG during sentence processing is consistent with our observation of differential activation changes in sensitivity to left- vs. right-branching structure. While LIFG, LpMTG, LaMTG and LIPL all contribute to the supramodal unification processes, the results suggest that these regions differ in their respective contributions to the subprocesses of unification. Our results speak to the high processing costs of (1) simultaneous unification and (2) maintenance of constituents that are not yet attached to the already unified part of the sentence. Sentences with high left- (compared to right-) branching complexity impose an added load on unification. We show that this added load leads to an increased BOLD response in left perisylvian regions. The results are relevant for understanding the neural underpinnings of the processing difficulty linked to multiple, overlapping non-local dependencies. In conclusion, we used the left- and right-branching complexity measures to index this processing difficulty and showed that the unification network operates with similar spatiotemporal dynamics over the course of the sentence, during unification of both written and spoken sentences.
  • Van den Broek, D., Uhlmann, M., Fitz, H., Hagoort, P., & Petersson, K. M. (2016). Spiking neural networks for semantic processing. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Weber, K., Meyer, A. S., & Hagoort, P. (2016). The acquisition of verb-argument and verb-noun category biases in a novel word learning task. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    We show that language users readily learn the probabilities of novel lexical cues to syntactic information (verbs biasing towards a prepositional object dative vs. double-object dative and words biasing towards a verb vs. noun reading) and use these biases in a subsequent production task. In a one-hour exposure phase participants read 12 novel lexical items, embedded in 30 sentence contexts each, in their native language. The items were either strongly (100%) biased towards one grammatical frame or syntactic category assignment or unbiased (50%). The next day participants produced sentences with the newly learned lexical items. They were given the sentence beginning up to the novel lexical item. Their output showed that they were highly sensitive to the biases introduced in the exposure phase.
    Given this rapid learning and use of novel lexical cues, this paradigm opens up new avenues to test sentence processing theories. Thus, with close control on the biases participants are acquiring, competition between different frames or category assignments can be investigated using reaction times or neuroimaging methods.
    Generally, these results show that language users adapt to the statistics of the linguistic input, even to subtle lexically-driven cues to syntactic information.
  • Acheson, D. J., Veenstra, A., Meyer, A. S., & Hagoort, P. (2014). EEG pattern classification of semantic and syntactic Influences on subject-verb agreement in production. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.

    Abstract

    Subject-verb agreement is one of the most common grammatical encoding operations in language production. In many languages, morphological inflection on verbs codes for the number of the head noun of a subject phrase (e.g., The key to the cabinets is rusty). Despite the relative ease with which subject-verb agreement is accomplished, people sometimes make agreement errors (e.g., The key to the cabinets are rusty). Such errors offer a window into the early stages of production planning. Agreement errors are influenced by both syntactic and semantic factors, and are more likely to occur when a sentence contains either conceptual or syntactic number mismatches. Little is known about the timecourse of these influences, however, and some controversy exists as to whether they are independent. The current study was designed to address these two issues using EEG. Semantic and syntactic factors influencing number mismatch were factorially manipulated in a forced-choice sentence completion paradigm. To avoid EEG artifact associated with speaking, participants (N=20) were presented with a noun phrase and pressed a button to indicate which version of the verb ‘to be’ (is/are) should continue the sentence. Semantic number was manipulated using preambles that were semantically integrated or unintegrated. Semantic integration refers to the semantic relationship between nouns in a noun phrase, with integrated items promoting conceptual singularity. The syntactic manipulation was the number (singular/plural) of the local noun preceding the decision. This led to preambles such as “The pizza with the yummy topping(s)...” (integrated) vs. “The pizza with the tasty beverage(s)...” (unintegrated). Behavioral results showed effects of both Local Noun Number and Semantic Integration, with more errors and longer reaction times occurring in the mismatching conditions (i.e., plural local nouns; unintegrated subject phrases). Classic ERP analyses locked to the local noun (0-700 ms) and to the time preceding the response (-600 to 0 ms) showed no systematic differences between conditions. Despite this result, we assessed whether differences might emerge using multivariate pattern analysis (MVPA). Using the same epochs as above, support-vector machines with a radial basis function kernel were trained at the single-trial level to classify the difference between Local Noun Number and Semantic Integration conditions across time and channels. Results revealed that both conditions could be reliably classified at the single-subject level, and that classification accuracy was strongest in the epoch preceding the response. Classification accuracy was at chance when a classifier trained to dissociate Local Noun Number was used to predict Semantic Integration (and vice versa), providing some evidence of the independence of the two effects. Significant inter-subject variability was present in the channels and time points that were critical for classification, but earlier time points were more often important for classifying Local Noun Number than Semantic Integration. One result of this variability is that classification performed across subjects was at chance, which may explain the failure to find standard ERP effects. This study thus provides an important first test of semantic and syntactic influences on subject-verb agreement with EEG, and demonstrates that where classic ERP analyses fail, MVPA can reliably distinguish differences at the neurophysiological level.
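An MVPA approach of the kind described above can be sketched with scikit-learn, using synthetic data in place of single-trial EEG. The trial counts, channel/timepoint dimensions, and SVM parameters below are illustrative assumptions; the abstract specifies only an RBF-kernel SVM trained on single trials across time and channels.

```python
# Illustrative MVPA sketch: classify single-trial "EEG" epochs
# (trials x channels x timepoints) with an RBF-kernel SVM.
# Data are synthetic; all parameters are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 32, 50

# Two conditions; condition 1 gets a small additive "effect"
# in the second half of the epoch
X = rng.normal(size=(n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 25:] += 0.4

# Flatten channels x time into one feature vector per trial
X_flat = X.reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy reliably above chance (0.5) on such flattened channel × time features is the kind of single-subject evidence the abstract reports.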
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Beat gestures modulate the processing of focused and non-focused words in context. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Information in language is organized according to a principle called information structure: new and important information (focus) is highlighted and distinguished from less important information (non-focus). Most studies so far have been concerned with how focused information is emphasized linguistically and suggest that listeners expect focus to be accented and process it more deeply than non-focus (Wang et al., 2011). Little is known about how listeners deal with non-verbal cues like beat gestures, which also emphasize the words they accompany, similarly to pitch accent. ERP studies suggest that beat gestures facilitate the processing of phonological, syntactic, and semantic aspects of speech (Biau, & Soto-Faraco, 2013; Holle et al., 2012; Wang & Chu, 2013). It is unclear whether listeners expect beat gestures to be aligned with the information structure of the message. The present ERP study addresses this question by testing whether beat gestures modulate the processing of accented-focused vs. unaccented-non-focused words in context in a similar way. Participants watched movies with short dialogues and performed a comprehension task. In each dialogue, the answer “He bought the books via amazon” contained a target word (“books”) which was combined with a beat gesture, a control hand movement (e.g., a self-touching movement) or no gesture. Based on the preceding context, the target word was either in focus and accented, when preceded by a question like “Did the student buy the books or the magazines via Amazon?”, or in non-focus and unaccented, when preceded by a question like “Did the student buy the books via Amazon or via Marktplaats?”. The gestures started 500 ms prior to the target word. All gesture parameters (hand shape, naturalness, emphasis, duration, and gesture-speech alignment) were determined in behavioural tests. ERPs were time-locked to gesture onset to examine gesture effects, and to target word onset for pitch accent effects.
We applied a cluster-based random permutation analysis to test for main effects and gesture-accent interactions in both time-locking procedures. We found that accented words elicited a positive main effect between 300-600 ms post target onset. Words accompanied by a beat gesture and a control movement elicited sustained positivities between 200-1300 ms post gesture onset. These independent effects of pitch accent and beat gesture are in line with previous findings (Dimitrova et al., 2012; Wang & Chu, 2013). We also found an interaction between control gesture and pitch accent (1200-1300 ms post gesture onset), showing that accented words accompanied by a control movement elicited a negativity relative to unaccented words. The present data show that beat gestures do not differentially modulate the processing of accented-focused vs. unaccented-non-focused words. Beat gestures engage a positive and long-lasting neural signature, which appears independent of the information structure of the message. Our study suggests that non-verbal cues like beat gestures play a unique role in emphasizing information in speech.
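A cluster-based random permutation analysis of the kind used above (in the style of Maris & Oostenveld, 2007) can be sketched on synthetic data. The cluster-forming threshold, the max-sum cluster statistic, and the sign-flipping scheme below are illustrative choices, not necessarily the authors' exact settings.

```python
# Minimal sketch of a cluster-based random permutation test over
# timepoints, on synthetic two-condition data (within-subject design).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_times = 20, 100
cond_a = rng.normal(size=(n_subj, n_times))
cond_b = rng.normal(size=(n_subj, n_times))
cond_b[:, 40:60] += 1.0                   # injected effect at 40-60

def max_cluster_mass(diff, thresh=2.0):
    """Largest summed |t| over contiguous supra-threshold timepoints."""
    t = stats.ttest_1samp(diff, 0.0, axis=0).statistic
    best, mass = 0.0, 0.0
    for supra, tv in zip(np.abs(t) > thresh, np.abs(t)):
        mass = mass + tv if supra else 0.0
        best = max(best, mass)
    return best

diff = cond_a - cond_b
observed = max_cluster_mass(diff)

# Permutation null: randomly flip the sign of each subject's difference
null = np.empty(1000)
for i in range(1000):
    signs = rng.choice([-1, 1], size=(n_subj, 1))
    null[i] = max_cluster_mass(diff * signs)

p = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"cluster p = {p:.3f}")
```

Because the null distribution is built from the maximum cluster mass per permutation, the test controls for multiple comparisons across timepoints without requiring precise per-timepoint corrections.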
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Independent effects of beat gesture and pitch accent on processing words in context. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
  • Dimitrova, D. V., Snijders, T. M., & Hagoort, P. (2014). Neurobiological attention mechanisms of syntactic and prosodic focusing in spoken language. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.

    Abstract

    In spoken utterances important or new information is often linguistically marked, for instance by prosody or syntax. Such highlighting prevents listeners from skipping over relevant information. Linguistic cues like pitch accents lead to a more elaborate processing of important information (Wang et al., 2011). In a recent fMRI study, Kristensen et al. (2013) have shown that the neurobiological signature of pitch accents is linked to the domain-general attention network. This network includes the superior and inferior parietal cortex. It is an open question whether non-prosodic markers of focus (i.e. the important/new information) function similarly on the neurobiological level, that is, by recruiting the domain-general attention network. This study addressed this question by testing a syntactic marker of focus. The present fMRI study investigates the processing of it-clefts, which highlight important information syntactically, and compares it to the processing of pitch accents, which highlight information prosodically. We further test whether both linguistic focusing devices recruit domain-general attention mechanisms. In the language task, participants listened to short stories like “In the beginning of February the final exam period was approaching. The student did not read the lecture notes”. In the last sentence of each story, the new information was focused either by a pitch accent as in “He borrowed the BOOK from the library” or by an it-cleft like “It was the book that he borrowed from the library”. Pitch accents were pronounced without exaggerated acoustic emphasis. Two control conditions were included: (i) sentences with fronted focus like “The book he borrowed from the library”, to account for word order differences between sentences with clefts and accents, and (ii) sentences without prosodic emphasis like “He borrowed the book from the library”. In the attentional localizer task (adopted from Kristensen et al., 2013), participants listened to tones in a dichotic listening paradigm. A cue tone was presented in one ear and participants responded to a target tone presented either in the same or the other ear. In line with Kristensen et al. (2013), we found that in the localizer task cue tones activated the right inferior parietal cortex and the precuneus, and we found additional activations in the right superior temporal gyrus. In the language task, sentences with it-clefts elicited larger activations in the left and right superior temporal gyrus as compared to control sentences with fronted focus. For the contrast between sentences with vs. without pitch accent we observed activation in the inferior parietal lobe; this activation did, however, not survive multiple comparisons correction. In sum, our findings show that syntactic focusing constructions like it-clefts recruit the superior temporal gyri, similarly to cue tones in the localizer task. Highlighting focus by pitch accent activated the parietal cortex in areas overlapping with those reported by Kristensen et al. and with those we found for cue tones in the localizer task. Our study provides novel evidence that prosodic and syntactic focusing devices likely have distinct neurobiological signatures in spoken language comprehension.
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2014). A spiking recurrent neural network for semantic processing. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Sentence processing requires the ability to establish thematic relations between constituents. Here we investigate the computational basis of this ability in a neurobiologically motivated comprehension model. The model has a tripartite architecture where input representations are supplied by the mental lexicon to a network that performs incremental thematic role assignment. Roles are combined into a representation of sentence-level meaning by a downstream system (semantic unification). Recurrent, sparsely connected, spiking networks were used which project a time-varying input signal (word sequences) into a high-dimensional, spatio-temporal pattern of activations. Local, adaptive linear read-out units were then calibrated to map the internal dynamics to desired output (thematic role sequences) [1]. Read-outs were adjusted on network dynamics driven by input sequences drawn from argument-structure templates with small variation in function words and larger variation in content words. Models were trained on sequences of 10K words for 200ms per word at a 1ms resolution, and tested on novel items generated from the language. We found that a static, random recurrent spiking network outperformed models that used only local word information without context. To improve performance, we explored various ways of increasing the model’s processing memory (e.g., network size, time constants, sparseness, input strength, etc.) and employed spiking neurons with more dynamic variables (leaky integrate-and-fire versus Izhikevich-neurons). The largest gain was observed when the model’s input history was extended to include previous words and/or roles. Model behavior was also compared for localist and distributed encodings of word sequences. The latter were obtained by compressing lexical co-occurrence statistics into continuous-valued vectors [2]. 
We found that performance for localist input was superior even though distributed representations contained extra information about word context and semantic similarity. Finally, we compared models that received input enriched with combinations of semantic features, word-category, and verb sub-categorization labels. Counter-intuitively, we found that adding this information to the model’s lexical input did not further improve performance. Consistent with previous results, however, performance improved for increased variability in content words [3]. This indicates that the approach to comprehension taken here might scale to more diverse and naturalistic language input. Overall, the results suggest that active processing memory beyond pure state-dependent effects is important for sentence interpretation, and that memory in neurobiological systems might be actively computing [4]. Future work therefore needs to address how the structure of word representations interacts with enhanced processing memory in adaptive spiking networks. [1] Maass W., Natschläger T., & Markram H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14: 2531-2560. [2] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. Proceedings of the International Conference on Learning Representations, Scottsdale/AZ. [3] Fitz, H. (2011). A liquid-state model of variability effects in learning nonadjacent dependencies. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Austin/TX. [4] Petersson, K.M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets. Philosophical Transactions of the Royal Society B, 367: 1971-1983.
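The read-out calibration described in this abstract follows the reservoir-computing idea of [1]: the recurrent network stays fixed and random, and only linear read-outs are trained on its internal states. A minimal rate-based sketch of that idea (numpy; not spiking, and all sizes and variable names are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D_in, D_out = 200, 10, 4  # reservoir size, input dim, number of role labels

# Fixed, sparse, random recurrent weights, rescaled for stable dynamics
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0, 0.5, (N, D_in))

def run_reservoir(inputs):
    """Project an input sequence into the reservoir's high-dimensional state space."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)  # recurrent update driven by the input
        states.append(x.copy())
    return np.array(states)

# Calibrate only the linear read-out (ridge regression on collected states)
T = 500
U = rng.normal(0, 1, (T, D_in))                   # stand-in for encoded word sequences
Y = rng.integers(0, 2, (T, D_out)).astype(float)  # stand-in for thematic-role targets
X = run_reservoir(U)
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ Y)
pred = X @ W_out  # read-out mapping internal dynamics to role activations
```

Training a read-out on frozen dynamics is what keeps the approach cheap: only `W_out` is learned, while the recurrent projection provides the processing memory the abstract discusses.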
  • Folia, V., Hagoort, P., & Petersson, K. M. (2014). An FMRI study of the interaction between sentence-level syntax and semantics during language comprehension. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Hagoort [1] suggested that the posterior temporal cortex is involved in the retrieval of lexical frames that form building blocks for syntactic unification, supported by the inferior frontal gyrus (IFG). FMRI results support the role of the IFG in the unification operations that are performed at the structural/syntactic [2] and conceptual/semantic levels [3]. While these studies tackle the unification operations within linguistic components, in the present event-related FMRI study we investigated the interplay between sentence-level semantics and syntax by adapting an EEG comprehension paradigm [4]. The ERP results showed typical P600 and N400 effects, while their combined effect revealed an interaction expressed in the N400 component ([CB-SE] - [SY-CR] > 0). Although the N400 component was similar in the correct and syntactic conditions (SY ≈ CR), the combined effect was significantly larger than the effect of semantic anomaly alone. In contrast, the size of the P600 effect was not affected by an additional semantic violation, suggesting an asymmetry between semantic and syntactic processing. In the current FMRI study we characterize this asymmetry by means of a 2×2 experimental design that included the conditions: correct (CR), syntactic (SY), semantic (SE), and combined (CB) anomalies. Standard SPM procedures were used for analysis and only clusters significant at P < .05 family-wise error corrected are reported. The main effect of semantic anomaly ([CB+SE] > [SY+CR]) yielded activation in the anterior IFG (BA 45/47). The opposite contrast revealed the theory-of-mind and default-mode networks. The main effect of syntactically correct sentences ([SE+CR] > [CB+SY]) showed significant activation in the IFG (BA 44/45), including the mid-anterior insula extending into the superior temporal poles (BA 22/38). 
In addition, significant effects were observed in medial prefrontal/anterior cingulate cortex, posterior middle and superior temporal regions (BA 21/22), and the basal ganglia. The reverse contrast yielded activations in the MFG (BA 9/46), the inferior parietal region (BA 39/40), the precuneus, and the posterior cingulate region. The only region that showed a significant interaction ([CB-SE] - [SY-CR] > 0) was the left temporo-parietal region (BA 22/39/40). In summary, the results show that the IFG is involved in unification during comprehension. The effect of semantic anomaly and its implied unification load engages the anterior IFG, while the effect of syntactic anomaly and its implied unification failure engages the MFG. Finally, the results suggest that the syntax of gender agreement interacts with sentence-level semantics in the left temporo-parietal region. [1] Hagoort, P. (2005). On Broca, brain, and binding: A new framework. TICS, 9, 416-423. [2] Snijders, T. M., Vosse, T., Kempen, G., Van Berkum, J. J. A., Petersson, K. M., Hagoort, P. (2009). Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex, 19, 1493-1503. doi:10.1093/cercor/bhn187. [3] Hagoort, P., Hald, L., Bastiaansen, M., Petersson, K.M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304, 438-441. [4] Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15, 883-899.
  • Fonteijn, H. M., Acheson, D. J., Petersson, K. M., Segaert, K., Snijders, T. M., Udden, J., Willems, R. M., & Hagoort, P. (2014). Overlap and segregation in activation for syntax and semantics: a meta-analysis of 13 fMRI studies. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2014). Assessing the link between speech perception and production through individual differences. Poster presented at the 6th Annual Meeting of the Society for the Neurobiology of Language, Amsterdam.

    Abstract

    This study aims to test a prediction of recent
    theoretical frameworks in speech motor control: if
    speech production targets are specified in auditory
    terms, people with better auditory acuity should
    have more precise speech targets.
    To investigate this, we had participants perform
    speech perception and production tasks in a
    counterbalanced order. To assess speech perception
    acuity, we used an adaptive speech discrimination
    task. To assess variability in speech production,
    participants performed a pseudo-word reading task;
    formant values were measured for each recording.
    We predicted that speech production variability would
    correlate inversely with discrimination performance.
    The results suggest that people do vary in their
    production and perceptual abilities, and that better
    discriminators have more distinctive vowel
    production targets, confirming our prediction. This
    study highlights the importance of individual
    differences in the study of speech motor control, and
    sheds light on speech production-perception
    interaction.
  • Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation. Talk presented at the Donders Discussions 2014. Nijmegen, Netherlands. 2014-10-30 - 2014-10-31.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction
    between the perceptual and the motor system. Recently, prediction via forward models has
    been at the forefront of speech neuroscience research. For example, neuroimaging evidence
    has demonstrated that activation of the auditory cortex is suppressed in response to self-produced
    speech relative to listening without speaking. This finding has been explained via a forward model
    that predicts the auditory consequences of our own speech actions. An accurate prediction
    cancels out (part of) the auditory cortical activation.
    The present study was designed to test two critical predictions from these frameworks: First,
    whether the cortical auditory response during speech production varies as a function of the
    acoustic distance between feedback and prediction, and second, whether this in turn is predictive
    of the amount of adaptation in people’s speech production. MEG was recorded while
    subjects performed an online speech imitation task. Each subject heard and imitated Dutch
    vowels, varying in their distance from the original vowel in both F1 and F2.
    The results did not show clear evidence that the amount of suppression scaled with the
    distance between participants’ speech and the speech target. However, we found that subjects’
    auditory response did correlate with imitation performance. This result supports the
    view that an enhanced auditory response may act as an error signal, driving subsequent
    speech adaptation. This suggests that individual differences in speaking-induced
    suppression (SIS) could act as a marker for subsequent adaptation.
  • Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation: An MEG investigation. Poster presented at the International Workshop on Language Production 2014, Geneva.
  • Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation: An MEG investigation. Poster presented at the International Workshop on Language Production, Université de Genève, Geneva, Switzerland.
  • Guadalupe, T., Zwiers, M., Wittfeld, K., Teumer, A., Vasquez, A. A., Hoogman, M., Hagoort, P., Fernandez, G., Grabe, H., Fisher, S. E., & Francks, C. (2014). Asymmetry within and around the planum temporale is sexually dimorphic and influenced by genes involved in steroid biology. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Hagoort, P., & Indefrey, P. (2014). A meta-analysis on syntactic vs. semantic unification. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Hagoort, P. (2014). De magie van het talige brein. Talk presented at the Paradiso lectures series "Science of Fiction - zin en onzin van de wetenschap in films". Amsterdam, The Netherlands. 2014-02-16.
  • Hagoort, P. (2014). From intonation to information in brain space [Keynote lecture]. Talk presented at The 6th international Conference on Tone and Intonation in Europe 2014. Utrecht. 2014-09-10.
  • Hagoort, P. (2014). Het politieke brein. Talk presented at a Nationaal Initiatief Hersenen en Cognitie (NIHC) publiekslezing. Den Haag, The Netherlands. 2014-03-11.
  • Hagoort, P. (2014). The neurobiology of language beyond single words. Talk presented at the meeting of the Experimental Psychology Society. London. 2014-01-10.
  • Hagoort, P. (2014). The neurobiology of language beyond single words. Talk presented at CNBC Colloquium. Pittsburgh (PA-USA). 2014-05-08 - 2014-05-08.

    Abstract

    The classical Wernicke-Lichtheim-Geschwind model of the neurobiology of language was based on an analysis of single word perception and production. However, language processing involves a lot more than production and comprehension of single words. In this talk I will focus on the neurobiological infrastructure for processing language beyond single words. The Memory, Unification and Control (MUC) model provides a neurobiological plausible account of the underlying neural architecture. I will focus on operations that unify the lexical building blocks into larger structures. MEG, fMRI, resting state connectivity data, and results from Psycho-Physiological Interactions will be discussed, suggesting a division of labour between temporal and inferior frontal cortex. These results indicate that Broca’s area and adjacent cortex play an important role in semantic and syntactic unification operations. I will discuss to what extent these operations are shared between language comprehension and production. I will also discuss fMRI results that indicate the insufficiency of the Mirror Neuron Hypothesis to explain language understanding. In short, I will sketch a picture of language processing from an embrained perspective.

  • Hagoort, P. (2014). The Neurobiology of language: Beyond the sentence given. Talk presented at the Leuven research Institute for Neuroscience & Disease (LIND). Leuven (Belgium). 2014-02-06.
  • Hagoort, P. (2014). Vijf kanttekeningen bij het liberalisme vanuit een cognitief-neurowetenschappelijk perspectief. Talk presented at the Prof. mr. B.M. Teldersstichting (Telders Foundation). Leusden. 2014-01-09.
  • Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Getting under your Skin: The role of perspective in narrative comprehension. Talk presented at Cognitive Futures in the Humanities, 2nd International Conference, 24-26 April 2014. University of Durham. 2014-04-24 - 2014-04-26.

    Abstract

    When we read literature, we often become immersed and dive into fictional worlds. The way we perceive those worlds is the result of the skillful arrangement of linguistic features by story writers in order to create certain mental representations in the reader. Narrative perspective, or focalization, is an important tool for story writers to manipulate readers' perception of a story. Despite the fact that narrative perspective is generally considered a fundamental element in narrative comprehension, its cognitive effects on story reading remain unclear. In previous research, various methodologies were employed to investigate the cognitive processes underlying narrative comprehension. However, studies used either self-report procedures or behavioral tests to investigate readers' reactions and refrained from combining methodologies. In the present study we combined skin conductance measurements and questionnaires while participants read short stories in 1st and 3rd person perspective. The results show that immersion, imagery and appreciation are higher when participants read stories in 1st person perspective. To our surprise, we found higher arousal for reading 3rd person perspective compared to 1st person perspective narratives. We find evidence that individual differences in arousal between the two conditions are related to how much readers empathize with the fictional characters. The combination of methodologies allows a more differentiated understanding of the underlying mechanisms of immersion. In my talk, I want to highlight how we can gain more from interdisciplinary research and combinations of various methodologies to investigate cognitive processes underlying narrative comprehension under natural conditions.
  • Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Narrative perspective influences immersion in fiction reading: Evidence from skin conductance response. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language, 2014, Amsterdam, Amsterdan, NL.
  • Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Personal pronouns influence arousal during story comprehension. Embodiment and the reading experience. Poster presented at the Embodied and Situated Language Processing Conference 2014, Rotterdam.
  • Hartung, F., Hagoort, P., & Willems, R. M. (2014). Perspective taking and mental simulation in narrative comprehension [Invited talk]. Talk presented at the Max Planck Institute for Empirical Aesthetics, Language and Literature Department. Frankfurt am Main. 2014-06-23.
  • Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). The embodied reader: The effect of narrative perspective on literature understanding and appreciation. Talk presented at 14th Conference of the International Society for the Empirical Study of Literature and Media. Turin, Italy. 2014-07-21 - 2014-07-25.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. Talk presented at the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Quebec City, Canada. 2014-07-24 - 2014-07-26.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2014). Virtual agents as a valid replacement for human partners in sentence processing research. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Hultén, A., Schoffelen, J.-M., Udden, J., Lam, N., & Hagoort, P. (2014). Effects of sentence progression in event-related and rhythmic neural activity measured with MEG. Talk presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014). Amsterdam. 2014-08-27 - 2014-08-29.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A., & Hagoort, P. (2014). Music and language syntax interact in Broca’s area: An fMRI study. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Lam, N., Schoffelen, J.-M., Hultén, A., & Hagoort, P. (2014). MEG-derived neural oscillatory activity differentiates sentence processing from word list processing in theta, beta, and gamma frequency bands across time and space. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Lam, N. H. L., Schoffelen, J.-M., Hulten, A., & Hagoort, P. (2014). MEG-derived neural oscillatory activity differentiates sentence processing from word list processing in theta, beta, and gamma frequency bands across time and space. Poster presented at BIOMAG 2014, Halifax, Canada.
  • Lam, N. H. L., Hulten, A., Udden, J., Schoffelen, J.-M., & Hagoort, P. (2013). Sentence processing reflected in oscillatory and event-related brain activity. Poster presented at the Fifth Annual Meeting of the Society for the Neurobiology of Language (SNL 2013), San Diego, CA, USA.
  • Levy, J., Hagoort, P., & Demonet, J.-F. (2014). A neuronal gamma oscillatory signature during morphological unification in the left occipito-temporal junction. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Morphology is the aspect of language concerned with
    the internal structure of words. In the past decades,
    a large body of masked priming (behavioral and
    neuroimaging) data has suggested that the visual
    word recognition system automatically decomposes
    any morphologically complex word into a stem and
    its constituent morphemes. Yet, it remains equivocal
    whether this morphemic decomposition relies primarily
    on orthography or on semantics. Here, we approached
    the issue straightforwardly by applying a task of
    morphological unification, that is, by assembling internal
    (morphemic) units into a whole word. Morphemic units
    were sequentially presented while participants were
    requested to judge whether their assemblage represented
    real- or pseudo-words. Trials representing real words
    were divided into words with a transparent (true) or a
    non-transparent (pseudo) morphological relationship.
    Morphological unification of truly suffixed words
    proceeded more easily (shorter RTs and
    higher accuracy). Additionally, oscillatory brain activity
    was monitored with magnetoencephalography and
    revealed that real, compared to pseudo, morphological unification enhanced narrow gamma-band oscillations
    (60-85 Hz, 300-450 ms) in the left posterior occipitotemporal
    junction, which is known as a cerebral hub for
    visual word processing. This neural signature could not
    be explained by mere automatic lexical processing (i.e.,
    stem perception); more likely it reflects a semantic
    access step during the morphological unification process.
    These findings suggest that retrieval of lexical-semantic
    associations enables true morphological unification,
    and further underscore the pivotal role of
    the left occipito-temporal junction in visual word-form
    processing.
  • Lockwood, G., Tuomainen, J., & Hagoort, P. (2014). Talking sense: Multisensory integration of Japanese ideophones is reflected in the P2. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language [SNL2014], Amsterdam, the Netherlands.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS6). San Diego, Cal. 2014-07-08 - 2014-07-11.
  • Petersson, K. M., Folia, S. S. V., Sousa, A.-C., & Hagoort, P. (2014). Implicit structured sequence learning: An EEG study of the structural mere-exposure effect. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2014). Emotional context modulates embodied metaphor comprehension. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Schoot, L., Hagoort, P., & Segaert, K. (2014). Bidirectional syntactic priming in conversation: I am primed by you if you are primed by me. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, Scotland.
  • Schoot, L., Hagoort, P., & Segaert, K. (2014). Bidirectional syntactic priming: How much your conversation partner is primed by you determines how much you are primed by her. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.

    Abstract

    In conversation, speakers mimic each other’s (linguistic)
    behavior. For example, speakers are likely to repeat
    each other’s sentence structures: a phenomenon
    known as syntactic priming. In a previous fMRI study
    (Schoot et al., 2014) we reported that the magnitude
    of priming effects is also mimicked between speakers.
    Here, we follow up on that result. Specifically, we test
    the hypothesis that in a communicative context, the
    priming magnitude of your interlocutor can predict your
    own priming magnitude because you have adapted
    your individual susceptibility to priming to the other
    speaker. 40 participants were divided into 20 pairs who
    performed the experiment together. They were asked
    to describe photographs to each other. Photographs
    depicted two persons performing a transitive action
    (e.g. a man hugging a woman). Participants were
    instructed to describe the photographs with an active
    or a passive sentence depending on the color-coding
    of the photograph (stop light paradigm, Menenti et al.,
    2011). Syntactic priming effects were measured in speech
    onset latencies: a priming effect is found when speakers
    are faster to produce sentences with the same structure
    as the preceding sentence (i.e. two consecutive actives
    or passives) than to produce sentences with a different
    structure (active follows passive or vice versa). Before
    participants performed the communicative task, we ran
    a non-communicative pretest for each participant, to
    measure their individual priming effect without influence
    of the partner’s priming effect. To test whether speakers
    influence each other’s syntactic priming magnitude in
    conversation, we ran an rANCOVA with the syntactic
    priming effect of each participant’s communicative
    partner as a covariate. Results showed that there was
    an interaction between this covariate and Syntactic
    Repetition (F(1,38) = 435.93, p < 0.001). The more your
    partner is primed by you, the more you are primed by
    your partner. In a second analysis, we found that the
    difference between paired speakers’ individual syntactic
    priming effects (as measured in the pretest) predicted
    how much speakers adapt their syntactic priming effects
    when they are communicating with their partner in the
    communicative experiment (ß = -0.467, p < 0.001). That
    means that if your partner’s individual susceptibility for
    syntactic priming is stronger than yours, you will increase
    your own priming magnitude in the communicative
    context. On the other hand, if your partner’s individual
    susceptibility for syntactic priming is less strong, you
    will decrease your priming effect. Furthermore, the
    strength of the increase or decrease is proportional to how
    different you are from your partner to begin with. We
    interpret the results as follows. Syntactic priming effects
    in conversation are said to result from speakers aligning
    their syntactic representations by mimicking sentence
    structure (Pickering & Garrod, 2004; Jaeger & Snider,
    2013). Here we show that on top of that, the magnitude
    of syntactic priming effects is also mimicked between
    interlocutors. Future research should focus on further
    investigation of the neural correlates of this process, for
    example with fMRI hyper-scanning. Indeed, our findings
    stress the importance of studying language processing in
    real, communicative contexts, which is now also possible
    in neuroimaging paradigms.
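    The dependent measure described above can be made concrete: a speaker's priming effect is the speech-onset latency on structure-switch trials minus that on structure-repetition trials, and the pair-level claim is that one partner's effect predicts the other's. A toy numpy sketch with simulated data (the numbers are illustrative, not the study's):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def priming_effect(onsets_repeat, onsets_switch):
        """Priming effect in ms: slower onsets on structure switches
        (active after passive or vice versa) than on repetitions."""
        return np.mean(onsets_switch) - np.mean(onsets_repeat)

    # Simulated pairs whose members share a pair-level susceptibility
    n_pairs = 20
    shared = rng.normal(60, 15, n_pairs)         # pair-level priming (ms)
    eff_a = shared + rng.normal(0, 5, n_pairs)   # partner A's effect
    eff_b = shared + rng.normal(0, 5, n_pairs)   # partner B's effect

    # "The more your partner is primed by you, the more you are primed":
    r = np.corrcoef(eff_a, eff_b)[0, 1]          # strongly positive here
    ```

    In the simulation the positive correlation arises because both partners' effects track a shared pair-level component, which is one way to picture the adaptation the abstract reports.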
  • Segaert, K., Mazaheri, A., Scheeringa, R., & Hagoort, P. (2014). Oscillatory dynamics of syntactic unification. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Segaert, K., & Hagoort, P. (2014). Syntactic priming: A lexical boost, cumulativity, an inverse preference effect and.. A positive preference effect. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, Scotland.
  • Simanova, I., Hagoort, P., Oostenveld, R., & van Gerven, M. (2014). Surface-based searchlight mapping of modality-independent responses to semantic categories using high-resolution fMRI. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Previous studies have shown the possibility to decode
    the semantic category of an object from the fMRI signal in
    different modalities of object presentation. Furthermore,
    by generalizing a classifier across different modalities
    (for instance, from pictures to written words), cortical
    structures that process semantic information in an
    amodal fashion have been identified. In this study we
    employ high-resolution fMRI in combination with
    surface-based searchlight mapping to further explore
    the architecture of modality-independent responses.
    Stimuli of 2 semantic categories (animals and tools)
    were presented in 2 modalities: photographs and
    written words. Stimuli were presented in 40-seconds
    blocks with 10-seconds intervals. Subjects (N=3) were
    instructed to judge whether each stimulus within a
    block was semantically consistent with the others. The
    experiment also included 8 free recall blocks, in which
    the name of a category appeared on the screen for 2 seconds,
    followed by 40 seconds of a blank screen. In these blocks
    subjects were instructed to covertly recall all entities
    from the probed category that they had seen during the
    experiment. Subjects were scanned with a 7 Tesla MRI
    scanner, using a 3D EPI sequence with an isotropic resolution
    of 1.5 mm. In each subject, reconstruction of the cortical
    surface was performed. After that, for each vertex on the
    surface, a set of adjacent voxels in the functional volume
    was assigned. Subsequently, a linear support vector
    machine classifier was used to decode object category in
    each surface-based patch. Generalization analysis across
    picture and written word presentation was performed,
    where the classifier was trained on the fMRI data from
    blocks of written words, and tested on the data from picture blocks, and vice versa. The second analysis was
    performed on the free recall blocks, where the classifier
    was trained on merged data from pictures and written
    words blocks, and tested on the free recall blocks.
    Further, we explored how the decoding accuracy in the
    inferior temporal cortex changes with the diameter of the
    searchlight patch. Since surface-based voxel grouping
    takes into account the cortical folding and ensures that
    voxels belonging to different gyri do not fall in the same
    searchlight group, it allows answering the question of
    at what spatial scale the modality-independent
    information is represented. The cross-modal analysis in
    all three subjects revealed a cluster of voxels in inferior
    temporal cortex (lateral fusiform and inferotemporal gyri)
    and posterior middle temporal gyrus. The topography
    of significant clusters also suggested involvement of
    the inferior frontal gyrus, lateral prefrontal cortex, and
    medial prefrontal cortex. Interestingly, these areas were
    the most evident in the free recall test, although the
    searchlight maps of the three subjects showed substantial
    individual differences in this analysis. Overall, the data
    yield a picture similar to previous research, highlighting
    the role of IT/pMTG and prefrontal cortex in cross-modal
    semantic representation. We further extended previous
    research by showing that the classification accuracy in
    these areas decreases as the searchlight patch size
    increases. These results indicate that the
    modality-independent categorical activations in the IT
    cortex are represented on the spatial scale of millimetres.
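    The cross-modal generalization scheme this abstract describes (train a
    classifier on patterns from one stimulus modality, test it on the other,
    and vice versa) can be sketched in miniature. The nearest-centroid
    classifier and the toy "voxel patterns" below are illustrative stand-ins
    for the linear SVM and the real fMRI data, not the authors' pipeline.

    ```python
    # Sketch of cross-modal decoding: fit on one modality, test on the other.
    # A nearest-centroid classifier stands in for the linear SVM; the toy
    # patterns and category labels are invented for illustration.

    def train_centroids(patterns, labels):
        """Average the training patterns per category label."""
        sums, counts = {}, {}
        for x, y in zip(patterns, labels):
            acc = sums.setdefault(y, [0.0] * len(x))
            for i, v in enumerate(x):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        return {y: [v / counts[y] for v in vec] for y, vec in sums.items()}

    def classify(centroids, x):
        """Assign x to the category with the nearest centroid."""
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(c, x))
        return min(centroids, key=lambda y: dist(centroids[y]))

    def cross_modal_accuracy(train_set, test_set):
        """Train on one modality's (pattern, label) pairs, test on the other's."""
        centroids = train_centroids(*zip(*train_set))
        correct = sum(classify(centroids, x) == y for x, y in test_set)
        return correct / len(test_set)

    # Toy patterns: two categories seen in two modalities.
    words = [([1.0, 0.1], "animal"), ([0.9, 0.0], "animal"),
             ([0.1, 1.0], "tool"), ([0.0, 0.9], "tool")]
    pictures = [([0.8, 0.2], "animal"), ([0.2, 0.8], "tool")]

    # Average of both generalization directions, as in the abstract.
    acc = (cross_modal_accuracy(words, pictures)
           + cross_modal_accuracy(pictures, words)) / 2
    ```

    Above-chance accuracy in this train-on-words/test-on-pictures scheme is
    what licenses the inference to a modality-independent representation.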
  • Stolk, A., Noordzij, M., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). How minds meet: Cerebral coherence between communicators marks the emergence of meaning. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Ten Velden, J., Acheson, D. J., & Hagoort, P. (2014). Does language production use response conflict monitoring?. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Although monitoring and subsequent control have
    received considerable attention in cognitive systems
    other than language, few studies have probed the neural
    mechanisms underlying monitoring and control in overt
    speech production. Recently, it has been hypothesized
    that conflict signals within the language production
    system might serve as cues to increase monitoring
    and control (Nozari, Dell & Schwartz, 2011; Cognitive
    Psychology). This hypothesis was linked directly to the
    conflict monitoring hypothesis in non-linguistic action
    control, which hypothesizes that one of the critical
    cues to self-monitoring is the co-activation of multiple
    response candidates (Yeung, Botvinick & Cohen, 2004;
    Psychological Review). A region of the medial prefrontal
    cortex (mPFC), the dorsal anterior cingulate cortex
    (dACC), as well as the basal ganglia have consistently
    been implicated in both errors of commission and high
    conflict. Hence these regions serve as an important
    testing ground for whether comparable monitoring
    mechanisms are at play in language production. The
    current study tests whether these regions are also
    implicated in response to speech errors and to high-conflict
    situations that precede the response. Thirty-two native Dutch
    subjects performed a tongue twister task and a factorial
    combination of the Simon and Flanker task. In the tongue
    twister task, participants overtly produced a string of
    4 nonwords 3 times. In tongue twister trials (TT), the
    onset phonemes followed a pattern of A-B-B-A, whereas
    rhymes followed an A-B-A-B pattern (e.g. wep ruust
    rep wuust). In non-tongue twister trials (nonTT), the
    nonwords contained minimal phonological overlap
    (e.g. jots brauk woelp zieg). These two conditions
    correspond to a high conflict and a low conflict condition
    respectively. In an arrow version of the Simon-
    Flanker task, subjects responded to the direction of a
    middle arrow while flanking arrows faced in the same
    (i.e., congruent; >>>>>) or different (i.e., incongruent;
    >><>>) directions. These stimuli were presented either
    on the right side or the left side of the screen, potentially
    creating a spatial incongruency with their response
    as well. Behavioral results demonstrated sensitivity
    to conflict in both tasks, as subjects generated more
    speech errors in tongue twister trials than non-tongue
    twister trials, and were slower on incongruent relative
    to congruent flanker trials. No effect of
    spatial incongruency was observed. Neuroimaging
    results showed that activation in the ACC significantly
    increased in response to the high conflict flanker trials.
    In addition, regions of interest analyses in the basal
    ganglia showed a significant difference between correct
    high and low conflict flanker trials in the left putamen
    and right caudate nucleus. For the tongue twister task,
    a large region in the mPFC - overlapping with the ACC
    region from the flanker task - was significantly more
    active in response to errors than correct trials. Significant
    differences were also found in the left and right caudate
    nuclei and left putamen. No differences were found
    between correct TT and nonTT trials. The study therefore
    provides evidence for overlap in monitoring between
    language production and non-linguistic action at the
    time of response (i.e. errors), but little evidence for
    pre-response conflict engaging the same system.
  • Ten Velden, J., Acheson, D. J., & Hagoort, P. (2014). Are there shared mechanisms of response conflict monitoring in speech production and choice reaction tasks?. Poster presented at the International Workshop on Language Production 2014, Geneva.
  • Udden, J., Hulten, A., Fonteijn, H. M., Petersson, K. M., & Hagoort, P. (2014). The middle temporal and inferior parietal cortex contributions to inferior frontal unification across complex sentences. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Vanlangendonck, F., Willems, R. M., & Hagoort, P. (2014). Taking the listener into account: Computing common ground requires mentalising. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    In order to communicate efficiently, speakers have to
    take into account which information they share with their
    addressee (common ground) and which information
    they do not share (privileged ground). Two views
    have emerged about how and when common ground
    influences language production. In one view, speakers
    take common ground into account early on during
    utterance planning (e.g., Brennan & Hanna, 2009).
    Alternatively, it has been proposed that speakers’ initial
    utterance plans are egocentric, but that they monitor
    their plans and revise them if needed (Horton & Keysar,
    1996). In an fMRI study, we investigated which neural
    mechanisms support speakers’ ability to take into account
    common ground, and at what stage during speech
    planning these mechanisms come into play. We tested
    22 pairs of native Dutch speakers (20 pairs retained in
    the analysis), who were assigned to the roles of speaker
    or listener for the course of the experiment. The speaker
    performed the experiment in the MRI scanner, while the
    listener sat behind a computer in the MRI control room.
    The speaker performed a communicative and a
    non-communicative task in the scanner. The communicative
    task was a referential communication game in which
    the speaker described objects in an array to the listener.
    The listener could hear the speaker’s descriptions over
    headphones and tried to select the intended object on
    her screen using a mouse. We manipulated common
    ground within the communicative task. In the privileged
    ground condition, the speaker saw additional competitor
    objects that were occluded from the listener’s point of
    view. In order to communicate efficiently, the speaker
    had to ignore the occluded competitor objects. In the
    control conditions, all relevant objects were in common
    ground. The non-communicative task was identical to
    the communicative task, except that the speaker was
    instructed to describe the objects without the listener
    listening. When comparing the BOLD response during
    speech planning in the communicative and the
    non-communicative tasks, we found activations in the right
    medial prefrontal cortex and bilateral insula, brain areas
    involved in mentalizing and empathy. These results
    confirm previous neuroimaging research that found that
    speaking in a communicative context as compared to a
    non-communicative context activates brain areas that
    are involved in mentalizing (Sassa et al., 2007; Willems
    et al., 2010). We also contrasted brain activity in the
    privileged ground and control conditions within the
    communicative task to tap into the neural mechanisms
    that allow speakers to take common ground into account.
    We again found activity in brain regions involved in
    mentalizing and visual perspective-taking (the bilateral
    temporo-parietal junction and medial prefrontal cortex).
    In addition, we found a cluster in the dorsolateral
    prefrontal cortex, a brain area that has previously been
    proposed to support the inhibition of task-irrelevant
    perspectives (Ramsey et al., 2013). Interestingly, these
    clusters are located outside the traditional language
    network. Our results suggest that speakers engage in
    mentalizing and visual perspective-taking during speech
    planning in order to compute common ground rather
    than monitoring and adjusting their initial egocentric
    utterance plans.
  • Willems, R. M., Frank, S., Nijhof, A., Hagoort, P., & van den Bosch, A. (2014). Prediction influences brain areas early in the neural language network. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Zaadnoordijk, L., Udden, J., Hulten, A., Hagoort, P., & Fonteijn, H. M. (2014). Between-subject variance in resting-state fMRI connectivity predicts fMRI activation in a language task. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.