-
Acheson, D. J., Veenstra, A., Meyer, A. S., & Hagoort, P. (2014). EEG pattern classification of semantic and syntactic influences on subject-verb agreement in production. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
Subject-verb agreement is one of the most common grammatical encoding operations in language production. In many languages, morphological inflection on verbs codes for the number of the head noun of a subject phrase (e.g., The key to the cabinets is rusty). Despite the relative ease with which subject-verb agreement is accomplished, people sometimes make agreement errors (e.g., The key to the cabinets are rusty). Such errors offer a window into the early stages of production planning. Agreement errors are influenced by both syntactic and semantic factors, and are more likely to occur when a sentence contains either conceptual or syntactic number mismatches. Little is known about the timecourse of these influences, however, and some controversy exists as to whether they are independent. The current study was designed to address these two issues using EEG. Semantic and syntactic factors influencing number mismatch were factorially manipulated in a forced-choice sentence completion paradigm. To avoid EEG artifact associated with speaking, participants (N=20) were presented with a noun phrase, and pressed a button to indicate which version of the verb ‘to be’ (is/are) should continue the sentence. Semantic number was manipulated using preambles that were semantically integrated or unintegrated. Semantic integration refers to the semantic relationship between nouns in a noun phrase, with integrated items promoting conceptual singularity. The syntactic manipulation was the number (singular/plural) of the local noun preceding the decision. This led to preambles such as “The pizza with the yummy topping(s)...” (integrated) vs. “The pizza with the tasty beverage(s)...” (unintegrated). Behavioral results showed effects of both Local Noun Number and Semantic Integration, with more errors and longer reaction times occurring in the mismatching conditions (i.e., plural local nouns; unintegrated subject phrases). Classic ERP analyses locked to the local noun (0-700 ms) and to the time preceding the response (-600 to 0 ms) showed no systematic differences between conditions. Despite this result, we assessed whether differences might emerge using multivariate pattern analysis (MVPA). Using the same epochs as above, support-vector machines with a radial basis function kernel were trained at the single-trial level to classify the difference between Local Noun Number and Semantic Integration conditions across time and channels. Results revealed that both conditions could be reliably classified at the single-subject level, and that classification accuracy was strongest in the epoch preceding the response. Classification accuracy was at chance when a classifier trained to dissociate Local Noun Number was used to predict Semantic Integration (and vice versa), providing some evidence of the independence of the two effects. Significant inter-subject variability was present in the channels and time points that were critical for classification, but earlier time points were more often important for classifying Local Noun Number than Semantic Integration. One result of this variability is that classification performed across subjects was at chance, which may explain the failure to find standard ERP effects. This study thus provides an important first test of semantic and syntactic influences on subject-verb agreement with EEG, and demonstrates that where classic ERP analyses fail, MVPA can reliably distinguish differences at the neurophysiological level. -
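The cross-decoding logic in the abstract above (a classifier trained on one condition contrast tested on the other, with transfer at chance taken as evidence of independence) can be sketched with scikit-learn. This is an illustrative toy on synthetic data, not the authors' pipeline: the trial counts, feature sizes, and effect strengths below are all invented, and the two effects are placed in disjoint feature subspaces by construction so that transfer fails.

```python
# Toy sketch of within- vs. cross-condition MVPA decoding (not the authors' code).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64  # hypothetical: trials x (channels * time points)

# Two independent binary condition labels per trial.
labels_number = rng.integers(0, 2, n_trials)       # "Local Noun Number"
labels_integration = rng.integers(0, 2, n_trials)  # "Semantic Integration"

# Each effect lives in its own feature subspace, so a classifier for one
# contrast should not transfer to the other.
X = rng.normal(size=(n_trials, n_features))
X[:, :8] += labels_number[:, None] * 1.5
X[:, 8:16] += labels_integration[:, None] * 1.5

clf = SVC(kernel="rbf", gamma="scale")

# Within-condition decoding: cross-validated accuracy well above chance.
acc_within = cross_val_score(clf, X, labels_number, cv=5).mean()

# Cross-decoding: train on one contrast, test on the other -> near chance (0.5).
clf.fit(X[:100], labels_number[:100])
acc_cross = (clf.predict(X[100:]) == labels_integration[100:]).mean()

print(f"within: {acc_within:.2f}, cross: {acc_cross:.2f}")
```

In the study itself the analogous comparison was run per subject on single-trial epochs; the sketch only shows why at-chance transfer supports independence of the two effects.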
Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Beat gestures modulate the processing of focused and non-focused words in context. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Information in language is organized according to a principle called information structure: new and important information (focus) is highlighted and distinguished from less important information (non-focus). Most studies so far have been concerned with how focused information is emphasized linguistically and suggest that listeners expect focus to be accented and process it more deeply than non-focus (Wang et al., 2011). Little is known about how listeners deal with non-verbal cues like beat gestures, which also emphasize the words they accompany, similarly to pitch accent. ERP studies suggest that beat gestures facilitate the processing of phonological, syntactic, and semantic aspects of speech (Biau & Soto-Faraco, 2013; Holle et al., 2012; Wang & Chu, 2013). It is unclear whether listeners expect beat gestures to be aligned with the information structure of the message. The present ERP study addresses this question by testing whether beat gestures modulate the processing of accented-focused vs. unaccented non-focused words in context in a similar way. Participants watched movies with short dialogues and performed a comprehension task. In each dialogue, the answer “He bought the books via Amazon” contained a target word (“books”) which was combined with a beat gesture, a control hand movement (e.g., a self-touching movement) or no gesture. Based on the preceding context, the target word was either in focus and accented, when preceded by a question like “Did the student buy the books or the magazines via Amazon?”, or in non-focus and unaccented, when preceded by a question like “Did the student buy the books via Amazon or via Marktplaats?”. The gestures started 500 ms prior to the target word. All gesture parameters (hand shape, naturalness, emphasis, duration, and gesture-speech alignment) were determined in behavioural tests. ERPs were time-locked to gesture onset to examine gesture effects, and to target word onset for pitch accent effects.
We applied a cluster-based random permutation analysis to test for main effects and gesture-accent interactions in both time-locking procedures. We found that accented words elicited a positive main effect between 300-600 ms post target onset. Words accompanied by a beat gesture and a control movement elicited sustained positivities between 200-1300 ms post gesture onset. These independent effects of pitch accent and beat gesture are in line with previous findings (Dimitrova et al., 2012; Wang & Chu, 2013). We also found an interaction between control gesture and pitch accent (1200-1300 ms post gesture onset), showing that accented words accompanied by a control movement elicited a negativity relative to unaccented words. The present data show that beat gestures do not differentially modulate the processing of accented-focused vs. unaccented non-focused words. Beat gestures engage a positive and long-lasting neural signature, which appears independent of the information structure of the message. Our study suggests that non-verbal cues like beat gestures play a unique role in emphasizing information in speech. -
Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Independent effects of beat gesture and pitch accent on processing words in context. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
-
Dimitrova, D. V., Snijders, T. M., & Hagoort, P. (2014). Neurobiological attention mechanisms of syntactic and prosodic focusing in spoken language. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
In spoken utterances important or new information is often linguistically marked, for instance by prosody or syntax. Such highlighting prevents listeners from skipping over relevant information. Linguistic cues like pitch accents lead to a more elaborate processing of important information (Wang et al., 2011). In a recent fMRI study, Kristensen et al. (2013) have shown that the neurobiological signature of pitch accents is linked to the domain-general attention network. This network includes the superior and inferior parietal cortex. It is an open question whether non-prosodic markers of focus (i.e., the important/new information) function similarly on the neurobiological level, that is, by recruiting the domain-general attention network. This study tried to address this question by testing a syntactic marker of focus. The present fMRI study investigates the processing of it-clefts, which highlight important information syntactically, and compares it to the processing of pitch accents, which highlight information prosodically. We further test if both linguistic focusing devices recruit domain-general attention mechanisms. In the language task, participants listened to short stories like “In the beginning of February the final exam period was approaching. The student did not read the lecture notes”. In the last sentence of each story, the new information was focused either by a pitch accent as in “He borrowed the BOOK from the library” or by an it-cleft like “It was the book that he borrowed from the library”. Pitch accents were pronounced without exaggerated acoustic emphasis. Two control conditions were included: (i) sentences with fronted focus like “The book he borrowed from the library”, to account for word order differences between sentences with clefts and accents, and (ii) sentences without prosodic emphasis like “He borrowed the book from the library”. In the attentional localizer task (adopted from Kristensen et al., 2013), participants listened to tones in a dichotic listening paradigm. A cue tone was presented in one ear and participants responded to a target tone presented either in the same or the other ear. In line with Kristensen et al. (2013), we found that in the localizer task cue tones activated the right inferior parietal cortex and the precuneus, and we found additional activations in the right superior temporal gyrus. In the language task, sentences with it-clefts elicited larger activations in the left and right superior temporal gyrus as compared to control sentences with fronted focus. For the contrast between sentences with vs. without pitch accent we observed activation in the inferior parietal lobe; this activation did, however, not survive correction for multiple comparisons. In sum, our findings show that syntactic focusing constructions like it-clefts recruit the superior temporal gyri, similarly to cue tones in the localizer task. Highlighting focus by pitch accent activated the parietal cortex in areas overlapping with those reported by Kristensen et al. and with those we found for cue tones in the localizer task. Our study provides novel evidence that prosodic and syntactic focusing devices likely have a distinct neurobiological signature in spoken language comprehension.
Additional information
http://www.neurolang.org/programs/SNL2014_Program_with_Abstracts.pdf -
Fitz, H., Hagoort, P., & Petersson, K. M. (2014). A spiking recurrent neural network for semantic processing. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Sentence processing requires the ability to establish thematic relations between constituents. Here we investigate the computational basis of this ability in a neurobiologically motivated comprehension model. The model has a tripartite architecture where input representations are supplied by the mental lexicon to a network that performs incremental thematic role assignment. Roles are combined into a representation of sentence-level meaning by a downstream system (semantic unification). Recurrent, sparsely connected, spiking networks were used which project a time-varying input signal (word sequences) into a high-dimensional, spatio-temporal pattern of activations. Local, adaptive linear read-out units were then calibrated to map the internal dynamics to desired output (thematic role sequences) [1]. Read-outs were adjusted on network dynamics driven by input sequences drawn from argument-structure templates with small variation in function words and larger variation in content words. Models were trained on sequences of 10K words for 200 ms per word at a 1 ms resolution, and tested on novel items generated from the language. We found that a static, random recurrent spiking network outperformed models that used only local word information without context. To improve performance, we explored various ways of increasing the model’s processing memory (e.g., network size, time constants, sparseness, input strength, etc.) and employed spiking neurons with more dynamic variables (leaky integrate-and-fire versus Izhikevich neurons). The largest gain was observed when the model’s input history was extended to include previous words and/or roles. Model behavior was also compared for localist and distributed encodings of word sequences. The latter were obtained by compressing lexical co-occurrence statistics into continuous-valued vectors [2].
We found that performance for localist input was superior even though distributed representations contained extra information about word context and semantic similarity. Finally, we compared models that received input enriched with combinations of semantic features, word-category, and verb sub-categorization labels. Counter-intuitively, we found that adding this information to the model’s lexical input did not further improve performance. Consistent with previous results, however, performance improved for increased variability in content words [3]. This indicates that the approach to comprehension taken here might scale to more diverse and naturalistic language input. Overall, the results suggest that active processing memory beyond pure state-dependent effects is important for sentence interpretation, and that memory in neurobiological systems might be actively computing [4]. Future work therefore needs to address how the structure of word representations interacts with enhanced processing memory in adaptive spiking networks. [1] Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14: 2531-2560. [2] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. Proceedings of the International Conference on Learning Representations, Scottsdale/AZ. [3] Fitz, H. (2011). A liquid-state model of variability effects in learning nonadjacent dependencies. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Austin/TX. [4] Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets. Philosophical Transactions of the Royal Society B, 367: 1971-1883. -
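The reservoir scheme the abstract describes (a fixed random recurrent network projects the input sequence into a high-dimensional state; only linear read-outs are calibrated [1]) can be illustrated with a minimal rate-based echo-state sketch in NumPy. The model in the abstract uses spiking neurons; the tanh units, network sizes, and toy one-step-memory task below are simplifying assumptions for illustration only.

```python
# Minimal rate-based reservoir sketch (illustrative; the paper uses spiking networks).
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res, n_out = 5, 100, 3  # hypothetical sizes

W_in = rng.normal(scale=0.5, size=(n_res, n_in))   # fixed input projection
W = rng.normal(size=(n_res, n_res))                # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1: fading memory

def run_reservoir(inputs):
    """Collect the reservoir state after each input vector in the sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Toy task: the target at time t is part of the *previous* input, so a
# memoryless mapping of the current input alone could not solve it.
T = 500
inputs = rng.normal(size=(T, n_in))
targets = np.roll(inputs[:, :n_out], 1, axis=0)

S = run_reservoir(inputs)

# Only the linear read-out is trained (ridge regression on reservoir states).
lam = 1e-3
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ targets)
pred = S @ W_out
mse = np.mean((pred[10:] - targets[10:]) ** 2)  # skip the roll wrap-around at t=0
```

A low read-out error here comes entirely from the untrained recurrent dynamics retaining the previous input, which is the "processing memory" the abstract probes by varying network size, time constants, and input history.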
Folia, V., Hagoort, P., & Petersson, K. M. (2014). An FMRI study of the interaction between sentence-level syntax and semantics during language comprehension. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Hagoort [1] suggested that the posterior temporal cortex is involved in the retrieval of lexical frames that form building blocks for syntactic unification, supported by the inferior frontal gyrus (IFG). FMRI results support the role of the IFG in the unification operations that are performed at the structural/syntactic [2] and conceptual/semantic levels [3]. While these studies tackle the unification operations within linguistic components, in the present event-related FMRI study we investigated the interplay between sentence-level semantics and syntax by adapting an EEG comprehension paradigm [4]. The ERP results showed typical P600 and N400 effects, while their combined effect revealed an interaction expressed in the N400 component ([CB-SE] - [SY-CR] > 0). Although the N400 component was similar in the correct and syntactic conditions (SY, CR), the combined effect was significantly larger than the effect of semantic anomaly alone. In contrast, the size of the P600 effect was not affected by an additional semantic violation, suggesting an asymmetry between semantic and syntactic processing. In the current FMRI study we characterize this asymmetry by means of a 2x2 experimental design that included the conditions: correct (CR), syntactic (SY), semantic (SE), and combined (CB) anomalies. Standard SPM procedures were used for analysis and only clusters significant at P < .05 family-wise error corrected are reported. The main effect of semantic anomaly ([CB+SE] > [SY+CR]) yielded activation in the anterior IFG (BA 45/47). The opposite contrast revealed the theory-of-mind and default-mode network. The main effect of syntactically correct sentences ([SE+CR] > [CB+SY]) showed significant activation in the IFG (BA 44/45), including the mid-anterior insula extending into the superior temporal poles (BA 22/38).
In addition, significant effects were observed in medial prefrontal/anterior cingulate cortex, posterior middle and superior temporal regions (BA 21/22), and the basal ganglia. The reverse contrast yielded activations in the MFG (BA 9/46), the inferior parietal region (BA 39/40), precuneus and the posterior cingulate region. The only region that showed a significant interaction ([CB-SE] - [SY-CR] > 0) was the left temporo-parietal region (BA 22/39/40). In summary, the results show that the IFG is involved in unification during comprehension. The effect of semantic anomaly and its implied unification load engages the anterior IFG, while the effect of syntactic anomaly and its implied unification failure engages the MFG. Finally, the results suggest that the syntax of gender agreement interacts with sentence-level semantics in the left temporo-parietal region. [1] Hagoort, P. (2005). On Broca, brain, and binding: A new framework. TICS, 9, 416-423. [2] Snijders, T. M., Vosse, T., Kempen, G., Van Berkum, J. J. A., Petersson, K. M., & Hagoort, P. (2009). Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex, 19, 1493-1503. doi:10.1093/cercor/bhn187. [3] Hagoort, P., Hald, L., Bastiaansen, M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304, 438-441. [4] Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15, 883-899. -
Fonteijn, H. M., Acheson, D. J., Petersson, K. M., Segaert, K., Snijders, T. M., Udden, J., Willems, R. M., & Hagoort, P. (2014). Overlap and segregation in activation for syntax and semantics: a meta-analysis of 13 fMRI studies. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2014). Assessing the link between speech perception and production through individual differences. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interaction. -
Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation. Talk presented at the Donders Discussions 2014. Nijmegen, Netherlands. 2014-10-30 - 2014-10-31.
Abstract
Speech production is one of the most complex motor skills, and involves close interaction between the perceptual and the motor system. Recently, prediction via forward models has been at the forefront of speech neuroscience research. For example, neuroimaging evidence has demonstrated that activation of the auditory cortex is suppressed for self-produced speech relative to listening without speaking. This finding has been explained via a forward model that predicts the auditory consequences of our own speech actions. An accurate prediction cancels out (part of) the auditory cortical activation.
The present study was designed to test two critical predictions from these frameworks: first, whether the cortical auditory response during speech production varies as a function of the acoustic distance between feedback and prediction, and second, whether this in turn is predictive of the amount of adaptation in people’s speech production. MEG was recorded while subjects performed an online speech imitation task. Each subject heard and imitated Dutch vowels, varying in their distance from the original vowel in both F1 and F2.
The results did not show clear evidence that the amount of suppression scaled with the distance between participants’ speech and the speech target. However, we found that subjects’ auditory response did correlate with imitation performance. This result supports the view that an enhanced auditory response may act as an error signal, driving subsequent speech adaptation. This suggests that individual differences in speaking-induced suppression (SIS) could act as a marker for subsequent adaptation. -
Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation: An MEG investigation. Poster presented at the International Workshop on Language Production 2014, Geneva.
-
Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation: An MEG investigation. Poster presented at the International Workshop on Language Production, Université de Genève, Geneva, Switzerland.
-
Guadalupe, T., Zwiers, M., Wittfeld, K., Teumer, A., Vasquez, A. A., Hoogman, M., Hagoort, P., Fernandez, G., Grabe, H., Fisher, S. E., & Francks, C. (2014). Asymmetry within and around the planum temporale is sexually dimorphic and influenced by genes involved in steroid biology. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Hagoort, P., & Indefrey, P. (2014). A meta-analysis on syntactic vs. semantic unification. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Hagoort, P. (2014). De magie van het talige brein. Talk presented at the Paradiso lectures series "Science of Fiction - zin en onzin van de wetenschap in films". Amsterdam, The Netherlands. 2014-02-16.
-
Hagoort, P. (2014). From intonation to information in brain space [Keynote lecture]. Talk presented at the 6th International Conference on Tone and Intonation in Europe 2014. Utrecht. 2014-09-10.
-
Hagoort, P. (2014). Het politieke brein. Talk presented at a Nationaal Initiatief Hersenen en Cognitie (NIHC) publiekslezing. Den Haag, The Netherlands. 2014-03-11.
-
Hagoort, P. (2014). The neurobiology of language beyond single words. Talk presented at the meeting of the Experimental Psychology Society. London. 2014-01-10.
-
Hagoort, P. (2014). The neurobiology of language beyond single words. Talk presented at CNBC Colloquium. Pittsburgh (PA-USA). 2014-05-08 - 2014-05-08.
Abstract
The classical Wernicke-Lichtheim-Geschwind model of the neurobiology of language was based on an analysis of single word perception and production. However, language processing involves a lot more than production and comprehension of single words. In this talk I will focus on the neurobiological infrastructure for processing language beyond single words. The Memory, Unification and Control (MUC) model provides a neurobiologically plausible account of the underlying neural architecture. I will focus on operations that unify the lexical building blocks into larger structures. MEG, fMRI, resting state connectivity data, and results from Psycho-Physiological Interactions will be discussed, suggesting a division of labour between temporal and inferior frontal cortex. These results indicate that Broca’s area and adjacent cortex play an important role in semantic and syntactic unification operations. I will discuss to what extent these operations are shared between language comprehension and production. I will also discuss fMRI results that indicate the insufficiency of the Mirror Neuron Hypothesis to explain language understanding. In short, I will sketch a picture of language processing from an embrained perspective.
-
Hagoort, P. (2014). The Neurobiology of language: Beyond the sentence given. Talk presented at the Leuven research Institute for Neuroscience & Disease (LIND). Leuven (Belgium). 2014-02-06.
-
Hagoort, P. (2014). Vijf kanttekeningen bij het liberalisme vanuit een cognitief-neurowetenschappelijk perspectief. Talk presented at the Prof. mr. B.M. Teldersstichting (Telders Foundation). Leusden. 2014-01-09.
-
Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Getting under your Skin: The role of perspective in narrative comprehension. Talk presented at Cognitive Futures in the Humanities, 2nd International Conference, 24-26 April 2014. University of Durham. 2014-04-24 - 2014-04-26.
Abstract
When we read literature, we often become immersed and dive into fictional worlds. The way we perceive those worlds is the result of skillful arrangement of linguistic features by story writers in order to create certain mental representations in the reader. Narrative perspective, or focalization, is an important tool for story writers to manipulate the reader's perception of a story. Despite the fact that narrative perspective is generally considered a fundamental element in narrative comprehension, its cognitive effects on story reading remain unclear. In previous research, various methodologies were employed to investigate the cognitive processes underlying narrative comprehension. However, studies used either self-report procedures or behavioral tests to investigate readers' reactions and refrained from combining methodologies. In the present study we combined skin conductance measurements and questionnaires while participants read short stories in 1st and 3rd person perspective. The results show that immersion, imagery and appreciation are higher when participants read stories in 1st person perspective. To our surprise, we found higher arousal for reading 3rd person perspective compared to 1st person perspective narratives. We find evidence that individual differences in arousal between the two conditions are related to how much readers empathize with the fictional characters. The combination of methodologies allows for a more differentiated understanding of the underlying mechanisms of immersion. In my talk, I want to highlight how we can gain more from interdisciplinary research and combinations of various methodologies to investigate cognitive processes underlying narrative comprehension under natural conditions. -
Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Narrative perspective influences immersion in fiction reading: Evidence from skin conductance response. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
-
Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Personal pronouns influence arousal during story comprehension. Embodiment and the reading experience. Poster presented at the Embodied and Situated Language Processing Conference 2014, Rotterdam.
-
Hartung, F., Hagoort, P., & Willems, R. M. (2014). Perspective taking and mental simulation in narrative comprehension [Invited talk]. Talk presented at the Max Planck Institute for Empirical Aesthetics, Language and Literature Department. Frankfurt am Main. 2014-06-23.
-
Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). The embodied reader: The effect of narrative perspective on literature understanding and appreciation. Talk presented at 14th Conference of the International Society for the Empirical Study of Literature and Media. Turin, Italy. 2014-07-21 - 2014-07-25.
-
Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. Talk presented at the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Quebec City, Canada. 2014-07-24 - 2014-07-26.
-
Heyselaar, E., Hagoort, P., & Segaert, K. (2014). Virtual agents as a valid replacement for human partners in sentence processing research. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Hultén, A., Schoffelen, J.-M., Udden, J., Lam, N., & Hagoort, P. (2014). Effects of sentence progression in event-related and rhythmic neural activity measured with MEG. Talk presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014). Amsterdam. 2014-08-27 - 2014-08-29.
-
Kunert, R., Willems, R. M., Casasanto, D., Patel, A., & Hagoort, P. (2014). Music and language syntax interact in Broca’s area: An fMRI study. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Lam, N., Schoffelen, J.-M., Hultén, A., & Hagoort, P. (2014). MEG-derived neural oscillatory activity differentiates sentence processing from word list processing in theta, beta, and gamma frequency bands across time and space. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Lam, N. H. L., Schoffelen, J.-M., Hulten, A., & Hagoort, P. (2014). MEG-derived neural oscillatory activity differentiates sentence processing from word list processing in theta, beta, and gamma frequency bands across time and space. Poster presented at BIOMAG 2014, Halifax, Canada.
-
Lam, N. H. L., Hulten, A., Udden, J., Schoffelen, J.-M., & Hagoort, P. (2013). Sentence processing reflected in oscillatory and event-related brain activity. Poster presented at the Fifth Annual Meeting of the Society for the Neurobiology of Language (SNL 2013), San Diego, CA, USA.
-
Levy, J., Hagoort, P., & Demonet, J.-F. (2014). A neuronal gamma oscillatory signature during morphological unification in the left occipito-temporal junction. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Morphology is the aspect of language concerned with
the internal structure of words. In the past decades,
a large body of masked priming (behavioral and
neuroimaging) data has suggested that the visual
word recognition system automatically decomposes
any morphologically complex word into a stem and
its constituent morphemes. Yet, it remains equivocal
whether this morphemic decomposition relies primarily
on orthography or on semantics. Here, we approached
the issue straightforwardly by applying a task of
morphological unification, that is, by assembling internal
(morphemic) units into a whole-word. Morphemic units
were sequentially presented while participants were
requested to judge whether their assemblage represented
real- or pseudo-words. Trials representing real words
were divided into words with a transparent (true) or a
non-transparent (pseudo) morphological relationship.
Morphological unification of truly suffixed words
proceeded more readily (shorter RTs and
higher accuracy). Additionally, oscillatory brain activity
was monitored with magnetoencephalography and
revealed that real, compared to pseudo morphological unification enhanced narrow gamma band oscillations
(60-85 Hz, 300-450 ms) in the left posterior occipitotemporal
junction, which is known as a cerebral hub for
visual word processing. This neural signature could not
be explained by mere automatic lexical processing (i.e.
stem perception); more likely, it reflects a semantic
access step during the morphological unification process.
These findings highlight a plausible retrieval of lexical
semantic associations for enabling true morphological
unification, and further instantiate the pivotal role of
the left occipito-temporal junction in visual word form
processing. -
Lockwood, G., Tuomainen, J., & Hagoort, P. (2014). Talking sense: Multisensory integration of Japanese ideophones is reflected in the P2. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
-
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS6). San Diego, Cal. 2014-07-08 - 2014-07-11.
-
Petersson, K. M., Folia, S. S. V., Sousa, A.-C., & Hagoort, P. (2014). Implicit structured sequence learning: An EEG study of the structural mere-exposure effect. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2014). Emotional context modulates embodied metaphor comprehension. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Schoot, L., Hagoort, P., & Segaert, K. (2014). Bidirectional syntactic priming in conversation: I am primed by you if you are primed by me. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, Scotland.
-
Schoot, L., Hagoort, P., & Segaert, K. (2014). Bidirectional syntactic priming: How much your conversation partner is primed by you determines how much you are primed by her. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
In conversation, speakers mimic each other’s (linguistic)
behavior. For example, speakers are likely to repeat
each other’s sentence structures: a phenomenon
known as syntactic priming. In a previous fMRI study
(Schoot et al., 2014) we reported that the magnitude
of priming effects is also mimicked between speakers.
Here, we follow up on that result. Specifically, we test
the hypothesis that in a communicative context, the
priming magnitude of your interlocutor can predict your
own priming magnitude because you have adapted
your individual susceptibility to priming to the other
speaker. 40 participants were divided into 20 pairs who
performed the experiment together. They were asked
to describe photographs to each other. Photographs
depicted two persons performing a transitive action
(e.g. a man hugging a woman). Participants were
instructed to describe the photographs with an active
or a passive sentence depending on the color-coding
of the photograph (stop light paradigm, Menenti et al.,
2011). Syntactic priming effects were measured in speech
onset latencies: a priming effect is found when speakers
are faster to produce sentences with the same structure
as the preceding sentence (i.e. two consecutive actives
or passives) than to produce sentences with a different
structure (active follows passive or vice versa). Before
participants performed the communicative task, we ran
a non-communicative pretest for each participant, to
measure their individual priming effect without influence
of the partner’s priming effect. To test whether speakers
influence each other’s syntactic priming magnitude in
conversation, we ran an rANCOVA with the syntactic
priming effect of each participant’s communicative
partner as a covariate. Results showed that there was
an interaction between this covariate and Syntactic
Repetition (F(1,38) = 435.93, p < 0.001). The more your
partner is primed by you, the more you are primed by
your partner. In a second analysis, we found that the
difference between paired speakers’ individual syntactic
priming effects (as measured in the pretest) predicted
how much speakers adapt their syntactic priming effects
when they are communicating with their partner in the
communicative experiment (β = -0.467, p < 0.001). That
means that if your partner’s individual susceptibility for
syntactic priming is stronger than yours, you will increase
your own priming magnitude in the communicative
context. On the other hand, if your partner’s individual
susceptibility for syntactic priming is less strong, you
will decrease your priming effect. Furthermore, the
strength of the increase or decrease is proportional to how
different you are from your partner to begin with. We
interpret the results as follows. Syntactic priming effects
in conversation are said to result from speakers aligning
their syntactic representations by mimicking sentence
structure (Pickering & Garrod, 2004; Jaeger & Snider,
2013). Here we show that on top of that, the magnitude
of syntactic priming effects is also mimicked between
interlocutors. Future research should focus on further
investigation of the neural correlates of this process, for
example with fMRI hyper-scanning. Indeed, our findings
stress the importance of studying language processing in
real, communicative contexts, which is now also possible
in neuroimaging paradigms. -
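The onset-latency priming measure described in the Schoot et al. abstract above can be sketched in a few lines. This is a hypothetical illustration, not the authors' analysis code; the trial encoding and the helper name `priming_effect` are assumptions.

```python
# Hypothetical sketch of the priming-effect measure: a speaker's priming
# effect is the mean speech onset latency on structure-switch trials
# (active follows passive or vice versa) minus the mean latency on
# structure-repeat trials (two consecutive actives or passives).

def priming_effect(trials):
    """trials: list of (structure, onset_ms) tuples in presentation order.
    Assumes both repeat and switch trials occur at least once.
    Returns mean switch latency minus mean repeat latency (ms)."""
    repeat, switch = [], []
    for (prev_s, _), (cur_s, cur_rt) in zip(trials, trials[1:]):
        (repeat if cur_s == prev_s else switch).append(cur_rt)
    return sum(switch) / len(switch) - sum(repeat) / len(repeat)

effect = priming_effect([
    ("active", 900), ("active", 850),
    ("passive", 1000), ("passive", 940), ("active", 990),
])
```

A positive value means the speaker was faster on structure repeats than on switches, i.e. was primed.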
Segaert, K., Mazaheri, A., Scheeringa, R., & Hagoort, P. (2014). Oscillatory dynamics of syntactic unification. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Segaert, K., & Hagoort, P. (2014). Syntactic priming: A lexical boost, cumulativity, an inverse preference effect and... a positive preference effect. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, Scotland.
-
Simanova, I., Hagoort, P., Oostenveld, R., & van Gerven, M. (2014). Surface-based searchlight mapping of modality-independent responses to semantic categories using high-resolution fMRI. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Previous studies have shown the possibility to decode
the semantic category of an object from the fMRI signal in
different modalities of object presentation. Furthermore,
by generalizing a classifier across different modalities
(for instance, from pictures to written words), cortical
structures that process semantic information in an
amodal fashion have been identified. In this study we
employ high-resolution fMRI in combination with
surface-based searchlight mapping to further explore
the architecture of modality-independent responses.
Stimuli of 2 semantic categories (animals and tools)
were presented in 2 modalities: photographs and
written words. Stimuli were presented in 40-second
blocks with 10-second intervals. Subjects (N=3) were
instructed to judge whether each stimulus within a
block was semantically consistent with the others. The
experiment also included 8 free recall blocks, in which
the name of a category appeared on the screen for 2 seconds,
followed by 40 seconds of a blank screen. In these blocks,
subjects were instructed to covertly recall all entities
from the probed category that they had seen during the
experiment. Subjects were scanned with a 7 Tesla MRI
scanner, using a 3D EPI sequence with an isotropic resolution
of 1.5 mm. In each subject, a reconstruction of the cortical
surface was performed. Then, for each vertex on the
surface, a set of adjacent voxels in the functional volume
was assigned. Subsequently, a linear support vector
machine classifier was used to decode object category in
each surface-based patch. Generalization analysis across
picture and written word presentation was performed,
where the classifier was trained on the fMRI data from
blocks of written words, and tested on the data from picture blocks, and vice versa. The second analysis was
performed on the free recall blocks, where the classifier
was trained on merged data from pictures and written
words blocks, and tested on the free recall blocks.
Further, we explored how the decoding accuracy in the
inferior temporal cortex changes with the diameter of the
searchlight patch. Since surface-based voxel grouping
takes into account the cortical folding and ensures that
voxels belonging to different gyri do not fall in the same
searchlight group, it allows us to ask at what spatial
scale the modality-independent information is
represented. The cross-modal analysis in
all three subjects revealed a cluster of voxels in inferior
temporal cortex (lateral fusiform and inferotemporal gyri)
and posterior middle temporal gyrus. The topography
of significant clusters also suggested involvement of
the inferior frontal gyrus, lateral prefrontal cortex, and
medial prefrontal cortex. Interestingly, these areas were
most evident in the free recall test, although the
searchlight maps of the three subjects showed substantial
individual differences in this analysis. Overall, the data
yield a picture similar to previous research, highlighting
the role of IT/pMTG and prefrontal cortex in cross-modal
semantic representation. We further extended
previous research by showing that the classification
accuracy in these areas decreases with the increase of
the searchlight patch size. These results indicate that the
modality-independent categorical activations in the IT
cortex are represented on the spatial scale of millimetres. -
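The cross-modal generalization logic in the Simanova et al. abstract above (train a classifier on one presentation modality, test it on the other) can be sketched as follows. This is a toy illustration with made-up two-voxel patterns; a nearest-centroid classifier stands in for the linear support vector machine used in the study.

```python
# Sketch of cross-modal generalization: fit class centroids on responses
# to written words, then classify responses to pictures (and vice versa).
# Voxel patterns are plain lists of floats; all data here are invented.

def centroid(patterns):
    """Element-wise mean of a list of equal-length voxel patterns."""
    return [sum(v) / len(v) for v in zip(*patterns)]

def train(data):
    """data: dict label -> list of voxel patterns. Returns label -> centroid."""
    return {label: centroid(p) for label, p in data.items()}

def classify(model, pattern):
    """Assign `pattern` to the label with the nearest (squared-distance) centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(pattern, c))
    return min(model, key=lambda label: dist(model[label]))

# Train on the "written word" modality, test on the "picture" modality.
word_data = {
    "animal": [[1.0, 0.1], [0.9, 0.2]],
    "tool": [[0.1, 1.0], [0.2, 0.8]],
}
model = train(word_data)
picture_trial = [0.8, 0.3]  # response to a picture of an animal
label = classify(model, picture_trial)
```

Running the trained model on patterns from the held-out modality, and swapping the roles of the two modalities, is the generalization analysis the abstract describes.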
Stolk, A., Noordzij, M., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). How minds meet: Cerebral coherence between communicators marks the emergence of meaning. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Ten Velden, J., Acheson, D. J., & Hagoort, P. (2014). Does language production use response conflict monitoring? Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Although monitoring and subsequent control have
received quite some attention for cognitive systems
other than language, few studies have probed the neural
mechanisms underlying monitoring and control in overt
speech production. Recently, it has been hypothesized
that conflict signals within the language production
system might serve as cues to increase monitoring
and control (Nozari, Dell & Schwartz, 2011; Cognitive
Psychology). This hypothesis was linked directly to the
conflict monitoring hypothesis in non-linguistic action
control, which hypothesizes that one of the critical
cues to self-monitoring is the co-activation of multiple
response candidates (Yeung, Botvinick & Cohen, 2004;
Psychological Review). A region of the medial prefrontal
cortex (mPFC), the dorsal anterior cingulate cortex
(dACC), as well as the basal ganglia have consistently
been observed for both errors of commission and high
conflict. Hence, these regions serve as an important
testing ground for whether comparable monitoring
mechanisms are at play in language production. The
current study tests whether these regions are also
implicated in response to speech errors and high conflict
situations that precede the response. 32 native Dutch
subjects performed a tongue twister task and a factorial
combination of the Simon and Flanker task. In the tongue
twister task, participants overtly produced a string of
4 nonwords 3 times. In tongue twister trials (TT), the
onset phonemes followed a pattern of A-B-B-A, whereas
rhymes followed an A-B-A-B pattern (e.g. wep ruust
rep wuust). In non-tongue twister trials (nonTT), the
nonwords contained minimal phonological overlap
(e.g. jots brauk woelp zieg). These two conditions
correspond to a high conflict and a low conflict condition
respectively. In an arrow version of the Simon-
Flanker task, subjects responded to the direction of a
middle arrow while flanking arrows faced in the same
(i.e., congruent; >>>>>) or different (i.e., incongruent;
>><>>) directions. These stimuli were presented either
on the right side or the left side of the screen, potentially
creating a spatial incongruency with their response
as well. Behavioral results demonstrated sensitivity
to conflict in both tasks, as subjects generated more
speech errors in tongue twister trials than non-tongue
twister trials, and were slower on incongruent relative
to congruent flanker trials. No effect of
spatial incongruency was observed. Neuroimaging
results showed that activation in the ACC significantly
increased in response to the high conflict flanker trials.
In addition, regions of interest analyses in the basal
ganglia showed a significant difference between correct
high and low conflict flanker trials in the left putamen
and right caudate nucleus. For the tongue twister task,
a large region in the mPFC - overlapping with the ACC
region from the flanker task - was significantly more
active in response to errors than correct trials. Significant
differences were also found in the left and right caudate
nuclei and left putamen. No differences were found
between correct TT and nonTT trials. The study therefore
provides evidence for overlap in monitoring between
language production and non-linguistic action at the
time of response (i.e. errors), but little evidence for
pre-response conflict engaging the same system. -
Ten Velden, J., Acheson, D. J., & Hagoort, P. (2014). Are there shared mechanisms of response conflict monitoring in speech production and choice reaction tasks? Poster presented at the International Workshop on Language Production 2014, Geneva.
-
Udden, J., Hulten, A., Fonteijn, H. M., Petersson, K. M., & Hagoort, P. (2014). The middle temporal and inferior parietal cortex contributions to inferior frontal unification across complex sentences. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Vanlangendonck, F., Willems, R. M., & Hagoort, P. (2014). Taking the listener into account: Computing common ground requires mentalising. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
In order to communicate efficiently, speakers have to
take into account which information they share with their
addressee (common ground) and which information
they do not share (privileged ground). Two views
have emerged about how and when common ground
influences language production. In one view, speakers
take common ground into account early on during
utterance planning (e.g., Brennan & Hanna, 2009).
Alternatively, it has been proposed that speakers’ initial
utterance plans are egocentric, but that they monitor
their plans and revise them if needed (Horton & Keysar,
1996). In an fMRI study, we investigated which neural
mechanisms support speakers’ ability to take into account
common ground, and at what stage during speech
planning these mechanisms come into play. We tested
22 pairs of native Dutch speakers (20 pairs retained in
the analysis), who were assigned to the roles of speaker
or listener for the course of the experiment. The speaker
performed the experiment in the MRI scanner, while the
listener sat behind a computer in the MRI control room.
The speaker performed a communicative and a
non-communicative task in the scanner. The communicative
task was a referential communication game in which
the speaker described objects in an array to the listener.
The listener could hear the speaker’s descriptions over
headphones and tried to select the intended object on
her screen using a mouse. We manipulated common
ground within the communicative task. In the privileged
ground condition, the speaker saw additional competitor
objects that were occluded from the listener’s point of
view. In order to communicate efficiently, the speaker
had to ignore the occluded competitor objects. In the
control conditions, all relevant objects were in common
ground. The non-communicative task was identical to
the communicative task, except that the speaker was
instructed to describe the objects without the listener
listening. When comparing the BOLD response during
speech planning in the communicative and the
non-communicative tasks, we found activations in the right
medial prefrontal cortex and bilateral insula, brain areas
involved in mentalizing and empathy. These results
confirm previous neuroimaging research that found that
speaking in a communicative context as compared to a
non-communicative context activates brain areas that
are involved in mentalizing (Sassa et al., 2007; Willems
et al., 2010). We also contrasted brain activity in the
privileged ground and control conditions within the
communicative task to tap into the neural mechanisms
that allow speakers to take common ground into account.
We again found activity in brain regions involved in
mentalizing and visual perspective-taking (the bilateral
temporo-parietal junction and medial prefrontal cortex).
In addition, we found a cluster in the dorsolateral
prefrontal cortex, a brain area that has previously been
proposed to support the inhibition of task-irrelevant
perspectives (Ramsey et al., 2013). Interestingly, these
clusters are located outside the traditional language
network. Our results suggest that speakers engage in
mentalizing and visual perspective-taking during speech
planning in order to compute common ground rather
than monitoring and adjusting their initial egocentric
utterance plans. -
Willems, R. M., Frank, S., Nijhof, A., Hagoort, P., & van den Bosch, A. (2014). Prediction influences brain areas early in the neural language network. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Zaadnoordijk, L., Udden, J., Hulten, A., Hagoort, P., & Fonteijn, H. M. (2014). Between-subject variance in resting-state fMRI connectivity predicts fMRI activation in a language task. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Basnakova, J., Weber, K., Petersson, K. M., Hagoort, P., & Van Berkum, J. J. A. (2010). Understanding speaker meaning: Neural correlates of pragmatic inferencing in language comprehension. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
Abstract
Introduction: Natural communication is not only literal, but to a large extent also inferential. For example, sometimes people say "It is hard to give a good presentation" to actually mean "Your talk was a mess!", and listeners need to infer the speaker’s hidden message. In spite of the pervasiveness of this phenomenon in everyday communication, and even though the hidden meaning is often what it’s all about, very little is known about how the brain supports the comprehension of indirect language. What are the neural systems involved in the inferential process, and how are they different from those involved in word- and sentence-level meaning processing? We investigated the neural correlates of this so-called pragmatic inferencing in an fMRI study involving natural spoken dialogue. Methods: As a test case, we focused on the inferences needed to understand indirect replies. 18 native listeners of Dutch listened to dialogues ending in a question-answer (QA) pair. The final and critical utterance, e.g., "It is hard to give a good presentation", had different meanings depending on the dialogue context and the immediately preceding question: (1) Direct reply: Q: "How is it to give a good presentation?" A: "It is hard to give a good presentation" (2) Indirect reply, neutral: Q: "Will you give a presentation at the conference?" (rather than a poster) A: "It is hard to give a good presentation" (3) Indirect reply, face-saving: Q: "Did you like my presentation?" A: "It is hard to give a good presentation" While one of the indirect conditions was neutral, the other involved a socio-emotional aspect, as the reason for indirectness was to 'save one’s face' (as in excuses or polite refusals). Participants were asked to pay attention to the dialogues and, to ensure this, occasionally received a comprehension question (on filler items only). No other task demands were imposed.
Results: Relative to direct replies in exchanges like (1), the indirect replies in exchanges like (2) and (3) activated brain structures associated with theory of mind and inferencing: right angular gyrus (TPJ), right dorsomedial prefrontal / frontal cortex (SMA, ACC). Both types of indirect replies also bilaterally activated the insula, an area known to be involved in empathy and affective processing. Moreover, both types of indirect replies recruited bilateral inferior frontal gyrus, thought to play a role in situation model updating. The comparison between neutral (2) and face-saving (3) indirect replies revealed that the presumed affective load of the face-saving replies activated just one additional area: right inferior frontal gyrus; we did not see any activation in classic affect-related areas. Importantly, we used the same critical sentences in all conditions. Our results can thus not be explained by lexico-semantic or other (e.g. syntactic, word frequency) factors. Conclusions: To extend neurocognitive research on meaning in language beyond the level of straightforward literal utterances, we investigated the neural correlates of pragmatic inferencing in an fMRI study involving indirect replies in natural spoken dialogue. Our findings reveal that the areas used to infer the intended meaning of an implicit message are partly different from the classic language network. Furthermore, the identity of the areas involved is consistent with the idea that inferring hidden meanings requires taking the speaker’s perspective. This confirms the importance of perspective taking in language comprehension, even in a situation where the listener is not the one addressed. Also, as the areas recruited by indirect replies generally do not light up in standard fMRI sentence comprehension paradigms, our study testifies to the importance of studying language understanding in richer contexts in which we can tap aspects of pragmatic processing, beyond the literal code.
Additional information: http://ww3.aievolution.com/hbm1001/index.cfm?do=abs.viewAbs&abs=4501 -
Bastiaansen, M. C. M., & Hagoort, P. (2010). Frequency-based segregation of syntactic and semantic unification? Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
Abstract
Introduction: During language comprehension, word-level information has to be integrated (unified) into an overall message-level representation. Theoretical accounts (e.g. Jackendoff, 2007; see also Hagoort, 2005) propose that unification operations occur in parallel at the phonological, syntactic and semantic levels. Meta-analysis of fMRI studies (Bookheimer, 2002) shows that largely overlapping areas in left inferior frontal gyrus (LIFG) are activated during the different types of unification operations. This raises the question of how the brain functionally segregates these different unification operations. Previously, we have established that semantic unification modulates oscillatory EEG activity in the gamma frequency range (Hagoort, Hald, Bastiaansen, & Petersson, 2004; Hald, Bastiaansen, & Hagoort, 2005). More recently, we have shown that syntactic unification modulates MEG activity in the lower beta frequencies (13-18 Hz). Here we report a fully within-subjects replication of these findings. Methods: We recorded the EEG (64 channels, filtered from 0.1 - 100 Hz) of 30 subjects while they read sentences presented in serial visual presentation mode. Sentences were either correct (COR), contained a semantic violation (SEM), or a syntactic (grammatical gender agreement) violation (SYN). Two additional conditions were constructed on the basis of COR sentences by (1) replacing all the nouns, verbs and adjectives with semantically unrelated ones that were matched for length and frequency, making the sentences semantically uninterpretable (global semantic violation, GSEM), and (2) randomly re-assigning word order of the COR sentences, so as to remove overall syntactic structure from the sentences (global syntactic violation, GSYN). Here we only report the results of analyses on the COR, GSEM and GSYN conditions.
EEG epochs from 1s preceding sentence onset to 6s after sentence onset (corresponding to the first 10 words in each sentence) were extracted from the EEG recordings, and epochs with artifacts were removed. A multitaper-based time-frequency (TF) analysis of power changes (Mitra & Pesaran, 1999) was performed, separately for a low-frequency window (1-30 Hz) and high-frequency window (25-100 Hz). Significant differences in the TF representations between any two conditions were established using non-parametric random permutation analysis (Maris & Oostenveld, 2007). Results: Semantic unification: gamma. Figure 1 presents the comparison between the TF responses of the semantically intact condition (COR) and those of the semantically incorrect ones (GSEM, but also GSYN, since the absence of syntactic structure makes the sentence semantically uninterpretable as well). Both the COR-GSEM and the COR-GSYN contrasts show significantly larger power for the semantically correct sentences in a frequency range around 40 Hz (as well as some less consistent differences in higher frequencies). No differences were observed between GSEM and GSYN in the frequency range 25-100 Hz. Syntactic unification: beta. Figure 2 presents the comparison between the TF responses of the syntactically correct conditions (COR and GSEM) and the incorrect one (GSYN). Both the COR-GSYN and the GSEM-GSYN contrasts show larger power in the 13-18 Hz frequency range for the syntactically correct sentences. No significant differences were observed between COR and GSEM in the frequency range 1-30 Hz. Conclusions: During the comprehension of correct sentences, both low beta power (13-18 Hz) and gamma power (here around 40 Hz) slowly increase as the sentence unfolds. When a sentence is devoid of syntactic structure, the beta increase is absent. When a sentence is devoid of semantically coherent structure, the gamma increase is absent.
Together the data show a fully within-subjects confirmation of previously obtained results in separate experiments (for review, see Bastiaansen & Hagoort, 2006). This suggests that neuronal synchronization in LIFG at gamma frequencies is related to semantic unification, whereas synchronization at beta frequencies is related to syntactic unification. Thus, our data are consistent with the notion of functional segregation through frequency-coding during unification operations in language comprehension. References: Bastiaansen, M. (2006), 'Oscillatory neuronal dynamics during language comprehension.', Prog Brain Res, vol. 159, pp. 179-196. Bookheimer, S. (2002), 'Functional MRI of language: new approaches to understanding the cortical organization of semantic processing', Annu Rev Neurosci, vol. 25, pp. 151-188. Hagoort, P. (2005), 'On Broca, brain, and binding: a new framework.', Trends Cogn Sci, vol. 9, no. 9, pp. 416-423. Hagoort, P. (2004), 'Integration of word meaning and world knowledge in language comprehension', Science, vol. 304, no. 5669, pp. 438-441. Hald, L. (2005), 'EEG theta and gamma responses to semantic violations in online sentence processing', Brain & Language, vol. 96, no. 1, pp. 90-105. Jackendoff, R. (2007), 'A Parallel Architecture perspective on language processing', Brain research, vol. 1146, pp. 2-22. Maris, E. (2007), 'Nonparametric statistical testing of EEG- and MEG-data', J Neurosci Methods, vol. 164, no. 1, pp. 177-190. Mitra, P. (1999), 'Analysis of dynamic brain imaging data.', Biophys. J., vol. 76, no. 2, pp. 691-708.
Additional information: http://ww3.aievolution.com/hbm1001/index.cfm?do=abs.viewAbs&abs=3659 -
Bastiaansen, M. C. M., & Hagoort, P. (2010). Frequency-based segregation of syntactic and semantic unification? Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
During language comprehension, word-level information has to be integrated (unified) into an overall message-level representation. Unification operations occur in parallel at the phonological, syntactic and semantic levels, and meta-analyses of fMRI studies show that largely overlapping areas in left inferior frontal gyrus (LIFG) are activated during different unification operations. How does the brain functionally segregate these different operations? Previously we established that semantic unification modulates oscillatory EEG activity in the gamma frequency range, and that syntactic unification modulates MEG in the beta range. We propose that there is functional segregation of syntactic and semantic unification in LIFG based on frequency-coding. We report a within-subjects replication of the previous findings. Subjects read visually presented sentences that were either correct (COR), semantically incorrect (by replacing the nouns, verbs, adjectives of the COR sentences with semantically unrelated ones) or semantically and syntactically incorrect (by randomizing word order of the COR sentences). Time-frequency analysis of power was performed on EEG epochs corresponding to entire sentences. The COR-GSEM and the COR-GSYN contrasts show larger power for the semantically correct sentences in a frequency range around 40 Hz. The COR-GSYN and the GSEM-GSYN contrasts show larger power in the 13-18 Hz frequency range for the syntactically correct sentences. In sum, during the comprehension of correct sentences, both low beta power (13-18 Hz) and gamma power (here around 40 Hz) increase. When a sentence is devoid of syntactic structure, the beta increase is absent; when there is no semantic structure, the gamma increase is absent. Thus, our data are consistent with the notion of functional segregation through frequency-coding during unification operations. -
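The band contrasts in the Bastiaansen & Hagoort abstracts above rest on estimating spectral power in a low-beta (13-18 Hz) and a gamma (around 40 Hz) band. Below is a minimal, hypothetical sketch of such a band-power estimate using a plain DFT; it is not the multitaper method the authors used, and the sampling rate and test signal are invented.

```python
# Rough sketch: estimate mean power of an epoch in a given frequency band
# by summing squared DFT coefficients whose bin frequencies fall in [lo, hi].
import math

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` (sampled at `fs` Hz) in [lo, hi] Hz."""
    n = len(signal)
    powers = []
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            powers.append((re ** 2 + im ** 2) / n)
    return sum(powers) / len(powers)

fs = 200  # Hz, invented sampling rate
epoch = [math.sin(2 * math.pi * 40 * t / fs) for t in range(fs)]  # pure 40 Hz tone
gamma = band_power(epoch, fs, 35, 45)   # gamma band around 40 Hz
beta = band_power(epoch, fs, 13, 18)    # low-beta band
```

For a pure 40 Hz test tone the gamma-band estimate dominates the beta-band estimate, which is the shape of the contrast the abstracts report for semantically coherent sentences.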
Folia, V., Hagoort, P., & Petersson, K. M. (2010). Broca's region: Implicit sequence learning and natural syntax processing. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
In an event-related fMRI study, we examined the overlap of the implicit processing of structured sequences, generated by a simple right-linear artificial unification grammar, with natural-syntax-related variability in the same subjects. Research investigating rule learning of potential linguistic relevance through artificial syntax often uses performance feedback and/or explicit instruction concerning the underlying rules. It is assumed that this approach ensures the right type of 'rule-following' because the rules are either explicitly provided to the subjects or explicitly discovered by the subjects during trial-and-error learning with feedback. In this work, we use a novel implicit preference classification task based on the structural mere exposure effect. Under conditions that in important respects are similar to those of natural language development (i.e., no explicit learning or teaching instruction, and no performance feedback), 32 subjects were exposed for 5 days to grammatical sequences during an immediate short-term memory task. On day 5, a preference classification test was administered, in which new sequences were presented. In addition, natural language data were acquired in the same subjects. Implicit preference classification was sensitive enough to show robust behavioral and fMRI effects. Preference classification of structured sequences activated Broca's region (BA 44/45) significantly, and was further activated by artificial syntactic violations. The effects related to artificial syntax in BA 44/45 were identical when we masked these with activity related to natural syntax processing. Moreover, the medial temporal lobe was deactivated during artificial syntax processing, consistent with the view that implicit processing does not rely on declarative memory mechanisms supported by the medial temporal lobe.
In summary, we show that implicit acquisition of structured sequence knowledge results in the engagement of Broca's region during structured sequence processing. We conclude that Broca's region is a generic on-line sequence processor integrating information, in an incremental and recursive manner, independent of whether the sequences processed are structured by a natural or an artificial syntax. -
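The abstract does not reproduce the grammar itself, but the idea of a simple right-linear grammar generating the stimulus sequences can be illustrated with a small sketch. The production rules below are hypothetical, not those used in the study:

```python
import random

# Hypothetical right-linear grammar: each rule emits one terminal symbol and
# optionally transitions to another nonterminal (None = stop). NOT the
# grammar used in the study; for illustration only.
RULES = {
    "S": [("M", "A"), ("V", "B")],
    "A": [("X", "A"), ("R", "B")],
    "B": [("T", "B"), ("V", None), ("R", None)],
}

def generate(start="S", rng=random):
    """Generate one grammatical sequence by following rules from `start`."""
    out, state = [], start
    while state is not None:
        terminal, state = rng.choice(RULES[state])
        out.append(terminal)
    return "".join(out)

def is_grammatical(seq, state="S"):
    """Check a sequence by depth-first search over the production rules."""
    if not seq:
        return False
    for terminal, nxt in RULES[state]:
        if seq[0] == terminal:
            if nxt is None and len(seq) == 1:
                return True
            if nxt is not None and is_grammatical(seq[1:], nxt):
                return True
    return False
```

Because every rule emits exactly one terminal before handing control to at most one nonterminal, the grammar is right-linear and hence equivalent to a finite-state machine; grammatical exposure items and new test sequences for a preference classification task could be drawn from `generate`, and violation items from strings that fail `is_grammatical`.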
Franke, B., Rijpkema, M., Arias Vasquez, A., Veltman, J. A., Brunner, H. G., Hagoort, P., & Fernandez, G. (2010). Genome-wide association study of regional brain volume suggests involvement of known psychiatric candidate genes, identifies new candidates for psychiatric disorders, and points to potential modes of their action. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
Though most psychiatric disorders are highly heritable, it has been hard to identify the genetic risk factors involved, which most likely have small individual effect sizes. A possible way to aid identification of risk genes is the use of intermediate phenotypes. These are supposed to be closer to the biological substrate(s) of the disorder than psychiatric diagnoses, and therefore less genetically complex. Intermediate phenotypes can be defined, e.g., at the level of brain function and of regional brain structure. Both are highly heritable, and regional brain structure is linked to brain function. Within the Brain Imaging Genetics (BIG) study at the Radboud University Nijmegen (Medical Centre) we performed a genome-wide association study (GWAS) in 1000 of the currently 1400 healthy study participants. For all BIG participants, structural MRI brain images were available. Gray and white matter volumes were determined by brain segmentation using SPM software. FSL-FIRST was used to assess volumes of specific brain structures. Genotyping was performed on Affymetrix 6.0 arrays. The results implicate known candidates from earlier GWAS and candidate gene studies of mental disorders in the regulation of regional brain structure. E.g., polymorphisms in CDH13, featuring among the top findings of GWAS in disorders including ADHD, addiction and schizophrenia, were found to be associated with amygdala volume. The ADHD candidate gene SNAP25 was found to be associated with total brain volume. In conclusion, the use of intermediate phenotypes based on (subcortical) brain volumes may shed more light on pathways from genes to diseases, but can also be expected to facilitate gene identification in psychiatric disorders. -
Hagoort, P. (2010). Beyond Broca, brain, and binding. Talk presented at Symposium Marta Kutas. Nijmegen. 2010-05-19 - 2010-05-20.
-
Hagoort, P. (2010). Beyond the Language given: Language processing from an embrained perspective. Talk presented at SISSA colloquium. Trieste, Italy. 2010-12-13.
-
Hagoort, P. (2010). Breintaal. Talk presented at Club of Spinoza Prize winners. Rijnsburg, The Netherlands. 2010-12-01.
-
Hagoort, P. (2010). De talige netwerken in ons brein. Talk presented at the Wetenschappelijke Vergadering en Algemene Ledenvergadering van de Nederlandse Vereniging voor Neurologie (NVN). Amsterdam, The Netherlands. 2010-11-04 - 2010-11-04.
-
Hagoort, P. (2010). Communication beyond the language given. Talk presented at International Neuropsychological Symposium. Ischia, Italy. 2010-06-22 - 2010-06-26.
-
Hagoort, P. (2010). [Organizing committee and session chair]. Second Annual Neurobiology of Language Meeting [NCL 2010]. San Diego, CA, 2010-11-11 - 2010-11-12.
-
Hagoort, P. (2010). In gesprek met ons brein. Talk presented at Paradisolezingen 2010. Amsterdam. 2010-03-28.
-
Hagoort, P. (2011). Language processing: A disembodied perspective [Keynote lecture]. Talk presented at The Workshop Embodied & Situated Language Processing [ESLP 2010]. Bielefeld, Germany. 2011-08-25 - 2011-08-27.
-
Hagoort, P. (2010). The science of human nature. Talk presented at Anthos Conference. Noordwijk, The Netherlands. 2010-01-08.
-
Hagoort, P., Segaert, K., Weber, K. M., De Lange, F. P., & Petersson, K. M. (2010). The suppression of repetition enhancement: A review. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
Repetition suppression is generally accepted as the neural correlate of behavioural priming and is often used to selectively identify the neuronal representations associated with a stimulus. However, this does not explain the large number of repetition enhancement effects observed under very similar conditions. Based on a review of a large set of studies, we propose several variables biasing repetition effects towards enhancement instead of suppression. On the one hand, there are stimulus variables which influence the direction of repetition effects: visibility (e.g., in the case of degraded stimuli, perceptual learning occurs); novelty (e.g., in the case of unfamiliar stimuli, a novel network formation process occurs); and timing intervals (e.g., repetition effects are sensitive to stimulus onset asynchronies). On the other hand, repetition effects are not solely automatic processes, triggered by particular types or sequences of stimuli. The brain is continuously and actively filtering, attending to and interpreting the information provided by our senses. Consequently, internal state variables like attention, expectation and explicit memory modulate repetition effects towards enhancement versus suppression. Current models of repetition suppression, i.e., the accumulation, fatigue and sharpening models, have so far left out top-down factors and cannot, or can only partially, account for repetition enhancement effects. Instead, we propose that models which incorporate both bottom-up stimulus factors and top-down cognitive factors are called for in order to better understand repetition effects. A good candidate is the predictive coding model, in which sensory evidence is interpreted according to subjective biases and statistical accounts of past encounters. -
Hagoort, P. (2010). The modular ghost in the recurrent connection machine: Where is the modular mind in a brain full of recurrent connectivity?. Talk presented at The Modularity of Mind: Revisions and Prospects. Heinrich-Heine University Düsseldorf, Germany. 2010-10-29.
-
Händel, B., Van Leeuwen, T. M., Jensen, O., & Hagoort, P. (2010). Lateralization of alpha oscillations in grapheme-color synaesthetes suggests altered color processing. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
In grapheme-color synaesthesia, the percept of a particular grapheme causes additional experiences of color. To investigate this interesting integration of modalities, brain activity was recorded from 7 synaesthetes and matched controls using magnetoencephalography. Subjects had to report the color change of one of two letters presented left and right of a fixation cross. One of the letters was neutral (eliciting no color percept); the other one could either be neutral, colored, or elicit synaesthesia (in synaesthetes). Additionally, the side of color change was validly or invalidly cued. As expected, in both subject groups 10 Hz alpha oscillations decreased contralateral to the attended side, leading to an alpha lateralization. Additionally, controls as well as synaesthetes showed a stronger alpha reduction if the attended letter was colored, indicating that color increased the attentional allocation. Interestingly, synaesthetes showed the same alpha decrease for synaesthetic color. While color on the attended side reduced alpha power in controls and synaesthetes, color on the unattended side reduced alpha power only in synaesthetes. Psychophysical measures likewise indicated changed processing of unattended color stimuli in synaesthetes. Only controls profited from the cue when attending the noncolor stimulus; synaesthetes performed worse when the noncolor stimulus was validly rather than invalidly cued. This means that synaesthetes performed better on the colored stimulus despite an invalid attentional cue. Changed alpha power lateralization and psychophysics due to unattended colorful input indicate that synaesthetes are more affected by color than controls. This might be due to increased attentional demand. -
Junge, C., Cutler, A., & Hagoort, P. (2010). Dynamics of early word learning in nine-month-olds: An ERP study. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
What happens in the brain when infants are learning the meaning of words? Only a few studies (Torkildsen et al., 2008; Friedrich & Friederici, 2008) have addressed this question, but they focused only on novel word learning, not on the acquisition of infants' first words. From behavioral research we know that 12-month-olds can recognize novel exemplars of early typical word categories, but only after training them from nine months on (Schafer, 2005). What happens in the brain during such training? With event-related potentials, we studied the effect of training context on word comprehension. We manipulated the type/token ratio of the training context (one versus six exemplars). Twenty-four normally developing Dutch nine-month-olds (+/- 14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15-month-olds. All trials consisted of a high-resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training-test block contrasted two words that did not share initial phonemes or semantic class. The training phase started with six trials of one category, followed by six trials of the second category. Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appears to be larger if the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture-word mismatch paradigm. Here, we observed a similar N400 as Mills et al. (2005) did for 13-month-olds. German 12-month-olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to an immaturity of the N400 mechanism. The N400 was present in Dutch 9-month-olds, even though some parents judged their child not to understand most of the words. 
There was no interaction by training type, suggesting that type/token ratio does not affect infant word recognition of novel exemplars. -
Junge, C., Hagoort, P., & Cutler, A. (2010). Early word learning in nine-month-olds: Dynamics of picture-word priming. Talk presented at 8th Sepex conference / 1st Joint conference of the EPS and SEPEX. Granada, Spain. 2010-04.
Abstract
How do infants learn words? Most studies focus on novel word learning to address this question. Only a few studies concentrate on the stage when infants learn their first words. Schafer (2005) showed that 12-month-olds can recognize novel exemplars of early typical word categories, but only after training them from nine months on. What happens in the brain during such training? With event-related potentials, we studied the effect of training context on word comprehension. Twenty-four normally developing Dutch nine-month-olds (± 14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15-month-olds. All trials consisted of a high-resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training-test block contrasted two words that did not share initial phonemes or semantic class. The training phase started with six trials of one category, followed by six trials of the second category. We manipulated the type/token ratio of the training context (one versus six exemplars). Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appears to be larger if the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture-word mismatch paradigm. Here, we observed a similar N400 as Mills et al. (2005) did for 13-month-olds. German 12-month-olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to an immaturity of the N400 mechanism. The N400 was present in Dutch 9-month-olds, even though some parents judged their child not to understand most of the words. There was no interaction by training type, suggesting that type/token ratio does not affect infants' word recognition of novel exemplars. -
Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability and later language development: Insight from ERP's. Talk presented at Child Language Seminar 2010. London. 2010-06-24 - 2010-06-26.
-
Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability is related to later word processing skill. Poster presented at XVIIIth Biennial International Conference on Infant Studies, Baltimore, MD.
-
Menenti, L., Petersson, K. M., & Hagoort, P. (2010). From reference to sense: An fMRI adaptation study on semantic encoding in language production. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
Speaking is a complex, multilevel process, in which the first step is to compute the message that can be syntactically and phonologically encoded. Computing the message requires constructing a mental representation of what we want to express (the reference). This reference is then mapped onto linguistic concepts stored in memory, by which the meaning of the utterance (the sense) is constructed. We used fMRI adaptation to investigate brain areas sensitive to reference and sense in overt speech. By independently manipulating repetition of reference and sense across subsequently produced sentences in a picture description task, we distinguished sets of regions sensitive to these two steps in speaking. Encoding reference involved the bilateral inferior parietal lobes (BA 39) and right inferior frontal gyrus (BA 45), suggesting a role in constructing a non-linguistic mental representation. Left middle frontal gyrus (BA 6), bilateral superior parietal lobes and bilateral posterior temporal gyri (BA 37) were sensitive to both sense and reference processing. These regions thus seem to support semantic encoding, the process of mapping reference onto sense. Left inferior frontal gyrus (BA 45), left middle frontal gyrus (BA 44) and left angular gyrus (BA 39) showed adaptation to sense, and therefore appear sensitive to the output of semantic encoding. These results reveal the neural architecture for the first steps in producing an utterance. In addition, they show the feasibility of studying overt speech at a detailed level of analysis in fMRI studies. -
Menenti, L., Petersson, K. M., & Hagoort, P. (2010). From reference to sense: An fMRI adaptation study on semantic encoding in language production. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
Abstract
Speaking is a complex, multilevel process, in which the first step is to compute the message that can be syntactically and phonologically encoded. Computing the message requires constructing a mental representation of what we want to express (the reference). This referent is mapped onto linguistic concepts stored in memory, by which the meaning of the utterance (the sense) is constructed. So far, one study targeted semantic encoding in sentence production (Menenti, Segaert & Hagoort, submitted) and none dissected this process further. We used fMRI adaptation to investigate brain areas sensitive to reference and sense in overt speech. fMRI adaptation is a phenomenon whereby repeating a stimulus property changes the BOLD-response in regions sensitive to that property. By independently manipulating repetition of reference and sense across subsequently produced sentences in a picture description task we distinguished sets of areas sensitive to these steps in semantic encoding in speaking. Methods: In a picture description paradigm, the described situation (the reference) and the linguistic semantic structure (the sense) of subsequently produced sentences were independently repeated across trials. Participants described pictures depicting events involving transitive verbs such as hit, kiss, greet, and two actors colored in different colors with sentences such as ‘The red man greets the green woman’. In our factorial design, the same situation involving the same actors could subsequently be described by two different sentences (repeated reference, novel sense) or the same sentence could subsequently be used to describe two different situations (novel reference, repeated sense). For reference, we controlled for the repetition of actors. For sense, we controlled for the repetition of individual words. See figure 1 for design and stimuli. 
To correct for increased movement and susceptibility artifacts due to speech, we scanned using 3T parallel-acquired inhomogeneity-desensitized fMRI (Poser, Versluis, Hoogduin et al. 2006). Five images were acquired per TR and combined based on local T2* (Buur, Poser and Norris 2009). Results: The behavioral data (response onset, response duration and total time to complete the responses) showed effects of both sense and reference. In the fMRI analyses we looked for areas sensitive to only sense, only reference, or showing a conjunction of both factors. Encoding reference involved the bilateral inferior parietal lobes (BA 39), which showed repetition suppression, and right inferior frontal gyrus (BA 45), which showed repetition enhancement. Left inferior frontal gyrus (BA 45) showed suppression to repetition of sense, while left middle frontal gyrus (BA 44) and left angular gyrus (BA 39) showed enhancement. Left middle frontal gyrus (BA 6), bilateral superior parietal lobes and bilateral posterior temporal gyri (BA 37) showed repetition suppression to both sense and reference processing (conjunction analysis with conjunction null). See figure 2 for the results (p<.05 FWE corrected for multiple comparisons at cluster-level, maps thresholded at p<.001 uncorrected at voxel-level). Conclusions: The input to semantic encoding is the construction of a referent, a mental representation that the utterance is about. The bilateral temporo-parietal junctions are involved in this process, as they show sensitivity to repetition of reference but not sense. RIFG shows enhancement and may therefore be involved in constructing a more comprehensive model spanning several utterances. Semantic encoding itself requires mapping of the reference onto the sense. This involves large parts of the language network: bilateral posterior temporal lobes and upper left inferior frontal gyrus were sensitive to both reference and sense. Finally, sense recruits left inferior frontal gyrus (BA 45). 
This area is sensitive to syntactic encoding (Bookheimer 2002), the next step in speaking. These results reveal the neural architecture for the first steps in producing an utterance. In addition, they show the feasibility of studying overt speech at a detailed level of analysis in fMRI studies. References: Bookheimer, S. (2002), 'Functional MRI of language: new approaches to understanding the cortical organization of semantic processing', Annual Review of Neuroscience, vol. 25, pp. 151-188. Buur, P. (2009), 'A dual echo approach to removing motion artefacts in fMRI time series', Magnetic Resonance in Medicine, vol. 22, no. 5, pp. 551-560. Menenti, L. (submitted), 'The neuronal infrastructure of speaking'. Poser, B. (2006), 'BOLD contrast sensitivity enhancement and artifact reduction with multiecho EPI: parallel-acquired inhomogeneity desensitized fMRI', Magnetic Resonance in Medicine, vol. 55, pp. 1227-1235.
Additional information
http://ww3.aievolution.com/hbm1001/index.cfm?do=abs.viewAbs&abs=2422 -
Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
Abstract
Introduction: Identification of the neural signature of a concept is a key challenge in cognitive neuroscience. In recent years, a number of studies have demonstrated the possibility to decode conceptual information from spatial patterns in functional MRI data (Hauk et al., 2008; Shinkareva et al., 2008). An important unresolved question is whether similar decoding performance can be attained using electrophysiological measurements. The development of EEG-based concept decoding algorithms is interesting from an applications perspective, because the high temporal resolution of the EEG allows pattern recognition in real-time. In this study we investigate the possibility of identifying conceptual representations from event-related EEG on the basis of the presentation of an object in three different modalities: an object's written name, its spoken name and its line drawing. Methods: Twenty-four native Dutch speakers participated in the study. They were presented with concepts from three semantic categories: two relevant categories (animals, tools) and a task category. There were four concepts per category; all concepts were presented in three modalities: auditory, visual (line drawings) and textual (written Dutch words). Each item was repeated 80 times (relevant) or 16 times (task) in each modality. The text and picture stimuli were presented for 300 ms. The interval between stimuli had a random duration between 1000 and 1200 ms. Participants were instructed to respond upon appearance of items from the task category. Continuous EEG was registered using a 64-channel system. The data were divided into epochs of one second starting 300 ms before stimulus onset. We used the time domain representation of the signal as input to the classifier (linear support vector machine, Vapnik, 2000). The classifier was trained to identify which of two semantic categories (animal or tool) was presented to the subject. 
Performance of the classifier was computed as the proportion of correctly classified trials. Significance of the classification outcome was computed using a binomial test (Burges, 1998). In the first analysis we classified the semantic category of stimuli from the entire dataset, with trials of all modalities equally represented. In the second analysis we classified trials within each modality separately. In the third analysis we compared classification performance for the real categories with classification performance for pseudo-categories, to investigate the role of perceptual features of the presented objects without a transparent contribution of conceptual information. The pseudo-categories were composed by randomly arranging all the concepts into classes such that each class contained exemplars of both categories. Results: In the first analysis we assessed the ability to discriminate patterns of EEG signals referring to the representation of animals versus tools across the three tested modalities. Significant accuracy was achieved for nineteen out of twenty subjects. The highest classification accuracy achieved across modalities was 0.69, with a mean value of 0.61 over all 20 subjects. To check whether the performance of the classifier was consistent during the experimental session, we visualized the correctness of the classifier's decisions over the time-course of the session. Figure 1 shows that the classifier identifies trials from the picture blocks more accurately than trials from the text and audio blocks. To further assess modality-specific classification performance, we trained and tested the classifiers within each of the individual modalities separately (Fig. 2). For pictures, the highest classification accuracy reached over all subjects was 0.92, and classification was significant (p<0.001) for all 20 subjects with a mean value of 0.80. 
The classifier for the auditory modality performed significantly better than chance (p<0.001 and p<0.01) in 15 out of 20 subjects, with a mean value of 0.60. The classifier for the orthographic modality performed significantly better than chance in 5 out of 20 subjects, with a mean value of 0.56. Comparison of the classification performance for real and pseudo-categories revealed a high impact of conceptually driven activity on the classifier's performance (Fig. 3). Mean accuracies of pseudo-category classification over all subjects were 0.56 for pictures, 0.56 for audio, and 0.55 for text. Significant (p<0.005) differences from the real-category results were found for all pseudo-categories in the picture modality, for eight out of ten pseudo-categories in the auditory modality, and for one out of ten pseudo-categories in the orthographic modality. Conclusions: The results show that stable neural patterns induced by the presentation of stimuli of different categories can be identified from EEG. High classification performance was achieved for all subjects. The visual modality appeared to be much easier to classify than the other modalities. This indicates the existence of category-specific patterns in visual recognition of objects (Kiefer 2001; Liu et al., 2009). Currently we are working towards interpreting the patterns found during classification using Bayesian logistic regression. A considerable reduction of performance was found when using pseudo-categories instead of the real categories. This indicates that the classifier identified neural activity at the level of conceptual representations. Our results could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of brain-computer interface applications. References: Burges, C. 
(1998), 'A tutorial on support vector machines for pattern recognition', Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167. Hauk, O. (2008), 'Imagery or meaning? Evidence for a semantic origin of category-specific brain activity in metabolic imaging', European Journal of Neuroscience, vol. 27, no. 7, pp. 1856-66. Kiefer, M. (2001), 'Perceptual and semantic sources of category-specific effects: Event-related potentials during picture and word categorization', Memory and Cognition, vol. 29, no. 1, pp. 100-16. Liu, H. (2009), 'Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex', Neuron, vol. 62, no. 2, pp. 281-90. Shinkareva, S. (2008), 'Using fMRI brain activation to identify cognitive states associated with perception of tools and dwellings', PLoS ONE, vol. 3, no. 1, e1394.
Additional information
http://ww3.aievolution.com/hbm1001/index.cfm?do=abs.viewAbs&abs=3333 -
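The binomial significance test used above to evaluate classifier accuracy can be sketched in a few lines. The trial counts in the usage note are illustrative, not taken from the study:

```python
from math import comb

def binomial_p(n_correct, n_trials, chance=0.5):
    """One-sided binomial test: probability of observing at least
    `n_correct` successes out of `n_trials` if the classifier were
    guessing at the given chance level."""
    return sum(comb(n_trials, k) * chance**k * (1 - chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))
```

For instance, under a two-category (animal vs. tool) decision with chance level 0.5, an accuracy of 0.80 on a hypothetical 160 test trials (128 correct) gives a p-value far below 0.001, whereas performance at exactly chance (80 of 160) gives p just above 0.5.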
van Leeuwen, T. M., Den Ouden, H. E., & Hagoort, P. (2010). Bottom-up versus top-down: Effective connectivity reflects individual differences in grapheme-color synesthesia. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
In grapheme-color synesthesia, letters elicit a color. Neural theories propose that synesthesia is due to changes in connectivity between sensory areas. However, no studies on functional connectivity in synesthesia have been published to date. Here, we applied psycho-physiological interactions (PPI) and dynamic causal modeling (DCM) in fMRI to assess connectivity patterns in synesthesia. We tested whether synesthesia is mediated by bottom-up, feedforward connections from grapheme areas directly to perceptual color area V4, or by top-down feedback connections from the parietal cortex to V4. We took individual differences between synesthetes into account: 'projector' synesthetes experience their synesthetic color in a spatial location, while 'associators' only have a strong association of the color with the grapheme. We included 19 grapheme-color synesthetes (14 projectors, 5 associators) and located group effects of synesthesia in the left superior parietal lobule (SPL) and right color area V4. With PPI, taking SPL as a seed region, we found an increase in functional coupling with visual areas (including V4) for the synesthesia condition. With PPI, however, we cannot determine the direction of this functional coupling. Based on the GLM results, we specified two DCMs to test whether a bottom-up or a top-down model would provide a better explanation for synesthetic experiences. Bayesian Model Selection showed that overall, neither model was much more likely than the other (exceedance probability of 0.589). However, when the models were divided according to projector or associator group, BMS showed that the bottom-up, feedforward model had an exceedance probability of 0.98 for the projectors: it was strongly preferred for this group. The top-down, feedback model was preferred for the associator group (exceedance probability = 0.96). To our knowledge, we are the first to report empirical evidence of changes in functional and effective connectivity in synesthesia. 
Whether bottom-up or top-down mechanisms underlie synesthetic experiences has been a long-standing debate: that different connectivity patterns can explain differential experiences of synesthesia may greatly improve our insight into the neural mechanisms of the phenomenon. -
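The exceedance probabilities reported in this abstract are the standard output of random-effects Bayesian Model Selection: the probability that one model is the more frequent in the population, given a Dirichlet posterior over model frequencies. A minimal Monte Carlo sketch of that quantity, with hypothetical Dirichlet parameters (the study's actual posterior is not reported):

```python
import random

def exceedance_probabilities(alpha, n_samples=20000, rng=random):
    """Monte Carlo estimate of exceedance probabilities: for each model k,
    the probability that its population frequency is the largest, given a
    Dirichlet(alpha) posterior over model frequencies."""
    wins = [0] * len(alpha)
    for _ in range(n_samples):
        # Sample frequencies r ~ Dirichlet(alpha) via independent gamma
        # draws; normalization is unnecessary because only the argmax matters.
        g = [rng.gammavariate(a, 1.0) for a in alpha]
        wins[max(range(len(g)), key=g.__getitem__)] += 1
    return [w / n_samples for w in wins]
```

With two models and a posterior concentrated on the first (e.g., hypothetical `alpha = [16.0, 6.0]`), the first model's exceedance probability comes out close to 1, mirroring the 0.98/0.96 values reported for the projector and associator groups. In SPM this quantity is computed by the `spm_BMS` routine.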
Van den Brink, D., Van Berkum, J. J. A., Buitelaar, J., & Hagoort, P. (2010). Empathy matters for social language processing: ERP evidence from individuals with and without autism spectrum disorder. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
Abstract
Introduction: When a 6-year-old girl claims that she cannot sleep without her teddy bear, hardly anybody will look surprised. However, when an adult man says the same thing, this is bound to raise some eyebrows. Besides linguistic content, the voice also carries information about a person's identity relevant for communication, such as idiosyncratic features related to the gender and age of the speaker (Campanella 2007). A previous ERP study investigated inter-individual differences in the cognitive processes that mediate the integration of social information in a linguistic context (Van den Brink submitted). Individuals with an empathizing-driven cognitive style showed larger ERP effects to mismatching information about the speaker than individuals who empathize to a lesser degree. The present ERP study tested individuals with Autism Spectrum Disorder (ASD) to investigate verbal social information processing in a clinical population that is impaired in social interaction. Methods: Participants. The ERP experiment was conducted with 20 Dutch adult males clinically diagnosed with ASD (verbal IQ > 100), 22 healthy men and 12 healthy women. Materials. Experimental materials consisted of 160 Dutch sentences with a lexical content that either did or did not fit probabilistic inferences about the speaker's sex, age, and social-economic status, as could be inferred from the speaker's voice. Translated examples of speaker identity (SI) incongruent utterances are "Before I leave I always check whether my make up is still in place", in a male voice, "Every evening I drink some wine before I go to sleep" in a young child's voice, and "I have a large tattoo on my back" spoken in an 'upper-class' accent. In addition, participants heard 48 sentences containing classic lexical semantic (LS) anomalies which are pure linguistic violations, known to elicit an N400, matched with semantically congruent sentences (e.g., "You wash your hands with horse and water" vs. 
"You wash your hands with soap and water"). Procedure. Participants listened to 352 sentences, spoken by 21 different people. They were asked to indicate after each sentence how odd they thought the sentence was, using a 5-point scale ranging from "perfectly normal" to "extremely odd". Participants filled out Dutch translations of the Autism-Spectrum Quotient (AQ: Baron-Cohen 2001) and the Empathy Quotient (EQ: Baron-Cohen 2004). EEG recording. EEG was recorded from 28 electrodes referenced to the left mastoid. Electrode impedances were below 5 kOhm. Signals were recorded using a 200 Hz low-pass filter, a time constant of 10 s, and a 500 Hz sampling frequency. After off-line re-referencing of the EEG signals to the mean of the left and right mastoids, the signals were filtered with a 30 Hz low-pass filter. Segments ranging from 200 ms before to 1500 ms after the acoustic onset of the critical word were baseline-corrected. Segments containing artifacts were rejected (12.7%). Results: Behavioral results. EQ scores differed significantly between groups (p < .001), with average scores of 22.1 for ASD, 40.6 for men, and 52.1 for women. Statistical analysis of the rating data (see Figure 1) consisted of ANOVAs with the within-subject factors Manipulation (LS, SI) and Congruity (congruent, incongruent), and the between-subject factor Group (ASD, men, women). A significant interaction between Manipulation and Group (p < .01) indicated that the participant groups rated the items differently. For the LS items, a main effect of Congruity (p < .001), but no interaction of Congruity by Group (F < 1), was obtained. For the SI items, a main effect of Congruity (p < .001), as well as an interaction of Congruity by Group (p < .01), was found. The ASD group rated the SI violations as less odd than the male and female participant groups did (2.9 versus 3.4 and 3.7, respectively). 
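The EEG preprocessing reported above (off-line re-referencing to the mean of the two mastoids, followed by baseline correction of each epoch) can be sketched in a few lines. The array shapes, channel indices, and function names below are illustrative assumptions, not taken from the original recording setup:

```python
import numpy as np

# Sampling rate and epoch window as reported in the abstract.
FS = 500                       # Hz
PRE, POST = 0.2, 1.5           # 200 ms before to 1500 ms after word onset

def rereference_to_mastoids(eeg, left_mastoid, right_mastoid):
    """Re-reference signals (channels x samples) to the mastoid mean."""
    ref = (eeg[left_mastoid] + eeg[right_mastoid]) / 2.0
    return eeg - ref           # broadcast the reference over all channels

def baseline_correct(epoch, fs=FS, pre=PRE):
    """Subtract each channel's mean over the pre-stimulus interval."""
    n_baseline = int(pre * fs)  # samples before the critical word
    baseline = epoch[:, :n_baseline].mean(axis=1, keepdims=True)
    return epoch - baseline

# Example on synthetic data: 28 channels, one epoch of PRE + POST seconds.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(28, int((PRE + POST) * FS)))
epoch = baseline_correct(rereference_to_mastoids(eeg, 26, 27))
```

The baseline here is the 200 ms pre-stimulus interval from the segment definition above; in practice, the mastoid channel indices depend on the montage used.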
In addition, significant positive correlations with EQ score were found for SI effect size (see Figure 2) as well as for ratings of SI violations (both p < .01). ERP results. Figure 3 displays the ERP waveforms for the three participant groups. Mean amplitude values in the N400 and Late Positive Component latency ranges (300-600 and 700-1000 ms) from 7 centro-parietal electrodes did not reveal a Congruity by Group interaction. However, a significant correlation was found between the size of the SI effect in the N400 latency window and EQ score (p < .01), with individuals who scored high on the EQ showing a larger positive effect. Participants were subdivided into three groups based on EQ score: low empathizers (M = 20; 16 ASD, 2 men), medium empathizers (M = 37; 4 ASD, 12 men, 2 women), and high empathizers (M = 53; 8 men, 10 women). See Figure 4 for the SI difference waveforms for the three EQ groups. Individuals who empathize to a larger degree showed an earlier and significantly larger positive effect (p < .05), related to decision making, than low empathizers (i.e., mostly individuals with ASD). Conclusions: Our results clearly show that empathy matters for verbal social information processing, but not for lexical semantic processing. Behavioral results reveal that individuals who scored low on the EQ had more difficulty detecting violations of speaker and message. At the neuronal level, individuals who empathize to a lesser degree showed a delayed onset of, as well as a smaller, positive ERP effect, which has been related to decision-making processes (Nieuwenhuis 2005). We conclude that high-functioning individuals with ASD, who demonstrate low empathizing abilities, do not experience problems in pure linguistic processing, as indexed by the behavioral and electrophysiological results for the lexical semantic manipulation. 
However, differences in onset latency, as well as in the size of the late positive effect in the speaker identity manipulation, suggest that they do have difficulties with assigning value to social information in language processing. References: Baron-Cohen, S. (2001), 'The Autism-Spectrum Quotient (AQ): Evidence from Asperger Syndrome/High-Functioning Autism, males and females, scientists and mathematicians', Journal of Autism and Developmental Disorders, vol. 31, pp. 5-17. Baron-Cohen, S. (2004), 'The Empathy Quotient: An investigation of adults with Asperger Syndrome or High Functioning Autism, and normal sex differences', Journal of Autism and Developmental Disorders, vol. 34, pp. 163-175. Campanella, S. (2007), 'Integrating face and voice in person perception', Trends in Cognitive Sciences, vol. 11, no. 12, pp. 535-543. Nieuwenhuis, S. (2005), 'Decision making, the P3, and the locus coeruleus-norepinephrine system', Psychological Bulletin, vol. 131, no. 4, pp. 510-532. Van den Brink, D. (submitted), 'Empathy matters: ERP evidence for inter-individual differences in social language processing'.
Additional information
http://ww3.aievolution.com/hbm1001/index.cfm?do=abs.viewAbs&abs=2807 -
Van den Brink, D., Van Berkum, J. J. A., Buitelaar, J., & Hagoort, P. (2010). Empathy matters for social language processing: ERP evidence from individuals with and without autism spectrum disorder. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
When a young girl claims that she cannot sleep without her teddy bear, hardly anybody will look surprised. However, when an adult man says the same thing, this is bound to raise some eyebrows. A previous ERP study revealed that individual differences in empathizing affect the integration of this type of extra-linguistic, social information in a linguistic context. The present ERP study tested individuals with autism spectrum disorder (ASD) to investigate verbal social information processing in a clinical population that is impaired in social interaction. Twenty adult males diagnosed with ASD (verbal IQ > 100), 22 healthy men, and 12 healthy women participated. Experimental materials consisted of sentences whose lexical content either did or did not fit probabilistic inferences about the speaker's sex, age, and socio-economic status, as could be inferred from the speaker's voice. Examples of speaker identity incongruent utterances are "Before I leave I always check whether my make-up is still in place", in a male voice, "Every evening I drink some wine before I go to sleep" in a young child's voice, and "I have a large tattoo on my back" spoken in an "upper-class" accent. In addition, we included a purely linguistic, lexical semantic manipulation (e.g., "You wash your hands with soap/horse and water"). Participants indicated after each spoken sentence, using a five-point scale, how odd they thought the sentence was, while their EEG was recorded. They also filled out a questionnaire on their empathizing ability. Our results reveal that empathy matters for verbal social information processing, but not for lexical semantic processing. Behavioral results show that individuals who scored low on empathizing ability had more difficulty detecting violations of speaker and message. At the neuronal level, individuals who empathize to a lesser degree showed a delayed onset of, as well as a smaller, positive ERP effect, which can be related to decision-making processes. 
We conclude that high-functioning individuals with ASD, who demonstrate low empathizing abilities, do not experience problems in pure linguistic processing, but that they do have difficulties with assigning value to social information in language processing. -
Wang, L., Bastiaansen, M. C. M., Jensen, O., Hagoort, P., & Yang, Y. (2010). Beta oscillation relates with the Event Related Field during language processing. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
Abstract
Introduction: MEG has the advantage of both high temporal and high spatial resolution in measuring neural activity. The event-related field (ERF) has been extensively explored in psycholinguistic research. For example, the N400m was found to be sensitive to semantic violations (Helenius, 2002). On the other hand, induced oscillatory responses of the EEG and MEG during language comprehension are less commonly investigated. Oscillatory dynamics have been shown to also contain relevant information, which can be measured, among other ways, by time-frequency (TF) analyses of power and/or coherence changes (Bastiaansen & Hagoort, 2006; Weiss et al., 2003). In the present study we explicitly investigate whether there is a (signal-analytic) relationship between MEG oscillatory dynamics (notably power changes) and the N400m. Methods: There were two types of auditory sentences, in which the last words were either semantically congruent (C) or incongruent (IC) with respect to the sentence context. MEG signals were recorded with a 151-sensor CTF Omega system, and MRIs were obtained with a 1.5 T Siemens system. We segmented the MEG data into trials starting 1 s before and ending 2 s after the onset of the critical words. The ERFs were calculated by averaging over trials, separately for the two conditions. The time-frequency representations (TFRs) of the single trials were calculated using a wavelet technique, after which the TFRs were averaged over trials for both conditions. A cluster-based random permutation test (Maris & Oostenveld, 2007) was used to assess the significance of the difference between the two conditions, both for the ERFs and for the TFRs. In order to characterize the relationship between beta power (see Results) and the N400m, we performed a linear regression analysis between beta power and N400m for the sensors that showed significant differences in ERFs or TFRs between the two conditions. 
Finally, a beamforming approach [Dynamic Imaging of Coherent Sources (DICS)] was applied to identify the sources of the beta power changes. Results: The ERF analysis showed that between approximately 200 and 700 ms after the onset of the critical words, the IC condition elicited larger amplitudes than the C condition over bilateral temporal areas, with a clear left-hemisphere preponderance (Fig. 1A). Statistical analysis revealed significant differences over the left temporal area (Fig. 1B). In a similar time window (200-700 ms), a beta power suppression (16-19 Hz) was found only for the IC condition, not for the C condition (Fig. 2A). The statistical analysis of the beta power difference between the two conditions revealed a significantly lower beta power for the IC than for the C condition over left temporal cortex (Fig. 2B). The comparable topographies of the N400m and beta differences suggest a relationship between these two effects. In order to evaluate this relationship, we performed a linear regression between beta power and N400m for both the IC and C conditions, in both the post-stimulus time window (200-700 ms) and the pre-stimulus time window (-600 to -200 ms). In the 200-700 ms window, we found a positive linear regression between beta power and N400m for the IC condition (R = .32, p = .03), but not for the C condition (p = .83). For the IC condition, the lower the beta power, the lower the N400m amplitude. In the -600 to -200 ms window, the C condition showed a positive linear regression between beta power and N400m (R = .27, p = .06), but the IC condition did not (p = .74). The source modeling analysis allowed us to estimate the generators of the beta suppression for the IC relative to the C condition. The source of the beta suppression (around 18 Hz) within 200-700 ms was identified in the left inferior frontal gyrus (LIFG, BA 47) (Fig. 3). 
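The trial-wise linear regression between beta power and N400m amplitude used above can be illustrated with a minimal sketch. The data below are synthetic and every variable name is an assumption, so only the analysis pattern, not any reported value, mirrors the study:

```python
import numpy as np
from scipy.stats import linregress

# Synthetic per-trial values standing in for the measured quantities:
# beta power (16-19 Hz, post-stimulus window) and N400m amplitude.
rng = np.random.default_rng(1)
n_trials = 80
beta_power = rng.normal(loc=1.0, scale=0.2, size=n_trials)
# Construct N400m amplitudes that covary positively with beta power.
n400m = 0.5 * beta_power + rng.normal(scale=0.1, size=n_trials)

# Ordinary least-squares fit: slope, intercept, r value, p value, stderr.
fit = linregress(beta_power, n400m)
print(f"R = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")
# A positive slope mirrors the reported pattern for the IC condition:
# lower beta power goes with lower N400m amplitude.
```

In the study, such a fit was computed per condition and per time window over the sensors showing significant condition differences; the synthetic effect size here is arbitrary.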
Conclusions: The ERF difference between the two conditions is consistent with previous MEG studies. However, this is the first time that beta power suppression has been related to the amplitude of the N400m. When the input is highly predictable (C condition), lower beta power in the pre-stimulus interval predicts better performance (a smaller N400m), whereas low predictability of the input (IC condition) produced an association between the N400m and beta power in the post-stimulus interval. Moreover, the generator of the beta suppression was identified in the LIFG, which has been related to semantic unification (Hagoort, 2005). Together with other studies on the role of beta oscillations across a range of cognitive functions (Pfurtscheller, 1996; Weiss, 2005; Hirata, 2007; Bastiaansen, 2009), we propose that beta oscillations generally reflect the engagement of brain networks: a lower beta power indicates a higher engagement for information processing. References: Bastiaansen, M. (2006), 'Oscillatory brain dynamics during language comprehension', Event-Related Dynamics of Brain Oscillations, vol. 159, pp. 182-196. Bastiaansen, M. (2009), 'Syntactic Unification Operations Are Reflected in Oscillatory Dynamics during On-line Sentence Comprehension', Journal of Cognitive Neuroscience, doi: 10.1162/jocn.2009.21283, pp. 1-15. Hagoort, P. (2005), 'On Broca, brain, and binding: a new framework', Trends in Cognitive Sciences, vol. 9, no. 9, pp. 416-423. Helenius, P. (2002), 'Abnormal auditory cortical activation in dyslexia 100 msec after speech onset', Journal of Cognitive Neuroscience, vol. 14, pp. 603-617. Hirata, M. (2007), 'Effects of the emotional connotations in words on the frontal areas - a spatially filtered MEG study', NeuroImage, vol. 35, pp. 420-429. Maris, E. (2007), 'Nonparametric statistical testing of EEG- and MEG-data', Journal of Neuroscience Methods, vol. 164, no. 1, pp. 177-190. Pfurtscheller, G. (1996), 'Post-movement beta synchronization. A correlate of an idling motor area?', Electroencephalography and Clinical Neurophysiology, vol. 98, pp. 281-293. Weiss, S. (2003), 'The contribution of EEG coherence to the investigation of language', Brain and Language, vol. 85, pp. 325-343. Weiss, S. (2005), 'Increased neuronal communication accompanying sentence comprehension', International Journal of Psychophysiology, vol. 57, pp. 129-141.
Additional information
http://ww3.aievolution.com/hbm1001/index.cfm?do=abs.viewAbs&abs=1793 -
Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2010). "Chomsky illusion"? ERP evidence for the influence of information structure on syntactic processing. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.
-
Wang, L., Bastiaansen, M. C. M., Jensen, O., Hagoort, P., & Yang, Y. (2010). Modulation of the beta rhythm during language comprehension. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
Event-related potentials and fields have been extensively explored in psycholinguistic research. However, relevant information might also be contained in induced oscillatory brain responses. We used magnetoencephalography (MEG) to explore oscillatory responses elicited by semantically incongruent words in a classical sentence comprehension paradigm. Sentences in which the last word was either semantically congruent or incongruent with respect to the sentence context were presented auditorily. Consistent with previous studies, a stronger N400m component was observed over left temporal areas in response to incongruent compared to congruent sentence endings. At the same time, the analysis of oscillatory activity showed a larger beta power decrease (16-19 Hz) for the incongruent than for the congruent condition in the N400m time window (200-700 ms), also over the left temporal area. The relationship between the beta decrease and the N400m was confirmed by a linear regression analysis. Moreover, using a beamforming approach we localized the sources of the beta decrease to the left prefrontal cortex (BA 47). We propose that the beta oscillation reflects the engagement of brain networks: a lower beta power indicates a higher engagement for information processing. When the input is highly predictable (congruent condition), a lower beta power in the pre-stimulus interval predicts better performance (a smaller N400m), while low predictability of the input (incongruent condition) shows a relationship between the N400m and the beta power in the post-stimulus interval, which indicates the engagement of brain networks for integrating the unexpected information. This 'engagement' hypothesis is also compatible with reported beta effects in other cognitive domains. -
Willems, R. M., De Boer, M., De Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.
Abstract
Although language is an effective means of communication, it is unclear how linguistic and communicative abilities relate to each other. Communicative message generation involves perspective taking, or mentalizing, and some researchers have argued that mentalizing depends on language. In this study, we directly tested the relationship between the cerebral structures supporting communicative message generation and language abilities. Healthy participants were scanned with fMRI while they participated in a verbal communication paradigm in which we independently manipulated the communicative intent and linguistic difficulty of message generation. We found that dorsomedial prefrontal cortex, a brain area consistently associated with mentalizing, was sensitive to the communicative intent of utterances, irrespective of linguistic difficulty. In contrast, left inferior frontal cortex, an area known to be involved in language, was sensitive to the linguistic demands of utterances, but not to communicative intent. These findings indicate that communicative and linguistic abilities rely on different neuro-cognitive architectures. We suggest that the generation of utterances with communicative intent relies on our ability to deal with the mental states of other people ("mentalizing"), which seems distinct from language. -
Zhu, Z., Wang, S., Hagoort, P., Feng, G., Chen, H.-C., & Bastiaansen, M. C. M. (2010). Inferior frontal gyrus is activated during sentence-level semantic unification in both explicit and implicit reading tasks. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.
-
Zhu, Z., Wang, S., Bastiaansen, M. C. M., Petersson, K. M., & Hagoort, P. (2010). Trial-by-trial coupling of concurrent EEG and fMRI identifies BOLD correlates of the N400. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.
-
Zhu, Z., Wang, S., Bastiaansen, M. C. M., Petersson, K. M., & Hagoort, P. (2010). Trial-by-trial coupling of concurrent EEG and fMRI identifies BOLD correlates of the N400. Poster presented at The Second Annual Neurobiology of Language Conference [NLC 2010], San Diego, CA.