Acheson, D. J., & Hagoort, P. (2014). Twisting tongues to test for conflict monitoring in speech production. Frontiers in Human Neuroscience, 8: 206. doi:10.3389/fnhum.2014.00206.
Abstract
A number of recent studies have hypothesized that monitoring in speech production may occur via domain-general mechanisms responsible for the detection of response conflict. Outside of language, two ERP components have consistently been elicited in conflict-inducing tasks (e.g., the flanker task): the stimulus-locked N2 on correct trials, and the response-locked error-related negativity (ERN). The present investigation used these electrophysiological markers to test whether a common response conflict monitor is responsible for monitoring in speech and non-speech tasks. Electroencephalography (EEG) was recorded while participants performed a tongue twister (TT) task and a manual version of the flanker task. In the TT task, people rapidly read sequences of four nonwords arranged in TT and non-TT patterns three times. In the flanker task, people responded with a left/right button press to a center-facing arrow, and conflict was manipulated by the congruency of the flanking arrows. Behavioral results showed typical effects of both tasks, with increased error rates and slower speech onset times for TT relative to non-TT trials and for incongruent relative to congruent flanker trials. In the flanker task, stimulus-locked EEG analyses replicated previous results, with a larger N2 for incongruent relative to congruent trials, and a response-locked ERN. In the TT task, stimulus-locked analyses revealed broad, frontally-distributed differences beginning around 50 ms and lasting until just before speech initiation, with TT trials more negative than non-TT trials; response-locked analyses revealed an ERN. Correlation analyses across these measures showed some correlations within a task, but little evidence of systematic cross-task correlation. Although the present results do not speak against conflict signals from the production system serving as cues to self-monitoring, they are not consistent with signatures of response conflict being mediated by a single, domain-general conflict monitor.
Basnakova, J., Weber, K., Petersson, K. M., Van Berkum, J. J. A., & Hagoort, P. (2014). Beyond the language given: The neural correlates of inferring speaker meaning. Cerebral Cortex, 24(10), 2572-2578. doi:10.1093/cercor/bht112.
Abstract
Even though language allows us to say exactly what we mean, we often use language to say things indirectly, in a way that depends on the specific communicative context. For example, we can use an apparently straightforward sentence like "It is hard to give a good presentation" to convey deeper meanings, like "Your talk was a mess!" One of the big puzzles in language science is how listeners work out what speakers really mean, which is a skill absolutely central to communication. However, most neuroimaging studies of language comprehension have focused on the arguably much simpler, context-independent process of understanding direct utterances. To examine the neural systems involved in getting at contextually constrained indirect meaning, we used functional magnetic resonance imaging as people listened to indirect replies in spoken dialog. Relative to direct control utterances, indirect replies engaged dorsomedial prefrontal cortex, right temporo-parietal junction and insula, as well as bilateral inferior frontal gyrus and right medial temporal gyrus. This suggests that listeners take the speaker's perspective on both cognitive (theory of mind) and affective (empathy-like) levels. In line with classic pragmatic theories, our results also indicate that currently popular "simulationist" accounts of language comprehension fail to explain how listeners understand the speaker's intended message.
Cai, D., Fonteijn, H. M., Guadalupe, T., Zwiers, M., Wittfeld, K., Teumer, A., Hoogman, M., Arias Vásquez, A., Yang, Y., Buitelaar, J., Fernández, G., Brunner, H. G., Van Bokhoven, H., Franke, B., Hegenscheid, K., Homuth, G., Fisher, S. E., Grabe, H. J., Francks, C., & Hagoort, P. (2014). A genome wide search for quantitative trait loci affecting the cortical surface area and thickness of Heschl's gyrus. Genes, Brain and Behavior, 13, 675-685. doi:10.1111/gbb.12157.
Abstract
Heschl's gyrus (HG) is a core region of the auditory cortex whose morphology is highly variable across individuals. This variability has been linked to sound perception ability in both speech and music domains. Previous studies show that variations in morphological features of HG, such as cortical surface area and thickness, are heritable. To identify genetic variants that affect HG morphology, we conducted a genome-wide association scan (GWAS) meta-analysis in 3054 healthy individuals using HG surface area and thickness as quantitative traits. None of the single nucleotide polymorphisms (SNPs) showed association P values that would survive correction for multiple testing over the genome. The most significant association was found between right HG area and SNP rs72932726 close to gene DCBLD2 (3q12.1; P = 2.77 × 10⁻⁷). This SNP was also associated with other regions involved in speech processing. The SNP rs333332 within gene KALRN (3q21.2; P = 2.27 × 10⁻⁶) and rs143000161 near gene COBLL1 (2q24.3; P = 2.40 × 10⁻⁶) were associated with the area and thickness of left HG, respectively. Both genes are involved in the development of the nervous system. The SNP rs7062395 close to the X-linked deafness gene POU3F4 was associated with right HG thickness (Xq21.1; P = 2.38 × 10⁻⁶). This is the first molecular genetic analysis of variability in HG morphology.
Chu, M., & Hagoort, P. (2014). Synchronization of speech and gesture: Evidence for interaction in action. Journal of Experimental Psychology: General, 143(4), 1726-1741. doi:10.1037/a0036281.
Abstract
Language and action systems are highly interlinked. A critical piece of evidence is that speech and its accompanying gestures are tightly synchronized. Five experiments were conducted to test 2 hypotheses about the synchronization of speech and gesture. According to the interactive view, there is continuous information exchange between the gesture and speech systems, during both their planning and execution phases. According to the ballistic view, information exchange occurs only during the planning phases of gesture and speech, but the 2 systems become independent once their execution has been initiated. In all experiments, participants were required to point to and/or name a light that had just lit up. Virtual reality and motion tracking technologies were used to disrupt their gesture or speech execution. Participants delayed their speech onset when their gesture was disrupted. They did so even when their gesture was disrupted at its late phase and even when they received only the kinesthetic feedback of their gesture. Also, participants prolonged their gestures when their speech was disrupted. These findings support the interactive view and add new constraints on models of speech and gesture production.
Cristia, A., Seidl, A., Junge, C., Soderstrom, M., & Hagoort, P. (2014). Predicting individual variation in language from infant speech perception measures. Child development, 85(4), 1330-1345. doi:10.1111/cdev.12193.
Abstract
There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings are held up under scrutiny, they could both illuminate theoretical models of language development and contribute to the prediction of communicative disorders. A qualitative, systematic review of this emergent literature illustrated the variety of approaches that have been used and highlighted some conceptual problems regarding the measurements. A quantitative analysis of the same data established that the bivariate relation was significant, with correlations of similar strength to those found for well-established nonlinguistic predictors of language. Further exploration of infant speech perception predictors, particularly from a methodological perspective, is recommended.
Dolscheid, S., Willems, R. M., Hagoort, P., & Casasanto, D. (2014). The relation of space and musical pitch in the brain. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 421-426). Austin, TX: Cognitive Science Society.
Abstract
Numerous experiments show that space and musical pitch are closely linked in people's minds. However, the exact nature of space-pitch associations and their neuronal underpinnings are not well understood. In an fMRI experiment we investigated different types of spatial representations that may underlie musical pitch. Participants judged stimuli that varied in spatial height in both the visual and tactile modalities, as well as auditory stimuli that varied in pitch height. In order to distinguish between unimodal and multimodal spatial bases of musical pitch, we examined whether pitch activations were present in modality-specific (visual or tactile) versus multimodal (visual and tactile) regions active during spatial height processing. Judgments of musical pitch were found to activate unimodal visual areas, suggesting that space-pitch associations may involve modality-specific spatial representations, supporting a key assumption of embodied theories of metaphorical mental representation.
Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.
Abstract
The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.
Abstract
Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10⁻⁸). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
Hagoort, P. (2014). Introduction to section on language and abstract thought. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 615-618). Cambridge, Mass: MIT Press.
Hagoort, P., & Levinson, S. C. (2014). Neuropragmatics. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 667-674). Cambridge, Mass: MIT Press.
Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.
Abstract
Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.
Abstract
A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
Additional information: http://www.annualreviews.org/doi/suppl/10.1146/annurev-neuro-071013-013847
Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2351-2356). Austin, TX: Cognitive Science Society.
Abstract
The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioural research due to its seemingly limitless possibilities. This new method has not been used frequently in the field of psycholinguistics, however, possibly due to the assumption that human-computer interaction does not accurately reflect human-human interaction. In the current study we compare participants' language behaviour in a syntactic priming task with human versus avatar partners. Our study shows comparable priming effects between human and avatar partners (Human: 12.3%; Avatar: 12.6% for passive sentences), suggesting that VR is a valid platform for conducting language research and studying dialogue interactions.
Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.
Abstract
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
Junge, C., Cutler, A., & Hagoort, P. (2014). Successful word recognition by 10-month-olds given continuous speech both at initial exposure and test. Infancy, 19(2), 179-193. doi:10.1111/infa.12040.
Abstract
Most words that infants hear occur within fluent speech. To compile a vocabulary, infants therefore need to segment words from speech contexts. This study is the first to investigate whether infants (here: 10-month-olds) can recognize words when both initial exposure and test presentation are in continuous speech. Electrophysiological evidence attests that this indeed occurs: An increased extended negativity (word recognition effect) appears for familiarized target words relative to control words. This response proved constant at the individual level: Only infants who showed this negativity at test had shown such a response, within six repetitions after first occurrence, during familiarization.
Levy, J., Hagoort, P., & Démonet, J.-F. (2014). A neuronal gamma oscillatory signature during morphological unification in the left occipitotemporal junction. Human Brain Mapping, 35, 5847-5860. doi:10.1002/hbm.22589.
Abstract
Morphology is the aspect of language concerned with the internal structure of words. In the past decades, a large body of masked priming (behavioral and neuroimaging) data has suggested that the visual word recognition system automatically decomposes any morphologically complex word into a stem and its constituent morphemes. Yet the reliance of morphology on other reading processes (e.g., orthography and semantics), as well as its underlying neuronal mechanisms, remains to be determined. In the current magnetoencephalography study, we addressed morphology from the perspective of the unification framework, that is, by applying the Hold/Release paradigm, morphological unification was simulated via the assembly of internal morphemic units into a whole word. Trials representing real words were divided into words with a transparent (true) or a nontransparent (pseudo) morphological relationship. Morphological unification of truly suffixed words was faster and more accurate and additionally enhanced induced oscillations in the narrow gamma band (60–85 Hz, 260–440 ms) in the left posterior occipitotemporal junction. This neural signature could not be explained by mere automatic lexical processing (i.e., stem perception), but more likely it related to a semantic access step during the morphological unification process. By demonstrating the validity of unification at the morphological level, this study contributes to the vast empirical evidence on unification across other language processes. Furthermore, we point out that morphological unification relies on the retrieval of lexical semantic associations via induced gamma band oscillations in a cerebral hub region for visual word form processing.
Schoot, L., Menenti, L., Hagoort, P., & Segaert, K. (2014). A little more conversation - The influence of communicative context on syntactic priming in brain and behavior. Frontiers in Psychology, 5: 208. doi:10.3389/fpsyg.2014.00208.
Abstract
We report on an fMRI syntactic priming experiment in which we measure brain activity for participants who communicate with another participant outside the scanner. We investigated whether syntactic processing during overt language production and comprehension is influenced by having a (shared) goal to communicate. Although theory suggests this is true, the nature of this influence remains unclear. Two hypotheses are tested: i. syntactic priming effects (fMRI and RT) are stronger for participants in the communicative context than for participants doing the same experiment in a non-communicative context, and ii. syntactic priming magnitude (RT) is correlated with the syntactic priming magnitude of the speaker's communicative partner. Results showed that across conditions, participants were faster to produce sentences with repeated syntax, relative to novel syntax. This behavioral result converged with the fMRI data: we found repetition suppression effects in the left insula extending into left inferior frontal gyrus (BA 47/45), left middle temporal gyrus (BA 21), left inferior parietal cortex (BA 40), left precentral gyrus (BA 6), bilateral precuneus (BA 7), bilateral supplementary motor cortex (BA 32/8) and right insula (BA 47). We did not find support for the first hypothesis: having a communicative intention does not increase the magnitude of syntactic priming effects (either in the brain or in behavior) per se. We did find support for the second hypothesis: if speaker A is strongly/weakly primed by speaker B, then speaker B is primed by speaker A to a similar extent. We conclude that syntactic processing is influenced by being in a communicative context, and that the nature of this influence is bi-directional: speakers are influenced by each other.
Segaert, K., Weber, K., Cladder-Micus, M., & Hagoort, P. (2014). The influence of verb-bound syntactic preferences on the processing of syntactic structures. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1448-1460. doi:10.1037/a0036796.
Abstract
Speakers sometimes repeat syntactic structures across sentences, a phenomenon called syntactic priming. We investigated the influence of verb-bound syntactic preferences on syntactic priming effects in response choices and response latencies for German ditransitive sentences. In the response choices we found inverse preference effects: There were stronger syntactic priming effects for primes in the less preferred structure, given the syntactic preference of the prime verb. In the response latencies we found positive preference effects: There were stronger syntactic priming effects for primes in the more preferred structure, given the syntactic preference of the prime verb. These findings provide further support for the idea that syntactic processing is lexically guided.
Simanova, I., Hagoort, P., Oostenveld, R., & Van Gerven, M. A. J. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426-434. doi:10.1093/cercor/bhs324.
Abstract
An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with 4 stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all 4 modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of those brain regions, which correctly discriminated between the categories, independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session where subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality and have clear implications for understanding the functional mechanisms of semantic memory.
Stolk, A., Noordzij, M. L., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). Cerebral coherence between communicators marks the emergence of meaning. Proceedings of the National Academy of Sciences of the United States of America, 111, 18183-18188. doi:10.1073/pnas.1414886111.
Abstract
How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other's behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal's use, i.e., "conceptual pacts" that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators' right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals' occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal's use.
Additional information: http://www.pnas.org/content/suppl/2014/12/04/1414886111.DCSupplemental
Stolk, A., Noordzij, M. L., Volman, I., Verhagen, L., Overeem, S., van Elswijk, G., Bloem, B., Hagoort, P., & Toni, I. (2014). Understanding communicative actions: A repetitive TMS study. Cortex, 51, 25-34. doi:10.1016/j.cortex.2013.10.005.
Abstract
Despite the ambiguity inherent in human communication, people are remarkably efficient in establishing mutual understanding. Studying how people communicate in novel settings provides a window into the mechanisms supporting the human competence to rapidly generate and understand novel shared symbols, a fundamental property of human communication. Previous work indicates that the right posterior superior temporal sulcus (pSTS) is involved when people understand the intended meaning of novel communicative actions. Here, we set out to test whether normal functioning of this cerebral structure is required for understanding novel communicative actions using inhibitory low-frequency repetitive transcranial magnetic stimulation (rTMS). A factorial experimental design contrasted two tightly matched stimulation sites (right pSTS vs. left MT+, i.e. a contiguous homotopic task-relevant region) and tasks (a communicative task vs. a visual tracking task that used the same sequences of stimuli). Overall task performance was not affected by rTMS, whereas changes in task performance over time were disrupted according to TMS site and task combinations. Namely, rTMS over pSTS led to a diminished ability to improve action understanding on the basis of recent communicative history, while rTMS over MT+ perturbed improvement in visual tracking over trials. These findings qualify the contributions of the right pSTS to human communicative abilities, showing that this region might be necessary for incorporating previous knowledge, accumulated during interactions with a communicative partner, to constrain the inferential process that leads to action understanding.
Takashima, A., Wagensveld, B., Van Turennout, M., Zwitserlood, P., Hagoort, P., & Verhoeven, L. (2014). Training-induced neural plasticity in visual-word decoding and the role of syllables. Neuropsychologia, 61, 299-314. doi:10.1016/j.neuropsychologia.2014.06.017.
Abstract
To investigate the neural underpinnings of word decoding, and how it changes as a function of repeated exposure, we trained Dutch participants over the course of a month to articulate a set of novel disyllabic input strings written in Greek script to avoid the use of familiar orthographic representations. The syllables in the input were phonotactically legal combinations but non-existent in the Dutch language, allowing us to assess their role in novel word decoding. Not only trained disyllabic pseudowords were tested but also pseudowords with recombined patterns of syllables to uncover the emergence of syllabic representations. We showed that with extensive training, articulation became faster and more accurate for the trained pseudowords. On the neural level, the initial stage of decoding was reflected by increased activity in visual attention areas of occipito-temporal and occipito-parietal cortices, and in motor coordination areas of the precentral gyrus and the inferior frontal gyrus. After one month of training, memory representations for holistic information (whole word unit) were established in areas encompassing the angular gyrus, the precuneus and the middle temporal gyrus. Syllabic representations also emerged through repeated training of disyllabic pseudowords, such that recombined syllables of the trained pseudowords showed similar brain activation to trained pseudowords and were articulated faster than novel combinations of letter strings used in the trained pseudowords.
Van Leeuwen, T. M., Petersson, K. M., Langner, O., Rijpkema, M., & Hagoort, P. (2014). Color specificity in the human V4 complex: An fMRI repetition suppression study. In T. D. Papageorgiou, G. I. Cristopoulous, & S. M. Smirnakis (Eds.), Advanced Brain Neuroimaging Topics in Health and Disease - Methods and Applications (pp. 275-295). Rijeka, Croatia: Intech. doi:10.5772/58278.
Van Leeuwen, T. M., Lamers, M. J. A., Petersson, K. M., Gussenhoven, C., Poser, B., & Hagoort, P. (2014). Phonological markers of information structure: An fMRI study. Neuropsychologia, 58(1), 64-74. doi:10.1016/j.neuropsychologia.2014.03.017.
Abstract
In this fMRI study we investigate the neural correlates of information structure integration during sentence comprehension in Dutch. We looked into how prosodic cues (pitch accents) that signal the information status of constituents to the listener (new information) are combined with other types of information during the unification process. The difficulty of unifying the prosodic cues into overall sentence meaning was manipulated by constructing sentences in which the pitch accent did (focus-accent agreement), and sentences in which the pitch accent did not (focus-accent disagreement), match the expectations for focus constituents of the sentence. In case of a mismatch, the load on unification processes increases. Our results show two anatomically distinct effects of focus-accent disagreement, one located in the posterior left inferior frontal gyrus (LIFG, BA 6/44), and one in the more anterior-ventral LIFG (BA 47/45). Our results confirm that information structure is taken into account during unification, and imply an important role for the LIFG in unification processes, in line with previous fMRI studies.
Acheson, D. J., Ganushchak, L. Y., Christoffels, I. K., & Hagoort, P. (2012). Conflict monitoring in speech production: Physiological evidence from bilingual picture naming. Brain and Language, 123, 131-136. doi:10.1016/j.bandl.2012.08.008.
Abstract
Self-monitoring in production is critical to correct performance, and recent accounts suggest that such monitoring may occur via the detection of response conflict. The error-related negativity (ERN) is a response-locked event-related potential (ERP) that is sensitive to response conflict. The present study examines whether response conflict is detected in production by exploring a situation where multiple outputs are activated: the bilingual naming of form-related equivalents (i.e. cognates). ERPs were recorded while German-Dutch bilinguals named pictures in their first and second languages. Although cognates were named faster than non-cognates, response conflict was evident in the form of a larger ERN-like response for cognates and adaptation effects on naming, as the magnitude of cognate facilitation was smaller following the naming of cognates. Given that signals of response conflict are present during correct naming, the present results suggest that such conflict may serve as a reliable signal for monitoring in speech production.
Adank, P., Noordzij, M. L., & Hagoort, P. (2012). The role of planum temporale in processing accent variation in spoken language comprehension. Human Brain Mapping, 33, 360-372. doi:10.1002/hbm.21218.
Abstract
A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation (speaker and accent) during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a novel accent of Dutch. Each speaker produced sentences in both accents. Participants listened to two sentences presented in quick succession while their haemodynamic responses were recorded in an MR scanner. The first sentence was spoken in Standard Dutch; the second was spoken by the same or a different speaker and produced in Standard Dutch or in the artificial accent. This design made it possible to identify neural responses to a switch in speaker and accent independently. A switch in accent was associated with activations in predominantly left-lateralized areas including posterior temporal regions, including superior temporal gyrus, planum temporale (PT), and supramarginal gyrus, as well as in frontal regions, including left pars opercularis of the inferior frontal gyrus (IFG). A switch in speaker recruited a predominantly right-lateralized network, including middle frontal gyrus and precuneus. It is concluded that posterior temporal areas, including PT, and frontal areas, including IFG, are involved in processing accent variation in spoken sentence comprehension.
Adank, P., Davis, M. H., & Hagoort, P. (2012). Neural dissociation in processing noise and accent in spoken language comprehension. Neuropsychologia, 50, 77-84. doi:10.1016/j.neuropsychologia.2011.10.024.
Abstract
We investigated how two distortions of the speech signal (added background noise and speech in an unfamiliar accent) affect comprehension of speech using functional Magnetic Resonance Imaging (fMRI). Listeners performed a speeded sentence verification task for speech in quiet in Standard Dutch, in Standard Dutch with added background noise, and for speech in an unfamiliar accent of Dutch. The behavioural results showed slower responses for both types of distortion compared to clear speech, and no difference between the two distortions. The neuroimaging results showed that, compared to clear speech, processing noise resulted in more activity bilaterally in Inferior Frontal Gyrus and Frontal Operculum, while processing accented speech recruited an area in left Superior Temporal Gyrus/Sulcus. It is concluded that the neural bases for processing different distortions of the speech signal dissociate. It is suggested that current models of the cortical organisation of speech be updated to specifically associate bilateral inferior frontal areas with processing external distortions (e.g., background noise) and left temporal areas with speaker-related distortions (e.g., accents).
Baggio, G., Van Lambalgen, M., & Hagoort, P. (2012). Language, linguistics and cognition. In R. Kempson, T. Fernando, & N. Asher (Eds.), Philosophy of linguistics (pp. 325-356). Amsterdam: North Holland.
Abstract
This chapter provides a partial overview of some currently debated issues in the cognitive science of language. We distinguish two families of problems, which we refer to as 'language and cognition' and 'linguistics and cognition'. Under the first heading we present and discuss the hypothesis that language, in particular the semantics of tense and aspect, is grounded in the planning system. We emphasize the role of non-monotonic inference during language comprehension. We look at the converse issue of the role of linguistic interpretation in reasoning tasks. Under the second heading we investigate the two foremost assumptions of current linguistic methodology, namely intuitions as the only adequate empirical basis of theories of meaning and grammar and the competence-performance distinction, arguing that these are among the heaviest burdens for a truly comprehensive approach to language. Marr's three-level scheme is proposed as an alternative methodological framework, which we apply in a review of two ERP studies on semantic processing, to the 'binding problem' for language, and in a conclusive set of remarks on relating theories in the cognitive science of language.
Baggio, G., Van Lambalgen, M., & Hagoort, P. (2012). The processing consequences of compositionality. In M. Werning, W. Hinzen, & E. Machery (Eds.), The Oxford handbook of compositionality (pp. 655-672). New York: Oxford University Press.
Fitch, W. T., Friederici, A. D., & Hagoort, P. (Eds.). (2012). Pattern perception and computational complexity [Special Issue]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367(1598).
Fitch, W. T., Friederici, A. D., & Hagoort, P. (2012). Pattern perception and computational complexity: Introduction to the special issue. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367(1598), 1925-1932. doi:10.1098/rstb.2012.0099.
Abstract
Research on pattern perception and rule learning, grounded in formal language theory (FLT) and using artificial grammar learning paradigms, has exploded in the last decade. This approach marries empirical research conducted by neuroscientists, psychologists and ethologists with the theory of computation and FLT, developed by mathematicians, linguists and computer scientists over the last century. Of particular current interest are comparative extensions of this work to non-human animals, and neuroscientific investigations using brain imaging techniques. We provide a short introduction to the history of these fields, and to some of the dominant hypotheses, to help contextualize these ongoing research programmes, and finally briefly introduce the papers in the current issue.
Hagoort, P. (2012). From ants to music and language [Preface]. In A. D. Patel, Music, language, and the brain [Chinese translation] (pp. 9-10). Shanghai: East China Normal University Press Ltd.
Hagoort, P. (2012). Het muzikale brein. Speling: Tijdschrift voor bezinning. Muziek als bron van bezieling, 64(1), 44-48.
Hagoort, P. (2012). Het sprekende brein. MemoRad, 17(1), 27-30.
Abstract
No species other than Homo sapiens has, over the course of its evolutionary history, developed a communication system in which a finite number of symbols, together with a set of rules for combining them, makes an infinite number of expressions possible. This natural language system enables members of our species to give thoughts an external form and to exchange them with the social group and, through the invention of writing systems, with society as a whole. Speech and language are effective means of maintaining social cohesion in societies whose group size and complex social organization are such that this can no longer be achieved through grooming, the way in which our genetic neighbors, the Old World primates, promote social cohesion [1,2].
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 467-472). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2012/papers/0092/index.html.
Abstract
Co-speech gestures are an integral part of human face-to-face communication, but little is known about how pragmatic factors influence our comprehension of those gestures. The present study investigates how different types of recipients process iconic gestures in a triadic communicative situation. Participants (N = 32) took on the role of one of two recipients in a triad and were presented with 160 video clips of an actor speaking, or speaking and gesturing. Crucially, the actor's eye gaze was manipulated in that she alternated her gaze between the two recipients. Participants thus perceived some messages in the role of addressed recipient and some in the role of unaddressed recipient. In these roles, participants were asked to make judgements concerning the speaker's messages. Their reaction times showed that unaddressed recipients did comprehend the speaker's gestures differently to addressees. The findings are discussed with respect to automatic and controlled processes involved in gesture comprehension.
Junge, C., Cutler, A., & Hagoort, P. (2012). Electrophysiological evidence of early word learning. Neuropsychologia, 50, 3702-3712. doi:10.1016/j.neuropsychologia.2012.10.012.
Abstract
Around their first birthday infants begin to talk, yet they comprehend words long before. This study investigated the event-related potential (ERP) responses of nine-month-olds on basic level picture-word pairings. After a familiarization phase of six picture-word pairings per semantic category, comprehension for novel exemplars was tested in a picture-word matching paradigm. ERPs time-locked to pictures elicited a modulation of the Negative Central (Nc) component, associated with visual attention and recognition. It was attenuated by category repetition as well as by the type-token ratio of picture context. ERPs time-locked to words in the training phase became more negative with repetition (N300-600), but there was no influence of picture type-token ratio, suggesting that infants have identified the concept of each picture before a word was presented. Results from the test phase provided clear support that infants integrated word meanings with (novel) picture context. Here, infants showed different ERP responses for words that did or did not align with the picture context: a phonological mismatch (N200) and a semantic mismatch (N400). Together, results were informative of visual categorization, word recognition and word-to-world-mappings, all three crucial processes for vocabulary construction.
Junge, C., Kooijman, V., Hagoort, P., & Cutler, A. (2012). Rapid recognition at 10 months as a predictor of language development. Developmental Science, 15, 463-473. doi:10.1111/j.1467-7687.2012.1144.x.
Abstract
Infants’ ability to recognize words in continuous speech is vital for building a vocabulary.We here examined the amount and type
of exposure needed for 10-month-olds to recognize words. Infants first heard a word, either embedded within an utterance or in
isolation, then recognition was assessed by comparing event-related potentials to this word versus a word that they had not heard
directly before. Although all 10-month-olds showed recognition responses to words first heard in isolation, not all infants showed
such responses to words they had first heard within an utterance. Those that did succeed in the latter, harder, task, however,
understood more words and utterances when re-tested at 12 months, and understood more words and produced more words at
24 months, compared with those who had shown no such recognition response at 10 months. The ability to rapidly recognize the
words in continuous utterances is clearly linked to future language development. -
Kos, M., Van den Brink, D., Snijders, T. M., Rijpkema, M., Franke, B., Fernandez, G., Hagoort, P., & Whitehouse, A. (2012). CNTNAP2 and language processing in healthy individuals as measured with ERPs. PLoS One, 7(10), e46995. doi:10.1371/journal.pone.0046995.
Abstract
The genetic FOXP2-CNTNAP2 pathway has been shown to be involved in the language capacity. We investigated whether a common variant of CNTNAP2 (rs7794745) is relevant for syntactic and semantic processing in the general population by using a visual sentence processing paradigm while recording ERPs in 49 healthy adults. While both AA homozygotes and T-carriers showed a standard N400 effect to semantic anomalies, the response to subject-verb agreement violations differed across genotype groups. T-carriers displayed an anterior negativity preceding the P600 effect, whereas for the AA group only a P600 effect was observed. These results provide another piece of evidence that the neuronal architecture of the human faculty of language is shaped differently by effects that are genetically determined.
Kos, M., Van den Brink, D., & Hagoort, P. (2012). Individual variation in the late positive complex to semantic anomalies. Frontiers in Psychology, 3, 318. doi:10.3389/fpsyg.2012.00318.
Abstract
It is well-known that, within ERP paradigms of sentence processing, semantically anomalous words elicit N400 effects. Less clear, however, is what happens after the N400. In some cases N400 effects are followed by Late Positive Complexes (LPC), whereas in other cases such effects are lacking. We investigated several factors which could affect the LPC, such as contextual constraint, inter-individual variation and working memory. Seventy-two participants read sentences containing a semantic manipulation (Whipped cream tastes sweet/anxious and creamy). Neither contextual constraint nor working memory correlated with the LPC. Inter-individual variation played a substantial role in the elicitation of the LPC, with about half of the participants showing a negative response and the other half showing an LPC. This individual variation correlated with a syntactic ERP as well as an alternative semantic manipulation. In conclusion, our results show that inter-individual variation plays a large role in the elicitation of the LPC, and this may account for the diversity in LPC findings in language research.
Lai, V. T., Hagoort, P., & Casasanto, D. (2012). Affective primacy vs. cognitive primacy: Dissolving the debate. Frontiers in Psychology, 3, 243. doi:10.3389/fpsyg.2012.00243.
Abstract
When people see a snake, they are likely to activate both affective information (e.g., dangerous) and non-affective information about its ontological category (e.g., animal). According to the Affective Primacy Hypothesis, the affective information has priority, and its activation can precede identification of the ontological category of a stimulus. Alternatively, according to the Cognitive Primacy Hypothesis, perceivers must know what they are looking at before they can make an affective judgment about it. We propose that neither hypothesis holds at all times. Here we show that the relative speed with which affective and non-affective information gets activated by pictures and words depends upon the contexts in which stimuli are processed. Results illustrate that the question of whether affective information has processing priority over ontological information (or vice versa) is ill posed. Rather than seeking to resolve the debate over Cognitive vs. Affective Primacy in favor of one hypothesis or the other, a more productive goal may be to determine the factors that cause affective information to have processing priority in some circumstances and ontological information in others. Our findings support a view of the mind according to which words and pictures activate different neurocognitive representations every time they are processed, the specifics of which are co-determined by the stimuli themselves and the contexts in which they occur.
Menenti, L., Petersson, K. M., & Hagoort, P. (2012). From reference to sense: How the brain encodes meaning for speaking. Frontiers in Psychology, 2, 384. doi:10.3389/fpsyg.2011.00384.
Abstract
In speaking, semantic encoding is the conversion of a non-verbal mental representation (the reference) into a semantic structure suitable for expression (the sense). In this fMRI study on sentence production we investigate how the speaking brain accomplishes this transition from non-verbal to verbal representations. In an overt picture description task, we manipulated repetition of sense (the semantic structure of the sentence) and reference (the described situation) separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these two components of semantic encoding. We also performed a control experiment with the same stimuli and design but without any linguistic task to identify areas involved in perception of the stimuli per se. The bilateral inferior parietal lobes were selectively sensitive to repetition of reference, while left inferior frontal gyrus showed selective suppression to repetition of sense. Strikingly, a widespread network of areas associated with language processing (left middle frontal gyrus, bilateral superior parietal lobes and bilateral posterior temporal gyri) all showed repetition suppression to both sense and reference processing. These areas are probably involved in mapping reference onto sense, the crucial step in semantic encoding. These results enable us to track the transition from non-verbal to verbal representations in our brains.
Menenti, L., Segaert, K., & Hagoort, P. (2012). The neuronal infrastructure of speaking. Brain and Language, 122, 71-80. doi:10.1016/j.bandl.2012.04.012.
Abstract
Models of speaking distinguish producing meaning, words and syntax as three different linguistic components of speaking. Nevertheless, little is known about the brain's integrated neuronal infrastructure for speech production. We investigated semantic, lexical and syntactic aspects of speaking using fMRI. In a picture description task, we manipulated repetition of sentence meaning, words, and syntax separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these processes. We demonstrate that semantic, lexical and syntactic processes are carried out in partly overlapping and partly distinct brain networks and show that the classic left-hemispheric dominance for language is present for syntax but not semantics.
Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 1971-1983. doi:10.1098/rstb.2012.0101.
Abstract
The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
Petersson, K. M., Folia, V., & Hagoort, P. (2012). What artificial grammar learning reveals about the neurobiology of syntax. Brain and Language, 120, 83-95. doi:10.1016/j.bandl.2010.08.003.
Abstract
In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial grammar learning paradigm in 32 healthy Dutch university students (natural language FMRI data were already acquired for these participants). We predicted that artificial syntax processing would engage the left inferior frontal region (BA 44/45) and that this activation would overlap with syntax-related variability observed in the natural language experiment. The main findings of this study show that the left inferior frontal region centered on BA 44/45 is active during artificial syntax processing of well-formed (grammatical) sequences independent of local subsequence familiarity. The same region is engaged to a greater extent when a syntactic violation is present and structural unification becomes difficult or impossible. The effects related to artificial syntax in the left inferior frontal region (BA 44/45) were essentially identical when we masked these with activity related to natural syntax in the same subjects. Finally, the medial temporal lobe was deactivated during this operation, consistent with the view that implicit processing does not rely on declarative memory mechanisms that engage the medial temporal lobe. In the context of recent FMRI findings, we raise in the discussion section the question of whether Broca's region (or subregions) is specifically related to syntactic movement operations or the processing of hierarchically nested non-adjacent dependencies. We conclude that this is not the case. Instead, we argue that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner, independent of whether there are any processing requirements related to syntactic movement or hierarchically nested structures. In addition, we argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.
De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Newman-Norlund, R., Hagoort, P., Levinson, S. C., & Toni, I. (2012). Exploring the cognitive infrastructure of communication. In B. Galantucci, & S. Garrod (Eds.), Experimental Semiotics: Studies on the emergence and evolution of human communication (pp. 51-78). Amsterdam: Benjamins.
Abstract
Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication. -
Segaert, K., Menenti, L., Weber, K., Petersson, K. M., & Hagoort, P. (2012). Shared syntax in language production and language comprehension — An fMRI study. Cerebral Cortex, 22, 1662-1670. doi:10.1093/cercor/bhr249.
Abstract
During speaking and listening, syntactic processing is a crucial step. It involves specifying syntactic relations between words in a sentence. If the production and comprehension modalities share the neuronal substrate for syntactic processing, then processing syntax in one modality should lead to adaptation effects in the other modality. In the present functional magnetic resonance imaging experiment, participants either overtly produced or heard descriptions of pictures. We looked for brain regions showing adaptation effects to the repetition of syntactic structures. In order to ensure that not just the same brain regions but also the same neuronal populations within these regions are involved in syntactic processing in speaking and listening, we compared syntactic adaptation effects within processing modalities (syntactic production-to-production and comprehension-to-comprehension priming) with syntactic adaptation effects between processing modalities (syntactic comprehension-to-production and production-to-comprehension priming). We found syntactic adaptation effects in left inferior frontal gyrus (Brodmann's area [BA] 45), left middle temporal gyrus (BA 21), and bilateral supplementary motor area (BA 6), which were equally strong within and between processing modalities. Thus, syntactic repetition facilitates syntactic processing in the brain within and across processing modalities to the same extent. We conclude that the same neurobiological system seems to subserve syntactic processing in speaking and listening. -
Stein, J. L., Medland, S. E., Vasquez, A. A., Hibar, D. P., Senstad, R. E., Winkler, A. M., Toro, R., Appel, K., Bartecek, R., Bergmann, Ø., Bernard, M., Brown, A. A., Cannon, D. M., Chakravarty, M. M., Christoforou, A., Domin, M., Grimm, O., Hollinshead, M., Holmes, A. J., Homuth, G., Hottenga, J.-J., Langan, C., Lopez, L. M., Hansell, N. K., Hwang, K. S., Kim, S., Laje, G., Lee, P. H., Liu, X., Loth, E., Lourdusamy, A., Mattingsdal, M., Mohnke, S., Maniega, S. M., Nho, K., Nugent, A. C., O'Brien, C., Papmeyer, M., Pütz, B., Ramasamy, A., Rasmussen, J., Rijpkema, M., Risacher, S. L., Roddey, J. C., Rose, E. J., Ryten, M., Shen, L., Sprooten, E., Strengman, E., Teumer, A., Trabzuni, D., Turner, J., van Eijk, K., van Erp, T. G. M., van Tol, M.-J., Wittfeld, K., Wolf, C., Woudstra, S., Aleman, A., Alhusaini, S., Almasy, L., Binder, E. B., Brohawn, D. G., Cantor, R. M., Carless, M. A., Corvin, A., Czisch, M., Curran, J. E., Davies, G., de Almeida, M. A. A., Delanty, N., Depondt, C., Duggirala, R., Dyer, T. D., Erk, S., Fagerness, J., Fox, P. T., Freimer, N. B., Gill, M., Göring, H. H. H., Hagler, D. J., Hoehn, D., Holsboer, F., Hoogman, M., Hosten, N., Jahanshad, N., Johnson, M. P., Kasperaviciute, D., Kent, J. W. J., Kochunov, P., Lancaster, J. L., Lawrie, S. M., Liewald, D. C., Mandl, R., Matarin, M., Mattheisen, M., Meisenzahl, E., Melle, I., Moses, E. K., Mühleisen, T. W., Nauck, M., Nöthen, M. M., Olvera, R. L., Pandolfo, M., Pike, G. B., Puls, R., Reinvang, I., Rentería, M. E., Rietschel, M., Roffman, J. L., Royle, N. A., Rujescu, D., Savitz, J., Schnack, H. G., Schnell, K., Seiferth, N., Smith, C., Hernández, M. C. V., Steen, V. M., den Heuvel, M. V., van der Wee, N. J., Haren, N. E. M. V., Veltman, J. A., Völzke, H., Walker, R., Westlye, L. T., Whelan, C. D., Agartz, I., Boomsma, D. I., Cavalleri, G. L., Dale, A. M., Djurovic, S., Drevets, W. C., Hagoort, P., Hall, J., Heinz, A., Clifford, R. J., Foroud, T. M., Le Hellard, S., Macciardi, F., Montgomery, G. W., Poline, J. B., Porteous, D. J., Sisodiya, S. M., Starr, J. M., Sussmann, J., Toga, A. W., Veltman, D. J., Walter, H., Weiner, M. W., EPIGEN Consortium, IMAGEN Consortium, Saguenay Youth Study Group, Bis, J. C., Ikram, M. A., Smith, A. V., Gudnason, V., Tzourio, C., Vernooij, M. W., Launer, L. J., DeCarli, C., Seshadri, S., Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, Andreassen, O. A., Apostolova, L. G., Bastin, M. E., Blangero, J., Brunner, H. G., Buckner, R. L., Cichon, S., Coppola, G., de Zubicaray, G. I., Deary, I. J., Donohoe, G., de Geus, E. J. C., Espeseth, T., Fernández, G., Glahn, D. C., Grabe, H. J., Hardy, J., Hulshoff Pol, H. E., Jenkinson, M., Kahn, R. S., McDonald, C., McIntosh, A. M., McMahon, F. J., McMahon, K. L., Meyer-Lindenberg, A., Morris, D. W., Müller-Myhsok, B., Nichols, T. E., Ophoff, R. A., Paus, T., Pausova, Z., Penninx, B. W., Sämann, P. G., Saykin, A. J., Schumann, G., Smoller, J. W., Wardlaw, J. M., Weale, M. E., Martin, N. G., Franke, B., Wright, M. J., Thompson, P. M., & the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) Consortium (2012). Identification of common variants associated with human hippocampal and intracranial volumes. Nature Genetics, 44, 552-561. doi:10.1038/ng.2250.
Abstract
Identifying genetic variants influencing human brain structures may reveal new biological mechanisms underlying cognition and neuropsychiatric illness. The volume of the hippocampus is a biomarker of incipient Alzheimer's disease and is reduced in schizophrenia, major depression and mesial temporal lobe epilepsy. Whereas many brain imaging phenotypes are highly heritable, identifying and replicating genetic influences has been difficult, as small effects and the high costs of magnetic resonance imaging (MRI) have led to underpowered studies. Here we report genome-wide association meta-analyses and replication for mean bilateral hippocampal, total brain and intracranial volumes from a large multinational consortium. The intergenic variant rs7294919 was associated with hippocampal volume (12q24.22; N = 21,151; P = 6.70 × 10⁻¹⁶) and the expression levels of the positional candidate gene TESC in brain tissue. Additionally, rs10784502, located within HMGA2, was associated with intracranial volume (12q14.3; N = 15,782; P = 1.12 × 10⁻¹²). We also identified a suggestive association with total brain volume at rs10494373 within DDR2 (1q23.3; N = 6,500; P = 5.81 × 10⁻⁷).Additional information
Stein_Nature_Genetics_2012_Suppl_Info.pdf -
Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2012). Implicit acquisition of grammars with crossed and nested non-adjacent dependencies: Investigating the push-down stack model. Cognitive Science, 36, 1078-1101. doi:10.1111/j.1551-6709.2012.01235.x.
Abstract
A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study of language as a neurobiological system has been questioned and it has been suggested that a more relevant and partly analogous distinction is that between non-adjacent and adjacent dependencies. Online memory resources are central to the processing of non-adjacent dependencies as information has to be maintained across intervening material. One proposal is that an external memory device in the form of a limited push-down stack is used to process non-adjacent dependencies. We tested this hypothesis in an artificial grammar learning paradigm where subjects acquired non-adjacent dependencies implicitly. Generally, we found no qualitative differences between the acquisition of non-adjacent dependencies and adjacent dependencies. This suggests that although the acquisition of non-adjacent dependencies requires more exposure to the acquisition material, it utilizes the same mechanisms used for acquiring adjacent dependencies. We challenge the push-down stack model further by testing its processing predictions for nested and crossed multiple non-adjacent dependencies. The push-down stack model is partly supported by the results, and we suggest that stack-like properties are some among many natural properties characterizing the underlying neurophysiological mechanisms that implement the online memory resources used in language and structured sequence processing. -
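The push-down stack hypothesis tested here has a simple computational reading: a stack (last-in, first-out) naturally verifies nested dependencies, whereas crossed dependencies follow a first-in, first-out order. The sketch below (our illustration, not the study's materials) makes the contrast explicit by encoding dependency pairs as matching integers.

```python
from collections import deque

# Sketch (not the study's implementation) of why nested and crossed
# non-adjacent dependencies differ for a push-down stack. 'Opening'
# tokens start a dependency; 'closing' tokens must match them.

def check_nested(opens, closes):
    """Nested (mirror-like) dependencies: last opened, first closed.
    A single stack (LIFO) suffices."""
    stack = list(opens)
    for p in closes:
        if not stack or stack.pop() != p:
            return False
    return not stack

def check_crossed(opens, closes):
    """Crossed (copy-like) dependencies: first opened, first closed.
    A queue (FIFO), not a stack, matches this order."""
    queue = deque(opens)
    for p in closes:
        if not queue or queue.popleft() != p:
            return False
    return not queue

if __name__ == "__main__":
    print(check_nested([1, 2, 3], [3, 2, 1]))   # True: A1 A2 A3 B3 B2 B1
    print(check_crossed([1, 2, 3], [1, 2, 3]))  # True: A1 A2 A3 B1 B2 B3
    print(check_nested([1, 2, 3], [1, 2, 3]))   # False: a stack fails here
```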
Van den Brink, D., Van Berkum, J. J. A., Bastiaansen, M. C. M., Tesink, C. M. J. Y., Kos, M., Buitelaar, J. K., & Hagoort, P. (2012). Empathy matters: ERP evidence for inter-individual differences in social language processing. Social, Cognitive and Affective Neuroscience, 7, 173-182. doi:10.1093/scan/nsq094.
Abstract
When an adult claims he cannot sleep without his teddy bear, people tend to react with surprise. Language interpretation is, thus, influenced by social context, such as who the speaker is. The present study reveals inter-individual differences in brain reactivity to social aspects of language. Whereas women showed brain reactivity when stereotype-based inferences about a speaker conflicted with the content of the message, men did not. This sex difference in social information processing can be explained by a specific cognitive trait, one’s ability to empathize. Individuals who empathize to a greater degree revealed larger N400 effects (as well as a larger increase in γ-band power) to socially relevant information. These results indicate that individuals with high-empathizing skills are able to rapidly integrate information about the speaker with the content of the message, as they make use of voice-based inferences about the speaker to process language in a top-down manner. In contrast, individuals with lower empathizing skills did not use information about social stereotypes in implicit sentence comprehension, but rather took a more bottom-up approach to the processing of these social pragmatic sentences. -
Van Ackeren, M. J., Casasanto, D., Bekkering, H., Hagoort, P., & Rueschemeyer, S.-A. (2012). Pragmatics in action: Indirect requests engage theory of mind areas and the cortical motor network. Journal of Cognitive Neuroscience, 24, 2237-2247. doi:10.1162/jocn_a_00274.
Abstract
Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems used to perceive and interact with the physical world in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical–semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement. The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed. -
Wagensveld, B., Segers, E., Van Alphen, P. M., Hagoort, P., & Verhoeven, L. (2012). A neurocognitive perspective on rhyme awareness: The N450 rhyme effect. Brain Research, 1483, 63-70. doi:10.1016/j.brainres.2012.09.018.
Abstract
Rhyme processing is reflected in the electrophysiological signals of the brain as a negative deflection for non-rhyming as compared to rhyming stimuli around 450 ms after stimulus onset. Studies have shown that this N450 component is not solely sensitive to rhyme but also responds to other types of phonological overlap. In the present study, we examined whether the N450 component can be used to gain insight into the global similarity effect, the finding that rhyme judgment performance declines when participants are presented with word pairs that share a phonological overlap but do not rhyme (e.g., bell–ball). We presented 20 adults with auditory rhyming, globally similar overlapping and unrelated word pairs. In addition to measuring behavioral responses by means of a yes/no button press, we also recorded EEG. The behavioral data showed a clear global similarity effect; participants judged overlapping pairs more slowly than unrelated pairs. However, the neural outcomes did not provide evidence that the N450 effect responds differentially to globally similar and unrelated word pairs, suggesting that globally similar and dissimilar non-rhyming pairs are processed in a similar fashion at the stage of early lexical access. -
Wang, L., Jensen, O., Van den Brink, D., Weder, N., Schoffelen, J.-M., Magyari, L., Hagoort, P., & Bastiaansen, M. C. M. (2012). Beta oscillations relate to the N400m during language comprehension. Human Brain Mapping, 33, 2898-2912. doi:10.1002/hbm.21410.
Abstract
The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time–frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, but not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the LIFG, a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension.Additional information
Wang_Supporting Information Figure 1.eps Wang_Supporting Information Figure 3.eps -
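The regression analysis described in this abstract relates a per-subject ERF measure to a per-subject oscillatory power measure. A minimal sketch of that kind of analysis on simulated data (all values and names here are invented for illustration, not taken from the study) might look as follows.

```python
import numpy as np

# Hypothetical sketch of an ERF-vs-oscillatory-power regression, as
# described in the abstract above. Data are simulated.
rng = np.random.default_rng(0)

n_subjects = 20
beta_power = rng.normal(0.0, 1.0, n_subjects)                 # relative beta change
n400m = 0.5 * beta_power + rng.normal(0.0, 0.5, n_subjects)   # simulated ERF amplitude

# Ordinary least squares: n400m ~ intercept + slope * beta_power
X = np.column_stack([np.ones(n_subjects), beta_power])
coef, *_ = np.linalg.lstsq(X, n400m, rcond=None)
intercept, slope = coef
r = np.corrcoef(beta_power, n400m)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.2f}")
```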
Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2012). Information structure influences depth of syntactic processing: Event-related potential evidence for the Chomsky illusion. PLoS One, 7(10), e47917. doi:10.1371/journal.pone.0047917.
Abstract
Information structure facilitates communication between interlocutors by highlighting relevant information. It has previously been shown that information structure modulates the depth of semantic processing. Here we used event-related potentials to investigate whether information structure can modulate the depth of syntactic processing. In question-answer pairs, subtle (number agreement) or salient (phrase structure) syntactic violations were placed either in focus or out of focus through information structure marking. P600 effects to these violations reflect the depth of syntactic processing. For subtle violations, a P600 effect was observed in the focus condition, but not in the non-focus condition. For salient violations, comparable P600 effects were found in both conditions. These results indicate that information structure can modulate the depth of syntactic processing, but that this effect depends on the salience of the information. When subtle violations are not in focus, they are processed less elaborately. We label this phenomenon the Chomsky illusion. -
Xiang, H., Dediu, D., Roberts, L., Van Oort, E., Norris, D., & Hagoort, P. (2012). The structural connectivity underpinning language aptitude, working memory and IQ in the perisylvian language network. Language Learning, 62(Supplement S2), 110-130. doi:10.1111/j.1467-9922.2012.00708.x.
Abstract
We carried out the first study on the relationship between individual language aptitude and structural connectivity of language pathways in the adult brain. We measured four components of language aptitude (vocabulary learning, VocL; sound recognition, SndRec; sound-symbol correspondence, SndSym; and grammatical inferencing, GrInf) using the LLAMA language aptitude test (Meara, 2005). Spatial working memory (SWM), verbal working memory (VWM) and IQ were also measured as control factors. Diffusion Tensor Imaging (DTI) was employed to investigate the structural connectivity of language pathways in the perisylvian language network. Principal Component Analysis (PCA) on behavioural measures suggests that a general ability might be important to the first stages of L2 acquisition. It also suggested that VocL, SndSym and SWM are more closely related to general IQ than SndRec and VocL, and distinguished the tasks specifically designed to tap into L2 acquisition (VocL, SndRec, SndSym and GrInf) from more generic measures (IQ, SWM and VWM). Regression analysis suggested significant correlations between most of these behavioural measures and the structural connectivity of certain language pathways, i.e., VocL and BA47-Parietal pathway, SndSym and inter-hemispheric BA45 pathway, GrInf and BA45-Temporal pathway and BA6-Temporal pathway, IQ and BA44-Parietal pathway, BA47-Parietal pathway, BA47-Temporal pathway and inter-hemispheric BA45 pathway, SWM and inter-hemispheric BA6 pathway and BA47-Parietal pathway, and VWM and BA47-Temporal pathway. These results are discussed in relation to relevant findings in the literature. -
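The pipeline in this abstract combines PCA over the behavioural measures with regressions against structural connectivity. A schematic sketch on simulated data (the column labels follow the abstract; all numbers and the connectivity index are invented for illustration) could look like this.

```python
import numpy as np

# Illustrative sketch (simulated data, not the study's) of the
# analysis pipeline described above: PCA over behavioural measures,
# then regression of one measure on a connectivity index.
rng = np.random.default_rng(42)

# Columns: VocL, SndRec, SndSym, GrInf, IQ, SWM, VWM (simulated, centered)
scores = rng.normal(size=(30, 7))
scores -= scores.mean(axis=0)

# PCA via singular value decomposition
_, s, _ = np.linalg.svd(scores, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by PC1:", round(explained[0], 2))

# Regression of VocL (column 0) on a simulated DTI connectivity index
connectivity = rng.normal(size=30)
X = np.column_stack([np.ones(30), connectivity])
coef, *_ = np.linalg.lstsq(X, scores[:, 0], rcond=None)
print("slope of VocL ~ connectivity:", round(coef[1], 2))
```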
Zhu, Z., Hagoort, P., Zhang, J. X., Feng, G., Chen, H.-C., Bastiaansen, M. C. M., & Wang, S. (2012). The anterior left inferior frontal gyrus contributes to semantic unification. NeuroImage, 60, 2230-2237. doi:10.1016/j.neuroimage.2012.02.036.
Abstract
Semantic unification, the process by which small blocks of semantic information are combined into a coherent utterance, has been studied with various types of tasks. However, whether the brain activations reported in these studies are attributed to semantic unification per se or to other task-induced concomitant processes still remains unclear. The neural basis for semantic unification in sentence comprehension was examined using event-related potentials (ERP) and functional Magnetic Resonance Imaging (fMRI). The semantic unification load was manipulated by varying the goodness of fit between a critical word and its preceding context (in high cloze, low cloze and violation sentences). The sentences were presented in a serial visual presentation mode. The participants were asked to perform one of three tasks: semantic congruency judgment (SEM), silent reading for comprehension (READ), or font size judgment (FONT), in separate sessions. The ERP results showed a similar N400 amplitude modulation by the semantic unification load across all of the three tasks. The brain activations associated with the semantic unification load were found in the anterior left inferior frontal gyrus (aLIFG) in the FONT task and in a widespread set of regions in the other two tasks. These results suggest that the aLIFG activation reflects semantic unification itself, in contrast to other brain activations that may reflect task-specific strategic processing.Additional information
Zhu_2012_suppl.dot