Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). Synthesized size-sound symbolism. Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-10 - 2016-08-13.
Abstract
Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition. -
Schoot, L., Stolk, A., Hagoort, P., Garrod, S., Segaert, K., & Menenti, L. (2016). Finding your way in the zoo: How situation model alignment affects interpersonal neural coupling. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
INTRODUCTION: We investigated how speaker-listener alignment at the level of the situation model is reflected in inter-subject correlations in temporal and spatial patterns of brain activity, also known as between-brain neural coupling (Stephens et al., 2010). We manipulated the complexity of the situation models that needed to be communicated (simple vs complex situation model) to investigate whether this affects neural coupling between speaker and listener. Furthermore, we investigated whether the degree to which alignment was successful was positively related to the degree of between-brain coupling. METHOD: We measured neural coupling (using fMRI) between speakers describing abstract zoo maps, and listeners interpreting those descriptions. Each speaker described both a ‘simple’ map, a 6x6 grid including five animal locations, and a ‘complex’ map, an 8x8 grid including seven animal locations, from memory, and with the order of map description randomized across speakers. Audio-recordings of the speakers’ utterances were then replayed to the listeners, who had to reconstruct the zoo maps on the basis of their speakers’ descriptions. On the group level, we used a GLM approach to model between-brain neural coupling as a function of condition (simple vs complex map). Communicative success, i.e. map reproduction accuracy, was added as a covariate. RESULTS: Whole brain analyses revealed a positive relationship between communicative success and the strength of speaker-listener neural coupling in the left inferior parietal cortex. That is, the more successful listeners were in reconstructing the map based on what their partner described, the stronger the correlation between that speaker's and listener's BOLD signals in that area. Furthermore, within the left inferior parietal cortex, pairs in the complex situation model condition showed stronger between-brain neural coupling than pairs in the simple situation model condition. DISCUSSION: This is the first two-brain study to explore the effects of the complexity of the communicated situation model and the degree of communicative success on (language-driven) between-brain neural coupling. Interestingly, our effects were located in the inferior parietal cortex, previously associated with visuospatial imagery. This process likely plays a role in our task, in which the communicated situation models had a strong visuospatial component. Given that there was more coupling the more situation models were successfully aligned (i.e. map reproduction accuracy), it was surprising that we found stronger coupling in the complex than the simple situation model condition. We plan follow-up ROI analyses in primary auditory, core language, and discourse processing regions. The present findings open the way for exploring the interaction between situation models and linguistic computations during communication.
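For readers unfamiliar with the coupling measure, here is a minimal sketch of the core computation (Pearson correlation between a speaker's and a listener's BOLD time courses in one region, with an assumed speaker-to-listener lag). This is an illustration only, not the authors' pipeline; all names and values are hypothetical.

```python
import numpy as np

def neural_coupling(speaker_bold, listener_bold, lag=0):
    """Correlate a speaker's BOLD time course with a listener's,
    shifting the listener by `lag` volumes (listeners typically
    track speakers at a short delay)."""
    if lag > 0:
        speaker_bold = speaker_bold[:-lag]
        listener_bold = listener_bold[lag:]
    return np.corrcoef(speaker_bold, listener_bold)[0, 1]

# Stand-in data: 200 volumes from one ROI for one speaker-listener pair.
rng = np.random.default_rng(0)
speaker = rng.standard_normal(200)
listener = 0.4 * speaker + rng.standard_normal(200)  # partially coupled
print(neural_coupling(speaker, listener, lag=2))
```

Per-pair coupling values computed this way can then enter a group-level GLM with condition (simple vs complex) and communicative success as regressors, as described in the abstract. -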
Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Maybe syntactic alignment is not affected by social goals? Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
Although it is suggested that linguistic alignment can be influenced by speakers' relationship with their listener, previous studies provide inconsistent results. We tested whether speakers' desire to be liked affects syntactic alignment, and simultaneously assessed whether alignment affects perceived likeability. Primed participants (PPs) were primed by another naive participant (the Evaluator). PP and Evaluator took turns describing photographs with active/passive sentences. Unknown to the PP, we controlled the Evaluator's syntax by having them read out sentences. PPs' desire to be liked was manipulated by assigning pairs to a Control (secret evaluation by the Evaluator), Evaluation (PPs were aware of evaluation), or Directed Evaluation (PPs knew about the evaluation and were instructed to make a positive impression) condition. PPs showed significant syntactic alignment (more passives produced after passive primes). However, there was no interaction with condition: PPs did not align more in the (Directed) Evaluation than in the Control condition. Our results thus do not support the conclusion that speakers' desire to be liked affects syntactic alignment. Furthermore, there was no reliable relationship between syntactic alignment and how likeable PPs appeared to their Evaluator: there was a negative effect in the Control and Evaluation conditions, but no relationship in the Directed Evaluation condition. -
Sharoh, D., van Mourik, T., Bains, L. J., Segaert, K., Weber, K., Hagoort, P., & Norris, D. G. (2016). Investigation of depth-dependent BOLD during language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Neocortex is known to be histologically organized with respect to depth, and neuronal connections across cortical layers form part of the brain's functional organization [1]. Efferent (outgoing) and afferent (incoming) inter-regional connections are found to originate and terminate at different depths, and this structure relates to the internal/external origin of neuronal activity. Specifically, efferent inter-regional connections are associated with internally directed, top-down activity; afferent inter-regional connections are associated with bottom-up activity originating from external stimulation. The contribution of top-down and bottom-up neuronal activity to the BOLD signal can perhaps be inferred from depth-related fluctuations in BOLD. By dissociating top-down from bottom-up effects in fMRI, investigators could observe the relative contribution of internally and externally generated activity to the BOLD signal, and potentially test hypotheses regarding the directionality of BOLD connectivity. Previous investigation of depth-dependent BOLD has focused on human visual cortex [2]. In the present work, we have designed an experiment to serve as a proof of principle that (1) depth-dependent BOLD can be measured in higher cortical areas during a language processing task, and (2) differences in the relative contribution of the BOLD signal at discrete depths, to the total BOLD signal, vary as a function of experimental condition. Data were collected on the Siemens 7T scanner at the Hahn Institute in Essen, Germany. Submillimeter (0.8 mm³), T1-weighted data were acquired using MP2RAGE, along with near whole-brain, submillimeter (0.9 x 0.9 x 0.943 mm, 112 slices) 3D-EPI task data. The field of view fully covered bilateral temporal and fusiform regions, but excluded superior brain areas on the order of several centimeters. Participants were presented with an event-related paradigm involving the presentation of words, pseudowords and nonwords in visual and auditory modalities. Only the visual modality is discussed here. Cortical segmentation was performed using FreeSurfer's surface pipeline. We parcellated the gray matter volume into discrete depths, and the analysis of depth-dependent BOLD was performed with the Laminar Analysis Toolbox (van Mourik). Further analysis was performed using FreeSurfer, AFNI and in-house MATLAB code. Regions included in the depth-dependent analysis were determined by first-level analysis. We have presently collected data from 10 participants; 4 were excluded due to equipment malfunction. In the first-level analysis (volume registration, smoothing, GLM, and significance testing), we observe fusiform activation for the Realword > Nonword and Pseudoword > Nonword contrasts. These contrasts additionally show activation along the middle temporal gyrus. The depth-dependent analysis was performed on fusiform clusters generated during the first-level analysis. These clusters appeared to show depth-dependent signal differences as a function of experimental condition. We suspect these differences may be related to layer-specific activation and reflect the relative contribution of top-down and bottom-up activity in the observed signal. These are preliminary results, and part of an ongoing effort to establish novel, depth-dependent analysis techniques in higher cortical areas and within the language domain.
Future analysis will investigate the nature of the depth-dependent differences and the connectivity profiles of depth-dependent variation among distal cortical regions. [1] Douglas, R. J., & Martin, K. A. C. (2004). Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27, 419-451. [2] Kok, P., et al. (2016). Selective activation of the deep layers of the human primary visual cortex by top-down feedback. Current Biology, 26, 371-376. -
Tan, Y., Acheson, D. J., & Hagoort, P. (2016). Moving beyond single words: Dissociating levels of linguistic representation in short-term memory (STM). Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
This study assessed the role of semantic, phonological, and grammatical levels of representation in short-term list recall through a 2 (meaningfulness) × 2 (phonological similarity) × 2 (grammaticality) manipulation. Dutch subjects (Experiments 1-2), English subjects (Experiments 3-4) and seven aphasic patients (Experiment 5) were required to recall lists consisting of adjective-noun word-pairs. Within each list, meaningfulness was manipulated by pairing adjectives and nouns in a meaningful or non-meaningful way; phonological similarity was manipulated through the degree of phonological overlap between words; grammaticality was manipulated through the order of the adjective and noun within each word pair in English (e.g., “salty meat” vs. “meat salty”) and through morphological agreement in Dutch. Overall, subjects showed better recall for words in the meaningful, phonologically-dissimilar, and grammatical conditions. Moreover, by relating these main effects to subjects' phonological and semantic STM capacity, we found that subjects with better phonological STM were less affected by the meaningfulness manipulation, while subjects with better semantic STM were less affected by the phonological manipulations. These results demonstrate that there are multiple routes to grouping information in STM via the combinatorial constraints afforded by language, and subjects might benefit from additional cues when memory load is high at certain level(s). -
Udden, J., Hulten, A., Schoffelen, J.-M., Lam, N., Kempen, G., Petersson, K. M., & Hagoort, P. (2016). Dynamics of supramodal unification processes during sentence comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
It is generally assumed that structure building processes in the spoken and written modalities are subserved by modality-independent lexical, morphological, grammatical, and conceptual processes. We present a large-scale neuroimaging study (N=204) on whether the unification of sentence structure is supramodal in this sense, testing if observations replicate across written and spoken sentence materials. The activity in the unification network should increase when it is presented with a challenging sentence structure, irrespective of the input modality. We build on the well-established findings that multiple non-local dependencies, overlapping in time, are challenging and that language users disprefer left- over right-branching sentence structures in written and spoken language, at least in the context of mainly right-branching languages such as English and Dutch. We thus focused our study with Dutch participants on a left-branching processing complexity measure. Supramodal effects of left-branching complexity were observed in a left-lateralized perisylvian network. The left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching processing complexity. The left anterior middle temporal gyrus (LaMTG) and left inferior parietal lobe (LIPL) were also significant, although less specifically. The LaMTG was increasingly active also for sentences with increasing right-branching processing complexity. A direct comparison between left- and right-branching processing complexity yielded activity in an LIFG ROI for left > right-branching complexity, while the right > left contrast showed no activation. Using a linear contrast testing for increases in the left-branching complexity effect over the sentence, we found significant activity in LIFG and LpMTG. In other words, the activity in these regions increased from sentence onset to end, in parallel with the increase of the left-branching complexity measure. No similar increase was observed in LIPL. Thus, the observed functional segregation during sentence processing of LaMTG and LIPL vs. LIFG and LpMTG is consistent with our observation of differential activation changes in sensitivity to left- vs. right-branching structure. While LIFG, LpMTG, LaMTG and LIPL all contribute to the supramodal unification processes, the results suggest that these regions differ in their respective contributions to the subprocesses of unification. Our results speak to the high processing costs of (1) simultaneous unification and (2) maintenance of constituents that are not yet attached to the already unified part of the sentence. Sentences with high left- (compared to right-) branching complexity impose an added load on unification. We show that this added load leads to an increased BOLD response in left perisylvian regions. The results are relevant for understanding the neural underpinnings of the processing difficulty linked to multiple, overlapping non-local dependencies. In conclusion, we used the left- and right branching complexity measures to index this processing difficulty and showed that the unification network operates with similar spatiotemporal dynamics over the course of the sentence, during unification of both written and spoken sentences. -
Van den Broek, D., Uhlmann, M., Fitz, H., Hagoort, P., & Petersson, K. M. (2016). Spiking neural networks for semantic processing. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
-
Weber, K., Meyer, A. S., & Hagoort, P. (2016). The acquisition of verb-argument and verb-noun category biases in a novel word learning task. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
We show that language users readily learn the probabilities of novel lexical cues to syntactic information (verbs biasing towards a prepositional object dative vs. double-object dative and words biasing towards a verb vs. noun reading) and use these biases in a subsequent production task. In a one-hour exposure phase participants read 12 novel lexical items, embedded in 30 sentence contexts each, in their native language. The items were either strongly (100%) biased towards one grammatical frame or syntactic category assignment or unbiased (50%). The next day participants produced sentences with the newly learned lexical items. They were given the sentence beginning up to the novel lexical item. Their output showed that they were highly sensitive to the biases introduced in the exposure phase.
Given this rapid learning and use of novel lexical cues, this paradigm opens up new avenues to test sentence processing theories. Thus, with close control on the biases participants are acquiring, competition between different frames or category assignments can be investigated using reaction times or neuroimaging methods.
Generally, these results show that language users adapt to the statistics of the linguistic input, even to subtle lexically-driven cues to syntactic information. -
Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing speech production-perception interactions through individual differences. Talk presented at Psycholinguistics in Flanders. Marche-en-Famenne. 2015-05-21 - 2015-05-22.
Abstract
This study aims to test recent theoretical frameworks in speech motor control which claim that speech production targets are specified in auditory terms. According to such frameworks, people with better auditory acuity should have more precise speech targets. Participants performed speech perception and production tasks in a counterbalanced order. Speech perception acuity was assessed using an adaptive speech discrimination task, where participants discriminated between stimuli on a /ɪ/-/ɛ/ and a /ɑ/-/ɔ/ continuum. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording of the vowels /ɪ/, /ɛ/, /ɑ/ and /ɔ/ in 288 pseudowords (18 per vowel, each of which was repeated 4 times). We predicted that speech production variability would correlate inversely with discrimination performance. Results confirmed this prediction, as better discriminators had more distinctive vowel production targets. In addition, participants with higher auditory acuity produced vowels with smaller within-phoneme variability that were spaced farther apart in vowel space. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interactions.
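As an illustration of the individual-differences logic, here is a sketch of how one might quantify per-participant production variability from (F1, F2) measurements and relate it to discrimination performance. The function, the vowel labels, and the synthetic data are hypothetical stand-ins, not the authors' analysis code.

```python
import numpy as np
from scipy.stats import spearmanr

def within_phoneme_variability(formants):
    """Mean Euclidean distance of (F1, F2) tokens to their vowel
    centroid, averaged over vowels. `formants` maps each vowel to
    an array of shape (n_tokens, 2)."""
    dists = []
    for tokens in formants.values():
        centroid = tokens.mean(axis=0)
        dists.append(np.linalg.norm(tokens - centroid, axis=1).mean())
    return float(np.mean(dists))

# Stand-in tokens: 18 (F1, F2) measurements for each of four vowels.
rng = np.random.default_rng(1)
formants = {v: rng.normal(loc=(i + 1) * 100, scale=20, size=(18, 2))
            for i, v in enumerate("ieao")}
print(within_phoneme_variability(formants))

# Across participants, the prediction is a positive rank correlation
# between production variability and discrimination thresholds
# (higher threshold = poorer acuity).
variability = rng.random(30)
thresholds = 0.8 * variability + 0.2 * rng.random(30)
print(spearmanr(variability, thresholds))
```

Note the sign convention: poorer discrimination (higher thresholds) should go with larger production variability, which is the inverse relation between acuity and variability reported above. -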
Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing the link between speech perception and production through individual differences. Poster presented at International Congress of Phonetic Sciences, Glasgow, UK.
Abstract
This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interactions. -
Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Effects of auditory feedback consistency on vowel production. Poster presented at Psycholinguistics in Flanders, Marche-en-Famenne.
Abstract
In investigations of feedback control during speech production, researchers have focused on two different kinds of responses to erroneous or unexpected auditory feedback. Compensation refers to online, feedback-based corrections of articulations. In contrast, adaptation refers to long-term changes in the speech production system after exposure to erroneous/unexpected feedback, which may last even after feedback is normal again. In the current study, we aimed to compare both types of feedback responses by investigating the conditions under which the system starts adapting in addition to merely compensating. Participants vocalized long vowels while they were exposed to either consistently altered auditory feedback, or to feedback that was unpredictably either altered or normal. Participants were not aware of the manipulation of auditory feedback. We predicted that both conditions would elicit compensation, whereas adaptation would be stronger when the altered feedback was consistent across trials. The results show that although there seems to be somewhat more adaptation for the consistently altered feedback condition, a substantial amount of individual variability led to statistically unreliable effects at the group level. The results stress the importance of taking into account individual differences and show that people vary widely in how they respond to altered auditory feedback.
Additional information
http://figshare.com/articles/Effects_of_auditory_feedback_consistency_on_vowel_… -
Franken, M. K., Eisner, F., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Following and Opposing Responses to Perturbed Auditory Feedback. Poster presented at Society for the Neurobiology of Language Annual Meeting 2015, Chicago, IL.
-
Hagoort, P. (2015). De Nationale Wetenschapsagenda [Lecture]. Talk presented at the Society of Spinoza Prize Winners. Den Haag, the Netherlands. 2015-04-13.
-
Hagoort, P. (2015). De verbeelding van het brein [TedX presentation]. Talk presented at the Opening of UMC Radboud academic year 2015-2016. Nijmegen, the Netherlands. 2015-08-31.
Additional information
YouTube video -
Hagoort, P. (2015). Cognitive science and the humanities: Linguistics quo vadis? Talk presented at the SMART Cognitive Science: the Amsterdam Conference. Amsterdam, the Netherlands. 2015-03-25 - 2015-03-26.
-
Hagoort, P. (2015). From language to communication from an embrained perspective [Keynote lecture]. Talk presented at SMART Cognitive Science: the Amsterdam Conference. Amsterdam. 2015-03-27.
-
Hagoort, P. (2015). Language from an embrained perspective [Plenary lecture]. Talk presented at ENP Days La Cluzaz. La Cluzaz-Annecy. 2015-01-22 - 2015-01-23.
-
Hagoort, P. (2015). Language from an embrained perspective: it is hard to give a good lecture [Keynote lecture]. Talk presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015). Paphos-Cyprus. 2015-09-17 - 2015-09-20.
-
Hagoort, P. (2015). Het talige brein. Talk presented at MPI Open Day. Nijmegen. 2015-06-27.
-
Hagoort, P. (2015). Neurobiology of Language. Talk presented at the LOT Winterschool 2015. Amsterdam, the Netherlands. 2015-01-12 - 2015-01-16.
-
Hagoort, P. (2015). Neurobiology of Language; Peter's 5 principles. Talk presented at the Theme1 meeting of the Donders Institute. Nijmegen, the Netherlands. 2015-06.
-
Hagoort, P. (2015). Vijf kanttekeningen bij het liberalisme vanuit een cognitief-neurowetenschappelijk perspectief [Lecture]. Talk presented at the Telders Stichting. Den Haag, the Netherlands. 2015-07.
-
Hartung, F., Hagoort, P., & Willems, R. M. (2015). Simulation and mental imagery of complex events: Differences and commonalities. Poster presented at Seventh Annual Meeting of the Society for the Neurobiology of Language (SNL), Chicago, Illinois, USA.
-
Hartung, F., Hagoort, P., & Willems, R. M. (2015). Simulation versus mental imagery: commonalities and differences. Talk presented at 8th annual Conference on Embodied and Situated Language Processing (ESLP). Lyon, France. 2015-07-29 - 2015-07-30.
-
Heyselaar, E., Segaert, K., Wester, A. J., Kessels, R. P. C., & Hagoort, P. (2015). Syntactic operations rely on implicit memory: Evidence from patients with amnesia. Poster presented at the Individual Differences in Language Processing across the adult Life Span Workshop, Nijmegen, Netherlands.
-
Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
-
Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The neural integration of pointing gesture and speech in a visual context: An fMRI study. Poster presented at the 7th Annual Society for the Neurobiology of Language Conference (SNL 2015), Chicago, USA.
Additional information
http://www.neurolang.org/programs/SNL2015_Abstracts.pdf -
Tromp, J., Peeters, D., Hagoort, P., & Meyer, A. S. (2015). Combining EEG and virtual reality: The N400 in a virtual environment. Talk presented at the 4th edition of the Donders Discussions (DD, 2015). Nijmegen, Netherlands. 2015-11-05 - 2015-11-06.
Abstract
A recurring criticism in the field of psycholinguistics is the lack of ecological validity of experimental designs. For example, many experiments on sentence comprehension are conducted in enclosed booths, where sentences are presented word by word on a computer screen. In addition, very often participants are instructed to make judgments that relate directly to the experimental manipulation. Thus, the contexts in which these processes are studied are quite restricted, which calls into question the generalizability of the results to more naturalistic environments. A possible solution to this problem is the use of virtual reality (VR) in psycholinguistic experiments. By immersing participants into a virtual environment, ecological validity can be increased while experimental control is maintained.
In the current experiment we combine electroencephalography (EEG) and VR to look at semantic processing in a more naturalistic setting. During the experiment, participants move through a visually rich virtual restaurant. Tables and avatars are placed in the restaurant, and participants are instructed to stop at each table and look at the object (e.g. a plate with a steak) in front of the avatar. The avatar then produces an utterance to accompany the object (e.g. “I think this steak is very nice”), in which the noun either matches (e.g. steak) or mismatches (e.g. mandarin) the item on the table. Based on previous research, we predict a modulation of the N400, which should be larger in the mismatch than in the match condition. Implications of the use of virtual reality for experimental research will be discussed. -
Tromp, J., Hagoort, P., & Meyer, A. S. (2015). Indirect request comprehension requires additional processing effort: A pupillometry study. Poster presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015), Paphos, Cyprus.
-
Tromp, J., Meyer, A. S., & Hagoort, P. (2015). Pupillometry reveals increased processing demands for indirect request comprehension. Poster presented at the 14th International Pragmatics Conference, Antwerp, Belgium.
Abstract
Fluctuations in pupil size have been shown to reflect variations in processing demands during language comprehension. Increases in pupil diameter have been observed as a consequence of syntactic anomalies (Schluroff 1982), increased syntactic complexity (Just & Carpenter 1993) and lexical ambiguity (Ben-Nun 1986). An issue that has not received attention is whether pupil size also varies due to pragmatic manipulations. In a pupillometry experiment, we investigated whether pupil diameter is sensitive to increased processing demands as a result of comprehending an indirect request versus a statement. During natural conversation, communication is often indirect. For example, in an appropriate context, “It's cold in here” is a request to shut the window, rather than a statement about room temperature (Holtgraves 1994). We tested 49 Dutch participants (mean age = 20.8). They were presented with 120 picture-sentence combinations that could either be interpreted as an indirect request (a picture of a window with the sentence “it's hot here”) or as a statement (a picture of a window with the sentence “it's nice here”). The indirect requests were non-conventional, i.e. they did not contain directive propositional content and were not directly related to the underlying felicity conditions (Holtgraves 2002). In order to verify that the indirect requests were recognized, participants were asked to decide after each combination whether or not they heard a request. Based on the hypothesis that understanding this type of indirect utterance requires additional inferences to be made on the part of the listener (e.g., Holtgraves 2002; Searle 1975; Van Ackeren et al. 2012), we predicted a larger pupil diameter for indirect requests than statements. The data were analyzed using linear mixed-effects models in R, which allow for simultaneous inclusion of participants and items as random factors (Baayen, Davidson, & Bates 2008). The results revealed a larger mean pupil size and a larger peak pupil size for indirect requests as compared to statements. In line with previous studies on pupil size and language comprehension (e.g., Just & Carpenter 1993), this difference was observed within a 1.5 second window after critical word onset. We suggest that the increase in pupil size reflects additional on-line processing demands for the comprehension of non-conventional indirect requests as compared to statements. This supports the idea that comprehending this type of indirect request requires capacity-demanding inferencing on the part of the listener. In addition, this study demonstrates the usefulness of pupillometry as a tool for experimental research in pragmatics.
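To make the analysis concrete: a minimal sketch of a mixed-effects model with crossed random intercepts for participants and items, written in Python with statsmodels rather than the lme4/R setup the authors used (in R this would be pupil ~ condition + (1|subject) + (1|item)). All data here are synthetic stand-ins.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format trial data (smaller than the real 49 x 120 design).
rng = np.random.default_rng(2)
n_subj, n_items = 10, 20
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_items),
    "item": np.tile(np.arange(n_items), n_subj),
})
df["condition"] = np.where(df["item"] % 2 == 0, "request", "statement")
df["pupil"] = 0.1 * (df["condition"] == "request") + rng.standard_normal(len(df))

# Crossed random intercepts expressed as variance components within one
# dummy group (statsmodels' workaround for lme4-style crossed effects).
df["all"] = 1
model = smf.mixedlm(
    "pupil ~ condition", df, groups="all",
    vc_formula={"subject": "0 + C(subject)", "item": "0 + C(item)"},
)
print(model.fit().summary())
```

The fixed effect of condition then estimates the request-versus-statement difference in pupil size while participant and item variability are absorbed by the random effects. -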
Tromp, J., Meyer, A. S., & Hagoort, P. (2015). Pupillometry reveals increased processing demands for indirect request comprehension. Poster presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015), Valetta, Malta.
Abstract
Fluctuations in pupil size have been shown to reflect variations in processing demands during language comprehension. Increases in pupil diameter have been observed as a consequence of syntactic anomalies (Schluroff 1982), increased syntactic complexity (Just & Carpenter 1993) and lexical ambiguity (Ben-Nun 1986). An issue that has not received attention is whether pupil size also varies due to pragmatic manipulations. In a pupillometry experiment, we investigated whether pupil diameter is sensitive to increased processing demands as a result of comprehending an indirect request versus a statement. During natural conversation, communication is often indirect. For example, in an appropriate context, “It's cold in here” is a request to shut the window, rather than a statement about room temperature (Holtgraves 1994). We tested 49 Dutch participants (mean age = 20.8). They were presented with 120 picture-sentence combinations that could either be interpreted as an indirect request (a picture of a window with the sentence “it's hot here”) or as a statement (a picture of a window with the sentence “it's nice here”). The indirect requests were non-conventional, i.e. they did not contain directive propositional content and were not directly related to the underlying felicity conditions (Holtgraves 2002). In order to verify that the indirect requests were recognized, participants were asked to decide after each combination whether or not they heard a request. Based on the hypothesis that understanding this type of indirect utterance requires additional inferences to be made on the part of the listener (e.g., Holtgraves 2002; Searle 1975; Van Ackeren et al. 2012), we predicted a larger pupil diameter for indirect requests than statements. The data were analyzed using linear mixed-effects models in R, which allow for simultaneous inclusion of participants and items as random factors (Baayen, Davidson, & Bates 2008). The results revealed a larger mean pupil size and a larger peak pupil size for indirect requests as compared to statements. In line with previous studies on pupil size and language comprehension (e.g., Just & Carpenter 1993), this difference was observed within a 1.5 second window after critical word onset. We suggest that the increase in pupil size reflects additional on-line processing demands for the comprehension of non-conventional indirect requests as compared to statements. This supports the idea that comprehending this type of indirect request requires capacity-demanding inferencing on the part of the listener. In addition, this study demonstrates the usefulness of pupillometry as a tool for experimental research in pragmatics. -
Udden, J., Snijders, T. M., Fisher, S. E., & Hagoort, P. (2015). A common variant of the CNTNAP2 gene is associated with structural variation in the dorsal visual stream and language-related regions of the right hemisphere. Poster presented at the 7th Annual Society for the Neurobiology of Language Conference (SNL 2015), Chicago, USA.
-
Udden, J., Hulten, A., Kucera, K. S., Vino, A., Fisher, S. E., & Hagoort, P. (2015). No association of genetic variants of FOXP2 and BOLD response during sentence processing. Poster presented at the 7th Annual Society for the Neurobiology of Language Conference (SNL 2015), Chicago, USA.
-
Acheson, D. J., Veenstra, A., Meyer, A. S., & Hagoort, P. (2014). EEG pattern classification of semantic and syntactic influences on subject-verb agreement in production. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
Subject-verb agreement is one of the most common grammatical encoding operations in language production. In many languages, morphological inflection on verbs codes for the number of the head noun of a subject phrase (e.g., The key to the cabinets is rusty). Despite the relative ease with which subject-verb agreement is accomplished, people sometimes make agreement errors (e.g., The key to the cabinets are rusty). Such errors offer a window into the early stages of production planning. Agreement errors are influenced by both syntactic and semantic factors, and are more likely to occur when a sentence contains either conceptual or syntactic number mismatches. Little is known about the timecourse of these influences, however, and some controversy exists as to whether they are independent. The current study was designed to address these two issues using EEG. Semantic and syntactic factors influencing number mismatch were factorially manipulated in a forced-choice sentence completion paradigm. To avoid EEG artifact associated with speaking, participants (N=20) were presented with a noun-phrase, and pressed a button to indicate which version of the verb ‘to be’ (is/are) should continue the sentence. Semantic number was manipulated using preambles that were semantically integrated or unintegrated. Semantic integration refers to the semantic relationship between nouns in a noun-phrase, with integrated items promoting conceptual singularity. The syntactic manipulation was the number (singular/plural) of the local noun preceding the decision. This led to preambles such as “The pizza with the yummy topping(s)...” (integrated) vs. “The pizza with the tasty beverage(s)...” (unintegrated). Behavioral results showed effects of both Local Noun Number and Semantic Integration, with more errors and longer reaction times occurring in the mismatching conditions (i.e., plural local nouns; unintegrated subject phrases). Classic ERP analyses locked to the local noun (0-700 ms) and to the time preceding the response (-600 to 0 ms) showed no systematic differences between conditions. Despite this result, we assessed whether differences might emerge using multivariate pattern analysis (MVPA). Using the same epochs as above, support-vector machines with a radial basis function were trained on the single-trial level to classify the difference between Local Noun Number and Semantic Integration conditions across time and channels. Results revealed that both conditions could be reliably classified at the single subject level, and that classification accuracy was strongest in the epoch preceding the response. Classification accuracy was at chance when a classifier trained to dissociate Local Noun Number was used to predict Semantic Integration (and vice versa), providing some evidence of the independence of the two effects. Significant inter-subject variability was present in the channels and time-points that were critical for classification, but earlier time-points were more often important for classifying Local Noun Number than Semantic Integration. One result of this variability is that classification performed across subjects was at chance, which may explain the failure to find standard ERP effects. This study thus provides an important first test of semantic and syntactic influences on subject-verb agreement with EEG, and demonstrates that where classic ERP analyses fail, MVPA can reliably distinguish differences at the neurophysiological level.
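A minimal sketch of the kind of single-subject MVPA described above: an RBF-kernel SVM trained on single-trial EEG features, with accuracy estimated by cross-validation against the 50% chance level. The data shapes, labels, and preprocessing are hypothetical, not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in single-subject data: 200 trials, 64 channels x 50 time samples,
# flattened into one feature vector per trial; y codes the two conditions.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 64 * 50))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)  # stratified 5-fold by default
print(scores.mean())  # compare against 0.5 chance accuracy

# Cross-decoding, as used above to argue for independence, would fit the
# classifier on one factor's labels and score it on the other factor's.
```

-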
Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Beat gestures modulate the processing of focused and non-focused words in context. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Information in language is organized according to a principle called information structure: new and important information (focus) is highlighted and distinguished from less important information (non-focus). Most studies so far have been concerned with how focused information is emphasized linguistically and suggest that listeners expect focus to be accented and process it more deeply than non-focus (Wang et al., 2011). Little is known about how listeners deal with non-verbal cues like beat gestures, which also emphasize the words they accompany, similarly to pitch accent. ERP studies suggest that beat gestures facilitate the processing of phonological, syntactic, and semantic aspects of speech (Biau & Soto-Faraco, 2013; Holle et al., 2012; Wang & Chu, 2013). It is unclear whether listeners expect beat gestures to be aligned with the information structure of the message. The present ERP study addresses this question by testing whether beat gestures modulate the processing of accented/focused vs. unaccented/non-focused words in context in a similar way. Participants watched movies with short dialogues and performed a comprehension task. In each dialogue, the answer “He bought the books via Amazon” contained a target word (“books”) which was combined with a beat gesture, a control hand movement (e.g., a self-touching movement) or no gesture. Based on the preceding context, the target word was either in focus and accented, when preceded by a question like “Did the student buy the books or the magazines via Amazon?”, or in non-focus and unaccented, when preceded by a question like “Did the student buy the books via Amazon or via Marktplaats?”. The gestures started 500 ms prior to the target word. All gesture parameters (hand shape, naturalness, emphasis, duration, and gesture-speech alignment) were determined in behavioural tests. ERPs were time-locked to gesture onset to examine gesture effects, and to target word onset for pitch accent effects. We applied a cluster-based random permutation analysis to test for main effects and gesture-accent interactions in both time-locking procedures. We found that accented words elicited a positive main effect between 300-600 ms post target onset. Words accompanied by a beat gesture and a control movement elicited sustained positivities between 200-1300 ms post gesture onset. These independent effects of pitch accent and beat gesture are in line with previous findings (Dimitrova et al., 2012; Wang & Chu, 2013). We also found an interaction between control gesture and pitch accent (1200-1300 ms post gesture onset), showing that accented words accompanied by a control movement elicited a negativity relative to unaccented words. The present data show that beat gestures do not differentially modulate the processing of accented/focused vs. unaccented/non-focused words. Beat gestures engage a positive and long-lasting neural signature, which appears independent from the information structure of the message. Our study suggests that non-verbal cues like beat gestures play a unique role in emphasizing information in speech.
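For readers unfamiliar with cluster-based random permutation testing, a minimal sketch using MNE-Python on synthetic single-channel ERP epochs; the data shapes and the effect are invented for illustration, and the study's actual analysis (channel adjacency, thresholds, interaction terms) is not reproduced here.

```python
import numpy as np
from mne.stats import permutation_cluster_test

# Stand-in epochs: trials x time samples for two conditions
# (e.g. beat gesture vs no gesture), one channel for simplicity.
rng = np.random.default_rng(4)
cond_a = rng.standard_normal((40, 300))
cond_b = rng.standard_normal((40, 300)) + 0.3  # small sustained shift

# Adjacent suprathreshold time points are grouped into clusters, and each
# cluster's summed statistic is evaluated against a permutation null.
f_obs, clusters, cluster_pv, h0 = permutation_cluster_test(
    [cond_a, cond_b], n_permutations=1000, seed=0)
print(cluster_pv)  # one corrected p-value per cluster
```

-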
Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Independent effects of beat gesture and pitch accent on processing words in context. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
-
Dimitrova, D. V., Snijders, T. M., & Hagoort, P. (2014). Neurobiological attention mechanisms of syntactic and prosodic focusing in spoken language. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
In spoken utterances important or new information is often linguistically marked, for instance by prosody or syntax. Such highlighting prevents listeners from skipping over relevant information. Linguistic cues like pitch accents lead to a more elaborate processing of important information (Wang et al., 2011). In a recent fMRI study, Kristensen et al. (2013) have shown that the neurobiological signature of pitch accents is linked to the domain-general attention network. This network includes the superior and inferior parietal cortex. It is an open question whether non-prosodic markers of focus (i.e. the important/new information) function similarly on the neurobiological level, that is, by recruiting the domain-general attention network. This study tried to address this question by testing a syntactic marker of focus. The present fMRI study investigates the processing of it-clefts, which highlight important information syntactically, and compares it to the processing of pitch accents, which highlight information prosodically. We further test if both linguistic focusing devices recruit domain-general attention mechanisms. In the language task, participants listened to short stories like “In the beginning of February the final exam period was approaching. The student did not read the lecture notes”. In the last sentence of each story, the new information was focused either by a pitch accent as in “He borrowed the BOOK from the library” or by an it-cleft like “It was the book that he borrowed from the library”. Pitch accents were pronounced without exaggerated acoustic emphasis. Two control conditions were included: (i) sentences with fronted focus like “The book he borrowed from the library”, to account for word order differences between sentences with clefts and accents, and (ii) sentences without prosodic emphasis like “He borrowed the book from the library”. In the attentional localizer task (adopted from Kristensen et al., 2013), participants listened to tones in a dichotic listening paradigm. A cue tone was presented in one ear and participants responded to a target tone presented either in the same or the other ear. In line with Kristensen et al. (2013), we found that in the localizer task cue tones activated the right inferior parietal cortex and the precuneus, and we found additional activations in the right superior temporal gyrus. In the language task, sentences with it-clefts elicited larger activations in the left and right superior temporal gyrus as compared to control sentences with fronted focus. For the contrast between sentences with pitch accent vs. without pitch accent we observed activation in the inferior parietal lobe; this activation did however not survive multiple comparisons correction. In sum, our findings show that syntactic focusing constructions like it-clefts recruit the superior temporal gyri, similarly to cue tones in the localizer task. Highlighting focus by pitch accent activated the parietal cortex in areas overlapping with those reported by Kristensen et al. and with those we found for cue tones in the localizer task. Our study provides novel evidence that prosodic and syntactic focusing devices likely have a distinct neurobiological signature in spoken language comprehension.
Additional information
http://www.neurolang.org/programs/SNL2014_Program_with_Abstracts.pdf -
Fitz, H., Hagoort, P., & Petersson, K. M. (2014). A spiking recurrent neural network for semantic processing. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Sentence processing requires the ability to establish thematic relations between constituents. Here we investigate the computational basis of this ability in a neurobiologically motivated comprehension model. The model has a tripartite architecture where input representations are supplied by the mental lexicon to a network that performs incremental thematic role assignment. Roles are combined into a representation of sentence-level meaning by a downstream system (semantic unification). Recurrent, sparsely connected, spiking networks were used which project a time-varying input signal (word sequences) into a high-dimensional, spatio-temporal pattern of activations. Local, adaptive linear read-out units were then calibrated to map the internal dynamics to desired output (thematic role sequences) [1]. Read-outs were adjusted on network dynamics driven by input sequences drawn from argument-structure templates with small variation in function words and larger variation in content words. Models were trained on sequences of 10K words for 200 ms per word at a 1 ms resolution, and tested on novel items generated from the language. We found that a static, random recurrent spiking network outperformed models that used only local word information without context. To improve performance, we explored various ways of increasing the model's processing memory (e.g., network size, time constants, sparseness, input strength, etc.) and employed spiking neurons with more dynamic variables (leaky integrate-and-fire versus Izhikevich neurons). The largest gain was observed when the model's input history was extended to include previous words and/or roles. Model behavior was also compared for localist and distributed encodings of word sequences. The latter were obtained by compressing lexical co-occurrence statistics into continuous-valued vectors [2]. We found that performance for localist input was superior even though distributed representations contained extra information about word context and semantic similarity. Finally, we compared models that received input enriched with combinations of semantic features, word-category, and verb sub-categorization labels. Counter-intuitively, we found that adding this information to the model's lexical input did not further improve performance. Consistent with previous results, however, performance improved for increased variability in content words [3]. This indicates that the approach to comprehension taken here might scale to more diverse and naturalistic language input. Overall, the results suggest that active processing memory beyond pure state-dependent effects is important for sentence interpretation, and that memory in neurobiological systems might be actively computing [4]. Future work therefore needs to address how the structure of word representations interacts with enhanced processing memory in adaptive spiking networks. [1] Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14, 2531-2560. [2] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. Proceedings of the International Conference on Learning Representations, Scottsdale/AZ. [3] Fitz, H. (2011). A liquid-state model of variability effects in learning nonadjacent dependencies. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Austin/TX. [4] Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets. Philosophical Transactions of the Royal Society B, 367, 1971-1983.
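The reservoir-computing idea behind the model [1] can be illustrated with a simplified, non-spiking (echo-state style) analogue in which only a linear read-out is trained on the states of a fixed random recurrent network. The sizes, input vectors, and role labels below are hypothetical stand-ins, not the model's actual lexicon or training data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_res, n_roles = 20, 300, 4  # hypothetical dimensions

# Fixed random input and recurrent weights: the untrained "reservoir".
w_in = 0.5 * rng.standard_normal((n_res, n_in))
w_rec = rng.standard_normal((n_res, n_res))
w_rec *= 0.9 / np.abs(np.linalg.eigvals(w_rec)).max()  # keep dynamics stable

def run_reservoir(words):
    """Project a sequence of word vectors into reservoir states."""
    x, states = np.zeros(n_res), []
    for w in words:
        x = np.tanh(w_in @ w + w_rec @ x)
        states.append(x)
    return np.array(states)

# Calibrate only the linear read-out (ridge regression) to map each
# state to a thematic-role label, as in the liquid-state approach.
words = rng.standard_normal((500, n_in))    # stand-in word sequence
roles = rng.integers(0, n_roles, size=500)  # stand-in role sequence
S = run_reservoir(words)
T = np.eye(n_roles)[roles]                  # one-hot role targets
w_out = np.linalg.solve(S.T @ S + 1e-2 * np.eye(n_res), S.T @ T)
print(((S @ w_out).argmax(axis=1) == roles).mean())  # read-out accuracy
```

The division of labour mirrors the abstract's description: the recurrent network provides the high-dimensional spatio-temporal expansion, and only local linear read-outs are adjusted to the role-assignment task. -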
Folia, V., Hagoort, P., & Petersson, K. M. (2014). An FMRI study of the interaction between sentence-level syntax and semantics during language comprehension. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Hagoort [1] suggested that the posterior temporal cortex is involved in the retrieval of lexical frames that form building blocks for syntactic unification, supported by the inferior frontal gyrus (IFG). FMRI results support the role of the IFG in the unification operations that are performed at the structural/syntactic [2] and conceptual/semantic levels [3]. While these studies tackle the unification operations within linguistic components, in the present event-related FMRI study we investigated the interplay between sentence-level semantics and syntax by adapting an EEG comprehension paradigm [4]. The ERP results showed typical P600 and N400 effects, while their combined effect revealed an interaction expressed in the N400 component ([CB-SE] - [SY-CR] > 0). Although the N400 component was similar in the correct and syntactic conditions (CR and SY), the combined effect was significantly larger than the effect of semantic anomaly alone. In contrast, the size of the P600 effect was not affected by an additional semantic violation, suggesting an asymmetry between semantic and syntactic processing. In the current FMRI study we characterize this asymmetry by means of a 2x2 experimental design that included the conditions: correct (CR), syntactic (SY), semantic (SE), and combined (CB) anomalies. Standard SPM procedures were used for analysis and only clusters significant at P < .05 family-wise error corrected are reported. The main effect of semantic anomaly ([CB+SE] > [SY+CR]) yielded activation in the anterior IFG (BA 45/47). The opposite contrast revealed the theory-of-mind and default-mode network. The main effect of syntactically correct sentences ([SE+CR] > [CB+SY]) showed significant activation in the IFG (BA 44/45), including the mid-anterior insula extending into the superior temporal poles (BA 22/38). In addition, significant effects were observed in medial prefrontal/anterior cingulate cortex, posterior middle and superior temporal regions (BA 21/22), and the basal ganglia. The reverse contrast yielded activations in the MFG (BA 9/46), the inferior parietal region (BA 39/40), precuneus and the posterior cingulate region. The only region that showed a significant interaction ([CB-SE] - [SY-CR] > 0) was the left temporo-parietal region (BA 22/39/40). In summary, the results show that the IFG is involved in unification during comprehension. The effect of semantic anomaly and its implied unification load engages the anterior IFG, while the effect of syntactic anomaly and its implied unification failure engages the MFG. Finally, the results suggest that the syntax of gender agreement interacts with sentence-level semantics in the left temporo-parietal region. [1] Hagoort, P. (2005). On Broca, brain, and binding: A new framework. TICS, 9, 416-423. [2] Snijders, T. M., Vosse, T., Kempen, G., Van Berkum, J. J. A., Petersson, K. M., & Hagoort, P. (2009). Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex, 19, 1493-1503. doi:10.1093/cercor/bhn187. [3] Hagoort, P., Hald, L., Bastiaansen, M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304, 438-441. [4] Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15, 883-899. -
Fonteijn, H. M., Acheson, D. J., Petersson, K. M., Segaert, K., Snijders, T. M., Udden, J., Willems, R. M., & Hagoort, P. (2014). Overlap and segregation in activation for syntax and semantics: a meta-analysis of 13 fMRI studies. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2014). Assessing the link between speech perception and production through individual differences. Poster presented at the 6th Annual Meeting of the Society for the Neurobiology of Language, Amsterdam.
Abstract
This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interactions. -
Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation. Talk presented at the Donders Discussions 2014. Nijmegen, Netherlands. 2014-10-30 - 2014-10-31.
Abstract
Speech production is one of the most complex motor skills, and involves close interaction between the perceptual and the motor system. Recently, prediction via forward models has been at the forefront of speech neuroscience research. For example, neuroimaging evidence has demonstrated that activation of the auditory cortex is suppressed for self-produced speech relative to listening without speaking. This finding has been explained via a forward model that predicts the auditory consequences of our own speech actions. An accurate prediction cancels out (part of) the auditory cortical activation, a phenomenon known as speech-induced suppression (SIS). The present study was designed to test two critical predictions from these frameworks: first, whether the cortical auditory response during speech production varies as a function of the acoustic distance between feedback and prediction, and second, whether this in turn is predictive of the amount of adaptation in people's speech production. MEG was recorded while subjects performed an online speech imitation task. Each subject heard and imitated Dutch vowels, varying in their distance from the original vowel in both F1 and F2. The results did not show clear evidence that the amount of suppression scaled with the distance between participants' speech and the speech target. However, we found that subjects' auditory response did correlate with imitation performance. This result supports the view that an enhanced auditory response may act as an error signal, driving subsequent speech adaptation. This suggests that individual differences in SIS could act as a marker for subsequent adaptation. -
Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation: An MEG investigation. Poster presented at the International Workshop on Language Production 2014, Geneva.
-
Franken, M. K., Hagoort, P., & Acheson, D. J. (2014). Prediction, feedback and adaptation in speech imitation: An MEG investigation. Poster presented at the International Workshop on Language Production, Université de Genève, Geneva, Switzerland.
-
Guadalupe, T., Zwiers, M., Wittfeld, K., Teumer, A., Vasquez, A. A., Hoogman, M., Hagoort, P., Fernandez, G., Grabe, H., Fisher, S. E., & Francks, C. (2014). Asymmetry within and around the planum temporale is sexually dimorphic and influenced by genes involved in steroid biology. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Hagoort, P., & Indefrey, P. (2014). A meta-analysis on syntactic vs. semantic unification. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Hagoort, P. (2014). De magie van het talige brein. Talk presented at the Paradiso lectures series "Science of Fiction - zin en onzin van de wetenschap in films". Amsterdam, The Netherlands. 2014-02-16.
-
Hagoort, P. (2014). From intonation to information in brain space [Keynote lecture]. Talk presented at the 6th International Conference on Tone and Intonation in Europe 2014. Utrecht. 2014-09-10.
-
Hagoort, P. (2014). Het politieke brein. Talk presented at a Nationaal Initiatief Hersenen en Cognitie (NIHC) publiekslezing. Den Haag, The Netherlands. 2014-03-11.
-
Hagoort, P. (2014). The neurobiology of language beyond single words. Talk presented at the meeting of the Experimental Psychology Society. London. 2014-01-10.
-
Hagoort, P. (2014). The neurobiology of language beyond single words. Talk presented at CNBC Colloquium. Pittsburgh (PA-USA). 2014-05-08 - 2014-05-08.
Abstract
The classical Wernicke-Lichtheim-Geschwind model of the neurobiology of language was based on an analysis of single word perception and production. However, language processing involves a lot more than production and comprehension of single words. In this talk I will focus on the neurobiological infrastructure for processing language beyond single words. The Memory, Unification and Control (MUC) model provides a neurobiologically plausible account of the underlying neural architecture. I will focus on operations that unify the lexical building blocks into larger structures. MEG, fMRI, resting state connectivity data, and results from Psycho-Physiological Interactions will be discussed, suggesting a division of labour between temporal and inferior frontal cortex. These results indicate that Broca’s area and adjacent cortex play an important role in semantic and syntactic unification operations. I will discuss to what extent these operations are shared between language comprehension and production. I will also discuss fMRI results that indicate the insufficiency of the Mirror Neuron Hypothesis to explain language understanding. In short, I will sketch a picture of language processing from an embrained perspective.
-
Hagoort, P. (2014). The Neurobiology of language: Beyond the sentence given. Talk presented at the Leuven research Institute for Neuroscience & Disease (LIND). Leuven (Belgium). 2014-02-06.
-
Hagoort, P. (2014). Vijf kanttekeningen bij het liberalisme vanuit een cognitief-neurowetenschappelijk perspectief. Talk presented at the Prof. mr. B.M. Teldersstichting (Telders Foundation). Leusden. 2014-01-09.
-
Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Getting under your Skin: The role of perspective in narrative comprehension. Talk presented at Cognitive Futures in the Humanities, 2nd International Conference, 24-26 April 2014. University of Durham. 2014-04-24 - 2014-04-26.
Abstract
When we read literature, we often become immersed and dive into fictional worlds. The way we perceive those worlds is the result of skillful arrangement of linguistic features by story writers in order to create certain mental representations in the reader. Narrative perspective, or focalization, is an important tool for story writers to manipulate readers' perception of a story. Despite the fact that narrative perspective is generally considered a fundamental element in narrative comprehension, its cognitive effects on story reading remain unclear. In previous research, various methodologies were employed to investigate the cognitive processes underlying narrative comprehension. However, studies used either self-report procedures or behavioral tests to investigate readers' reactions and refrained from combining methodologies. In the present study we combined skin conductance measurements and questionnaires while participants read short stories in 1st and 3rd person perspective. The results show that immersion, imagery and appreciation are higher when participants read stories in 1st person perspective. To our surprise, we found higher arousal for reading 3rd person perspective compared to 1st person perspective narratives. We find evidence that individual differences in arousal between the two conditions are related to how much readers empathize with the fictional characters. The combination of methodologies allows a more differentiated understanding of the underlying mechanisms of immersion. In my talk, I want to highlight how we can gain more from interdisciplinary research and from combining various methodologies to investigate the cognitive processes underlying narrative comprehension under natural conditions. -
Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Narrative perspective influences immersion in fiction reading: Evidence from skin conductance response. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
-
Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). Personal pronouns influence arousal during story comprehension. Embodiment and the reading experience. Poster presented at the Embodied and Situated Language Processing Conference 2014, Rotterdam.
-
Hartung, F., Hagoort, P., & Willems, R. M. (2014). Perspective taking and mental simulation in narrative comprehension [Invited talk]. Talk presented at the Max Planck Institute for Empirical Aesthetics, Language and Literature Department. Frankfurt am Main. 2014-06-23.
-
Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2014). The embodied reader: The effect of narrative perspective on literature understanding and appreciation. Talk presented at 14th Conference of the International Society for the Empirical Study of Literature and Media. Turin, Italy. 2014-07-21 - 2014-07-25.
-
Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. Talk presented at the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Quebec City, Canada. 2014-07-24 - 2014-07-26.
-
Heyselaar, E., Hagoort, P., & Segaert, K. (2014). Virtual agents as a valid replacement for human partners in sentence processing research. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Hultén, A., Schoffelen, J.-M., Udden, J., Lam, N., & Hagoort, P. (2014). Effects of sentence progression in event-related and rhythmic neural activity measured with MEG. Talk presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014). Amsterdam. 2014-08-27 - 2014-08-29.
-
Kunert, R., Willems, R. M., Casasanto, D., Patel, A., & Hagoort, P. (2014). Music and language syntax interact in Broca’s area: An fMRI study. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Lam, N., Schoffelen, J.-M., Hultén, A., & Hagoort, P. (2014). MEG-derived neural oscillatory activity differentiates sentence processing from word list processing in theta, beta, and gamma frequency bands across time and space. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Lam, N. H. L., Schoffelen, J.-M., Hulten, A., & Hagoort, P. (2014). MEG-derived neural oscillatory activity differentiates sentence processing from word list processing in theta, beta, and gamma frequency bands across time and space. Poster presented at BIOMAG 2014, Halifax, Canada.
-
Lam, N. H. L., Hulten, A., Udden, J., Schoffelen, J.-M., & Hagoort, P. (2013). Sentence processing reflected in oscillatory and event-related brain activity. Poster presented at the Fifth Annual Meeting of the Society for the Neurobiology of Language (SNL 2013), San Diego, CA, USA.
-
Levy, J., Hagoort, P., & Demonet, J.-F. (2014). A neuronal gamma oscillatory signature during morphological unification in the left occipito-temporal junction. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Morphology is the aspect of language concerned with
the internal structure of words. In the past decades,
a large body of masked priming (behavioral and
neuroimaging) data has suggested that the visual
word recognition system automatically decomposes
any morphologically complex word into a stem and
its constituent morphemes. Yet, it remains equivocal
whether this morphemic decomposition relies primarily
on orthography or on semantics. Here, we approached
the issue straightforwardly by applying a task of
morphological unification, that is, by assembling internal
(morphemic) units into a whole-word. Morphemic units
were sequentially presented while participants were
requested to judge whether their assemblage represented
real- or pseudo-words. Trials representing real words
were divided into words with a transparent (true) or a
non-transparent (pseudo) morphological relationship.
Morphological unification of truly suffixed words
occurred in a more straightforward way (shorter RT and
higher accuracy). Additionally, oscillatory brain activity
was monitored with magnetoencephalography and
revealed that real, compared to pseudo morphological unification enhanced narrow gamma band oscillations
(60-85 Hz, 300-450 ms) in the left posterior occipitotemporal
junction, which is known as a cerebral hub for
visual word processing. This neural signature could not
be explained by a mere automatic lexical processing (i.e.
stem perception), but more likely it related to a semantic
access step during the morphological unification process.
These findings highlight a plausible retrieval of lexical
semantic associations for enabling true morphological
unification, and further instantiate the pivotal role of
the left occipito-temporal junction in visual word form
processing. -
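For concreteness, per-trial narrow-gamma power in the reported band and window (60-85 Hz, 300-450 ms) could be estimated along the following lines. This is a generic band-power sketch with assumed data shapes, not the authors' MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_power(epochs, sfreq, band=(60.0, 85.0), window=(0.300, 0.450)):
    """Mean band-limited power per epoch within a post-stimulus window.

    epochs: (n_epochs, n_samples) for one sensor/source, with sample 0
    at stimulus onset (t = 0 s).
    """
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    analytic = hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1)
    power = np.abs(analytic) ** 2           # squared analytic amplitude
    times = np.arange(epochs.shape[-1]) / sfreq
    mask = (times >= window[0]) & (times <= window[1])
    return power[:, mask].mean(axis=-1)

# Toy contrast: true vs. pseudo morphological unification trials.
rng = np.random.default_rng(2)
sfreq = 600.0
true_trials = rng.normal(size=(80, 600))    # 80 epochs of 1 s
pseudo_trials = rng.normal(size=(80, 600))
effect = gamma_power(true_trials, sfreq).mean() - gamma_power(pseudo_trials, sfreq).mean()
print(f"gamma power difference (true - pseudo): {effect:.4f}")
```
-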
Lockwood, G., Tuomainen, J., & Hagoort, P. (2014). Talking sense: Multisensory integration of Japanese ideophones is reflected in the P2. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
-
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS6). San Diego, Cal. 2014-07-08 - 2014-07-11.
-
Petersson, K. M., Folia, S. S. V., Sousa, A.-C., & Hagoort, P. (2014). Implicit structured sequence learning: An EEG study of the structural mere-exposure effect. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2014). Emotional context modulates embodied metaphor comprehension. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Schoot, L., Hagoort, P., & Segaert, K. (2014). Bidirectional syntactic priming in conversation: I am primed by you if you are primed by me. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, Scotland.
-
Schoot, L., Hagoort, P., & Segaert, K. (2014). Bidirectional syntactic priming: How much your conversation partner is primed by you determines how much you are primed by her. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
In conversation, speakers mimic each other’s (linguistic)
behavior. For example, speakers are likely to repeat
each other’s sentence structures: a phenomenon
known as syntactic priming. In a previous fMRI study
(Schoot et al., 2014) we reported that the magnitude
of priming effects is also mimicked between speakers.
Here, we follow-up on that result. Specifically, we test
the hypothesis that in a communicative context, the
priming magnitude of your interlocutor can predict your
own priming magnitude because you have adapted
your individual susceptibility to priming to the other
speaker. 40 participants were divided into 20 pairs who
performed the experiment together. They were asked
to describe photographs to each other. Photographs
depicted two persons performing a transitive action
(e.g. a man hugging a woman). Participants were
instructed to describe the photographs with an active
or a passive sentence depending on the color-coding
of the photograph (stop light paradigm, Menenti et al.,
2011). Syntactic priming effects were measured in speech
onset latencies: a priming effect is found when speakers
are faster to produce sentences with the same structure
as the preceding sentence (i.e. two consecutive actives
or passives) than to produce sentences with a different
structure (active follows passive or vice versa). Before
participants performed the communicative task, we ran
a non-communicative pretest for each participant, to
measure their individual priming effect without influence
of the partner’s priming effect. To test whether speakers
influence each other’s syntactic priming magnitude in
conversation, we ran an rANCOVA with the syntactic
priming effect of each participant’s communicative
partner as a covariate. Results showed that there was
an interaction between this covariate and Syntactic
Repetition (F(1,38) = 435.93, p < 0.001). The more your
partner is primed by you, the more you are primed by
your partner. In a second analysis, we found that the
difference between paired speakers’ individual syntactic
priming effects (as measured in the pretest) predicted
how much speakers adapt their syntactic priming effects
when they are communicating with their partner in the
communicative experiment (ß = -0.467, p < 0.001). That
means that if your partner’s individual susceptibility for
syntactic priming is stronger than yours, you will increase
your own priming magnitude in the communicative
context. On the other hand, if your partner’s individual
susceptibility for syntactic priming is less strong, you
will decrease your priming effect. Furthermore, the
strength of the in-/decrease is proportional to how
different you are from your speaker to begin with. We
interpret the results as follows. Syntactic priming effects
in conversation are said to result from speakers aligning
their syntactic representations by mimicking sentence
structure (Pickering & Garrod, 2004; Jaeger & Snider,
2013). Here we show that on top of that, the magnitude
of syntactic priming effects is also mimicked between
interlocutors. Future research should focus on further
investigation of the neural correlates of this process, for
example with fMRI hyper-scanning. Indeed, our findings
stress the importance of studying language processing in
real, communicative contexts, which is now also possible
in neuroimaging paradigms. -
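The core covariate logic can be sketched compactly. Below is a simplified, hypothetical analogue with invented data and column names; the reported analysis was an rANCOVA, here reduced to a plain regression for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def priming_effect(trials: pd.DataFrame) -> float:
    """Syntactic priming effect in ms: mean onset latency on structure-switch
    trials minus structure-repeat trials (positive = priming benefit)."""
    switch = trials.loc[trials["repetition"] == "different", "onset_ms"].mean()
    repeat = trials.loc[trials["repetition"] == "same", "onset_ms"].mean()
    return switch - repeat

# Simulated per-participant effects for 20 pairs; the reported pattern is
# that partners' priming magnitudes track each other, so simulate a shared
# component per pair.
rng = np.random.default_rng(3)
shared = rng.normal(40, 15, 20)
df = pd.DataFrame({
    "own_priming": shared + rng.normal(0, 5, 20),
    "partner_priming": shared + rng.normal(0, 5, 20),
})

# Simplified analogue of the covariate test: regress each speaker's priming
# effect on the partner's effect.
fit = smf.ols("own_priming ~ partner_priming", data=df).fit()
print(fit.params["partner_priming"], fit.pvalues["partner_priming"])
```
-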
Segaert, K., Mazaheri, A., Scheeringa, R., & Hagoort, P. (2014). Oscillatory dynamics of syntactic unification. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Segaert, K., & Hagoort, P. (2014). Syntactic priming: A lexical boost, cumulativity, an inverse preference effect and... a positive preference effect. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, Scotland.
-
Simanova, I., Hagoort, P., Oostenveld, R., & van Gerven, M. (2014). Surface-based searchlight mapping of modality-independent responses to semantic categories using high-resolution fMRI. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Previous studies have shown the possibility to decode
the semantic category of an object from the fMRI signal in
different modalities of object presentation. Furthermore,
by generalizing a classifier across different modalities
(for instance, from pictures to written words), cortical
structures that process semantic information in an
amodal fashion have been identified. In this study we
employ high-resolution fMRI in combination with
surface-based searchlight mapping to further explore
the architecture of modality-independent responses.
Stimuli of 2 semantic categories (animals and tools)
were presented in 2 modalities: photographs and
written words. Stimuli were presented in 40-seconds
blocks with 10-seconds intervals. Subjects (N=3) were
instructed to judge whether each stimulus within a
block was semantically consistent with the others. The
experiment also included 8 free recall blocks, in which
name of a category appeared on the screen for 2 seconds,
followed by 40 seconds of a blank screen. In theses blocks
subjects were instructed to covertly recall all entities
from the probed category that they had seen during the
experiment. Subjects were scanned with 7 Tesla MRIscanner,
using 3D EPI sequence with isotropic resolution
of 1.5 mm. In each subject, reconstruction of cortical
surface was performed. After that, for each vertex on the
surface, a set of adjacent voxels in the functional volume
was assigned. Subsequently, a linear support vector
machine classifier was used to decode object category in
each surface-based patch. Generalization analysis across
picture and written word presentation was performed,
where the classifier was trained on the fMRI data from
blocks of written words, and tested on the data from picture blocks, and vice versa. The second analysis was
performed on the free recall blocks, where the classifier
was trained on merged data from pictures and written
words blocks, and tested on the free recall blocks.
Further, we explored how the decoding accuracy in the
inferior temporal cortex changes with the diameter of the
searchlight patch. Since surface-based voxel grouping
takes into account the cortical folding and ensures that
voxels belonging to different gyri do not fall in the same
searchlight group, it allows answering the question,
at what spatial scale is the modality-independent
information is represented. The cross-modal analysis in
all three subjects revealed a cluster of voxels in inferior
temporal cortex (lateral fusiform and inferotemporal gyri)
and posterior middle temporal gyrus. The topography
of significant clusters also suggested involvement of
the inferior frontal gyrus, lateral prefrontal cortex, and
medial prefrontal cortex. Interestingly, these areas were
the most evident in the free recall test, although the
searchlight maps of the three subjects showed substantial
individual differences in this analysis. Overall, the data
yield a similar picture as previous research, highlighting
the role of IT/pMTG and prefrontal cortex in the crossmodal
semantic representation. We further extended
previous research, by showing that the classification
accuracy in these areas decreases with the increase of
the searchlight patch size. These results indicate that the
modality-independent categorical activations in the IT
cortex are represented on the spatial scale of millimetres. -
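The cross-modal generalization step is easy to sketch for a single searchlight patch. The code below is a toy illustration with invented data, not the authors' pipeline: train a linear SVM on one modality, test on the other, and average the two directions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def cross_modal_accuracy(X_words, X_pics, y):
    """Train on one modality, test on the other, averaged over both directions."""
    accs = []
    for X_train, X_test in ((X_words, X_pics), (X_pics, X_words)):
        clf = LinearSVC().fit(X_train, y)
        accs.append(clf.score(X_test, y))
    return float(np.mean(accs))

# Toy patch data: 40 blocks x 25 voxels per modality, 2 categories
# (animals vs. tools) sharing a weak common signal across modalities.
rng = np.random.default_rng(4)
y = np.repeat([0, 1], 20)
X_words = rng.normal(loc=y[:, None], scale=1.0, size=(40, 25))
X_pics = rng.normal(loc=y[:, None], scale=1.0, size=(40, 25))
print(f"cross-modal accuracy: {cross_modal_accuracy(X_words, X_pics, y):.2f}")
```

In the full analysis this accuracy would be computed once per surface patch (one patch per vertex), yielding a whole-brain map.
-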
Stolk, A., Noordzij, M., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). How minds meet: Cerebral coherence between communicators marks the emergence of meaning. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Ten Velden, J., Acheson, D. J., & Hagoort, P. (2014). Does language production use response conflict monitoring? Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Although monitoring and subsequent control have
received quite some attention for cognitive systems
other than language, few studies have probed the neural
mechanisms underlying monitoring and control in overt
speech production. Recently, it has been hypothesized
that conflict signals within the language production
system might serve as cues to increase monitoring
and control (Nozari, Dell & Schwartz, 2011; Cognitive
Psychology). This hypothesis was linked directly to the
conflict monitoring hypothesis in non-linguistic action
control, which hypothesizes that one of the critical
cues to self-monitoring is the co-activation of multiple
response candidates (Yeung, Botvinick & Cohen, 2004;
Psychological Review). A region of the medial prefrontal
cortex (mPFC), the dorsal anterior cingulate cortex
(dACC), as well as the basal ganglia have consistently
been observed in both errors of commission and high
conflict.. Hence these regions serve as an important
testing ground for whether comparable monitoring
mechanisms are at play in language production. The
current study tests whether these regions are also
implicated in response to speech errors and high conflict
situations that precede the response. 32 native Dutch
subjects performed a tongue twister task and a factorial
combination of the Simon and Flanker task. In the tongue
twister task, participants overtly produced a string of
4 nonwords 3 times. In tongue twister trials (TT), the
onset phonemes followed a pattern of A-B-B-A, whereas
rhymes followed an A-B-A-B pattern (e.g. wep ruust
rep wuust). In non-tongue twister trials (nonTT), the
nonwords contained minimal phonological overlap
(e.g. jots brauk woelp zieg). These two conditions
correspond to a high conflict and a low conflict condition
respectively. In an arrow version of the the Simon-
Flanker task, subjects responded to the direction of a
middle arrow while flanking arrows faced in the same
(i.e., congruent; >>>>>) or different (i.e., incongruent;
>><>>) directions. These stimuli were presented either
on the right side or the left side of the screen, potentially
creating a spatial incongruency with their response
as well. Behavioral results demonstrated sensitivity
to conflict in both tasks, as subjects generated more
speech errors in tongue twister trials than non-tongue
twister trials, and were slower to incongruent relative
to congruent flanker trials. No difference between
spatial incongruency was observed. Neuroimaging
results showed that activation in the ACC significantly
increased in response to the high conflict flanker trials.
In addition, regions of interest analyses in the basal
ganglia showed a significant difference between correct
high and low conflict flanker trials in the left putamen
and right caudate nucleus. For the tongue twister task,
a large region in the mPFC - overlapping with the ACC
region from the flanker task - was significantly more
active in response to errors than correct trials. Significant
differences were also found in the left and right caudate
nuclei and left putamen. No differences were found
between correct TT and nonTT trials. The study therefore
provides evidence for overlap in monitoring between
language production and non-linguistic action at the
time of response (i.e. errors), but little evidence for preresponse
conflict engaging the same system. -
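The tongue-twister stimulus scheme is compact enough to illustrate directly. The sketch below composes a TT trial from the onset and rhyme patterns the abstract describes, using the syllables from its own example; it is an illustration only, not the study's stimulus-generation code.

```python
def make_tt_string(onsets, rhymes):
    """Compose one 4-nonword tongue-twister trial: onset phonemes follow
    A-B-B-A while rhymes follow A-B-A-B, as described in the abstract.
    (Non-TT trials instead used phonologically unrelated nonwords.)"""
    onset_idx, rhyme_idx = (0, 1, 1, 0), (0, 1, 0, 1)
    return " ".join(onsets[i] + rhymes[j] for i, j in zip(onset_idx, rhyme_idx))

print(make_tt_string(("w", "r"), ("ep", "uust")))  # -> wep ruust rep wuust
```
-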
Ten Velden, J., Acheson, D. J., & Hagoort, P. (2014). Are there shared mechanisms of response conflict monitoring in speech production and choice reaction tasks? Poster presented at the International Workshop on Language Production 2014, Geneva.
-
Udden, J., Hulten, A., Fonteijn, H. M., Petersson, K. M., & Hagoort, P. (2014). The middle temporal and inferior parietal cortex contributions to inferior frontal unification across complex sentences. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Vanlangendonck, F., Willems, R. M., & Hagoort, P. (2014). Taking the listener into account: Computing common ground requires mentalising. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
In order to communicate efficiently, speakers have to
take into account which information they share with their
addressee (common ground) and which information
they do not share (privileged ground). Two views
have emerged about how and when common ground
influences language production. In one view, speakers
take common ground into account early on during
utterance planning (e.g., Brennan & Hanna, 2009).
Alternatively, it has been proposed that speakers’ initial
utterance plans are egocentric, but that they monitor
their plans and revise them if needed (Horton & Keysar,
1996). In an fMRI study, we investigated which neural
mechanisms support speakers’ ability to take into account
common ground, and at what stage during speech
planning these mechanisms come into play. We tested
22 pairs of native Dutch speakers (20 pairs retained in
the analysis), who were assigned to the roles of speaker
or listener for the course of the experiment. The speaker
performed the experiment in the MRI scanner, while the
listener sat behind a computer in the MRI control room.
The speaker performed a communicative and a noncommunicative
task in the scanner. The communicative
task was a referential communication game in which
the speaker described objects in an array to the listener.
The listener could hear the speaker’s descriptions over
headphones and tried to select the intended object on
her screen using a mouse. We manipulated common
ground within the communicative task. In the privileged
ground condition, the speaker saw additional competitor
objects that were occluded from the listener’s point of
view. In order to communicate efficiently, the speaker
had to ignore the occluded competitor objects. In the
control conditions, all relevant objects were in common
ground. The non-communicative task was identical to
the communicative task, except that the speaker was
instructed to describe the objects without the listener
listening. When comparing the BOLD response during
speech planning in the communicative and the noncommunicative
tasks, we found activations in the right
medial prefrontal cortex and bilateral insula, brain areas
involved in mentalizing and empathy. These results
confirm previous neuroimaging research that found that
speaking in a communicative context as compared to a
non-communicative context activates brain areas that
are involved in mentalizing (Sassa et al., 2007; Willems
et al., 2010). We also contrasted brain activity in the
privileged ground and control conditions within the
communicative task to tap into the neural mechanisms
that allow speakers to take common ground into account.
We again found activity in brain regions involved in
mentalizing and visual perspective-taking (the bilateral
temporo-parietal junction and medial prefrontal cortex).
In addition, we found a cluster in the dorsolateral
prefrontal cortex, a brain area that has previously been
proposed to support the inhibition of task-irrelevant
perspectives (Ramsey et al., 2013). Interestingly, these
clusters are located outside the traditional language
network. Our results suggest that speakers engage in
mentalizing and visual perspective-taking during speech
planning in order to compute common ground rather
than monitoring and adjusting their initial egocentric
utterance plans. -
Willems, R. M., Frank, S., Nijhof, A., Hagoort, P., & van den Bosch, A. (2014). Prediction influences brain areas early in the neural language network. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Zaadnoordijk, L., Udden, J., Hulten, A., Hagoort, P., & Fonteijn, H. M. (2014). Between-subject variance in resting-state fMRI connectivity predicts fMRI activation in a language task. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
-
Acheson, D. J., & Hagoort, P. (2013). Response conflict as a mechanism for monitoring in speech production. Poster presented at the 20th Annual Meeting of the Cognitive Neuroscience Society (CNS 2013), San Francisco, CA.
Abstract
Recent work suggests that monitoring in speech production may occur via domain-general mechanisms responsible for detecting response conflict. To test this hypothesis, we measured EEG as people engaged in both non-verbal (flanker) and verbal (tongue twister) tasks designed to elicit response conflict and errors. In the flanker task, people pressed a button corresponding to whether a center arrow was facing left or right, and response conflict was induced with flanking arrows pointing in the same (congruent; >>>>>) or a different (incongruent; >><>>) direction. In the tongue twister task, people read sequences of four nonwords three times in which rhymes alternated in an ABAB pattern while onset speech sounds alternated in an ABBA (tongue twister) or an ABAB (non-tongue twister) pattern (e.g., tif deev dif teev vs. tif teev dif deef). Results in the flanker task showed standard markers of response conflict in the form of an increased N2 for incongruent relative to congruent trials as well as an error-related negativity (ERN) for incorrect trials. Behaviourally, more errors were elicited for tongue twister relative to non-tongue twister trials, and an ERN was observed on incorrect responses. Correlations between the magnitude of the N2 and ERN in the flanker task with the magnitude of the ERN and error rates in the tongue twister task are consistent with a common underlying locus. Adaptation effects preceding and following erroneous trials in production are also presented. These results are consistent with response conflict serving as a cue to monitoring in speech production.
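To make the cross-task comparison concrete, the sketch below shows one generic way to quantify an ERN per subject and test an across-subject ERN-ERN correlation. Data shapes and values are invented; this is not the study's EEG analysis.

```python
import numpy as np
from scipy import stats

def ern_amplitude(epochs, sfreq, correct, window=(0.0, 0.1), t_response=0.5):
    """Error-related negativity: error-minus-correct mean voltage in a
    post-response window, for response-locked single-channel epochs in
    which the response occurs t_response seconds into the epoch."""
    times = np.arange(epochs.shape[-1]) / sfreq - t_response
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[~correct][:, mask].mean() - epochs[correct][:, mask].mean()

rng = np.random.default_rng(5)
epochs = rng.normal(size=(120, 600))   # 120 epochs, 1 s at 600 Hz
correct = rng.random(120) > 0.15       # roughly 15% errors
print(f"ERN on toy data: {ern_amplitude(epochs, 600.0, correct):.3f}")

# Toy version of the across-subject question: does the flanker-task ERN
# track the tongue-twister-task ERN, as a common monitoring locus predicts?
flanker_ern = rng.normal(-4.0, 1.5, 24)
twister_ern = 0.6 * flanker_ern + rng.normal(0, 1.0, 24)  # simulated link
r, p = stats.pearsonr(flanker_ern, twister_ern)
print(f"r = {r:.2f}, p = {p:.4f}")
```
-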
Acheson, D. J., & Hagoort, P. (2013). What happens before (and after) the tongue twists. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
-
Asaridou, S. S., Dediu, D., Takashima, A., Hagoort, P., & McQueen, J. M. (2013). Learning Dutchinese: Functional, structural, and genetic correlates performance. Poster presented at the 3rd Latin American School for Education, Cognitive and Neural Sciences, Ilha de Comandatuba, Brazil.
-
Cai, D., Fonteijn, H. M., Guadalupe, T., Zwiers, M., Hoogman, M., Arias-Vásquez, A., Yang, Y., Buitelaar, J., Fernández, G., Brunner, H., Van Bokhoven, H., Franke, B., Fisher, S. E., Francks, C., & Hagoort, P. (2013). Genome-wide search shows association between 10p15.2 and volume of left Heschl's Gyrus. Poster presented at the 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, WA, USA.
-
Fonteijn, H. M., Willems, R. M., Acheson, D. J., & Hagoort, P. (2013). Subject-specific parcellations of the inferior frontal cortex. Poster presented at the 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, WA, USA.
-
Franken, M. K., Acheson, D. J., & Hagoort, P. (2013). Modulation of speaking-induced suppression in speech imitation. Poster presented at the 5th Annual Meeting of the Society for the Neurobiology of Language (SNL 2013), San Diego, CA, USA.
-
Guadalupe, T., Zwiers, M. P., Wittfeld, K., Teumer, A., Arias Vasquez, A., Hoogman, M., Hagoort, P., Van Bokhoven, H., Fernandez, G., Buitelaar, J., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2013). Genome-wide association scanning for asymmetry of the human planum temporale. Poster presented at Donders Institute Evaluation, Nijmegen, The Netherlands.
-
Guadalupe, T., Zwiers, M. P., Wittfeld, K., Teumer, A., Arias Vasquez, A., Hoogman, M., Hagoort, P., Van Bokhoven, H., Fernandez, G., Buitelaar, J., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2013). Genome-wide association scanning for asymmetry of the human planum temporale. Poster presented at the European Society of Human Genetics Conference 2013 (ESHG 2013), Paris, France.
-
Guadalupe, T., Zwiers, M. P., Wittfeld, K., Teumer, A., Arias Vasquez, A., Hoogman, M., Hagoort, P., Van Bokhoven, H., Fernandez, G., Buitelaar, J., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2013). Genome-wide association scanning for asymmetry of the human planum temporale. Talk presented at the Cognomics Symposium 2013. Nijmegen, The Netherlands. 2013-09-10 - 2013-09-11.
-
Guadalupe, T., Zwiers, M. P., Arias Vasquez, A., Hoogman, M., Hagoort, P., Brunner, H., Van Bokhoven, H., Fernandez, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2013). Measurement and genetics of subcortical asymmetries. Poster presented at 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, WA.
-
Hagoort, P. (2013). Beyond the Language Given: Language Processing from an Embrained Perspective [Invited lecture]. Talk presented at MIT Brain and Language talk series. Cambridge, MA. 2013-03-19.
Abstract
A central and influential idea among researchers of language is that our language faculty is organized according to the principle of strict compositionality, which implies that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. The implication of this idea is that beyond word recognition, language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step the sentence meaning is integrated with information from prior discourse, with world knowledge, with information about the speaker, and with semantic information from extralinguistic domains such as co-speech gestures or the visual world. fMRI results and results from recordings of event-related brain potentials will be presented that are inconsistent with this classical model of language interpretation. Our data support a model in which knowledge about the context and the world, knowledge about concomitant information from other modalities, and knowledge about the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. The Memory, Unification and Control (MUC) model provides a neurobiologically plausible account of the underlying neural architecture. Resting state connectivity data, and results from Psycho-Physiological Interactions will be discussed, suggesting a division of labour between temporal and inferior frontal cortex. These results indicate that Broca’s area and adjacent cortex play an important role in semantic and syntactic unification operations. I will also discuss fMRI results that indicate the insufficiency of the Mirror Neuron Hypothesis to explain language understanding. Instead I will sketch a picture of language processing from an embrained perspective. -
Hagoort, P. (2013). Beyond the language given: Language processing from an embrained perspective [Keynote Lecture]. Talk presented at The Architectures and Mechanisms for Language Processing (AMLaP 2013) conference. Marseille, France. 2013-09-02 - 2013-09-04.
-
Hagoort, P. (2013). Language networks in the brain [Invited lecture]. Talk presented at the 1st EFPSA Conference: From neuron to society. Amsterdam. 2013-11-22 - 2013-11-23.
-
Hagoort, P. (2013). Het talige brein [Invited Lecture]. Talk presented at Minicollege KNAW. Amsterdam. 2013-09-09.
-
Hagoort, P. (2013). Het valt niet mee een goede lezing te geven: Over brein en pragmatiek. Talk presented at Brein en letteren. KNAW. Amsterdam. 2013-04-10.
-
Hagoort, P. (2013). On Broca, brain and binding [Invited Lecture]. Talk presented at the 50th Anniversary symposium of the Dutch Neuropsychological Society (NVN). Nijmegen. 2013-11-01.