Arana, S., Hagoort, P., Schoffelen, J.-M., & Rabovsky, M. (2024). Perceived similarity as a window into representations of integrated sentence meaning. Behavior Research Methods, 56(3), 2675-2691. doi:10.3758/s13428-023-02129-x.
Abstract
When perceiving the world around us, we are constantly integrating pieces of information. The integrated experience consists of more than just the sum of its parts. For example, visual scenes are defined by a collection of objects as well as the spatial relations amongst them, and sentence meaning is computed based on individual word semantics as well as syntactic configuration. Having quantitative models of such integrated representations can help evaluate cognitive models of both language and scene perception. Here, we focus on language, and use a behavioral measure of perceived similarity as an approximation of integrated meaning representations. We collected similarity judgments from 200 subjects who rated nouns or transitive sentences through an online multiple arrangement task. We find that perceived similarity between sentences is most strongly modulated by the semantic action category of the main verb. In addition, we show how non-negative matrix factorization of similarity judgment data can reveal multiple underlying dimensions reflecting both semantic and relational role information. Finally, we provide an example of how similarity judgments on sentence stimuli can serve as a point of comparison for artificial neural network models (ANNs) by comparing our behavioral data against sentence similarity extracted from three state-of-the-art ANNs. Overall, our method combining the multiple arrangement task on sentence stimuli with matrix factorization can capture relational information emerging from integration of multiple words in a sentence even in the presence of strong focus on the verb. -
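The non-negative matrix factorization step mentioned in this abstract can be illustrated with a small sketch. This is not the authors' code or data; it is a minimal, self-contained example (simulated similarity matrix, plain NumPy, Lee & Seung multiplicative updates) of how latent dimensions might be recovered from a non-negative item-by-item similarity matrix:

```python
# Illustrative sketch only (not from the paper): factorize a simulated
# non-negative similarity matrix into latent dimensions with
# multiplicative-update NMF.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_dims = 20, 3

# Simulated ground-truth loadings; similarity = inner products, so the
# matrix is symmetric, non-negative, and exactly rank n_dims.
W_true = rng.random((n_items, n_dims))
similarity = W_true @ W_true.T

# Random non-negative initialization of the two factors.
W = rng.random((n_items, n_dims))
H = rng.random((n_dims, n_items))

eps = 1e-9  # guard against division by zero
for _ in range(2000):
    H *= (W.T @ similarity) / (W.T @ W @ H + eps)
    W *= (similarity @ H.T) / (W @ H @ H.T + eps)

rel_error = np.linalg.norm(W @ H - similarity) / np.linalg.norm(similarity)
print(f"relative reconstruction error: {rel_error:.4f}")
```

Each row of W can then be read as an item's loading on the recovered dimensions; in the study's setting the items would be sentences and the dimensions would be candidates for semantic or relational-role factors.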
Arana, S., Pesnot Lerousseau, J., & Hagoort, P. (2024). Deep learning models to study sentence comprehension in the human brain. Language, Cognition and Neuroscience, 39(8), 972-990. doi:10.1080/23273798.2023.2198245.
Abstract
Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding. As such, they could be interesting models of the integration of linguistic information in the human brain. We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension. Two main results emerge. First, the neural representation of word meaning aligns with the context-dependent, dense word vectors used by the artificial neural networks. Second, the processing hierarchy that emerges within artificial neural networks broadly matches the brain, but is surprisingly inconsistent across studies. We discuss current challenges in establishing artificial neural networks as process models of natural language comprehension. We suggest exploiting the highly structured representational geometry of artificial neural networks when mapping representations to brain data. -
Bulut, T., & Hagoort, P. (2024). Contributions of the left and right thalami to language: A meta-analytic approach. Brain Structure & Function, 229, 2149-2166. doi:10.1007/s00429-024-02795-3.
Abstract
Background: Despite a pervasive cortico-centric view in cognitive neuroscience, subcortical structures including the thalamus have been shown to be increasingly involved in higher cognitive functions. Previous structural and functional imaging studies demonstrated cortico-thalamo-cortical loops which may support various cognitive functions including language. However, large-scale functional connectivity of the thalamus during language tasks has not been examined before. Methods: The present study employed meta-analytic connectivity modeling to identify language-related coactivation patterns of the left and right thalami. The left and right thalami were used as regions of interest to search the BrainMap functional database for neuroimaging experiments with healthy participants reporting language-related activations in each region of interest. Activation likelihood estimation analyses were then carried out on the foci extracted from the identified studies to estimate functional convergence for each thalamus. A functional decoding analysis based on the same database was conducted to characterize thalamic contributions to different language functions. Results: The results revealed bilateral frontotemporal and bilateral subcortical (basal ganglia) coactivation patterns for both the left and right thalami, and also right cerebellar coactivations for the left thalamus, during language processing. In light of previous empirical studies and theoretical frameworks, the present connectivity and functional decoding findings suggest that cortico-subcortical-cerebellar-cortical loops modulate and fine-tune information transfer within the bilateral frontotemporal cortices during language processing, especially during production and semantic operations, but also other language (e.g., syntax, phonology) and cognitive operations (e.g., attention, cognitive control). 
Conclusion: The current findings show that the language-relevant network extends beyond the classical left perisylvian cortices and spans bilateral cortical, bilateral subcortical (bilateral thalamus, bilateral basal ganglia) and right cerebellar regions. -
Fitz, H., Hagoort, P., & Petersson, K. M. (2024). Neurobiological causal models of language processing. Neurobiology of Language, 5(1), 225-247. doi:10.1162/nol_a_00133.
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the “machine language” of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language. -
Forkel, S. J., & Hagoort, P. (2024). Redefining language networks: Connectivity beyond localised regions. Brain Structure & Function, 229, 2073-2078. doi:10.1007/s00429-024-02859-4.
-
Giglio, L., Ostarek, M., Sharoh, D., & Hagoort, P. (2024). Diverging neural dynamics for syntactic structure building in naturalistic speaking and listening. Proceedings of the National Academy of Sciences of the United States of America, 121(11): e2310766121. doi:10.1073/pnas.2310766121.
Abstract
The neural correlates of sentence production have been mostly studied with constraining task paradigms that introduce artificial task effects. In this study, we aimed to gain a better understanding of syntactic processing in spontaneous production vs. naturalistic comprehension. We extracted word-by-word metrics of phrase-structure building with top-down and bottom-up parsers that make different hypotheses about the timing of structure building. In comprehension, structure building proceeded in an integratory fashion and led to an increase in activity in posterior temporal and inferior frontal areas. In production, structure building was anticipatory and predicted an increase in activity in the inferior frontal gyrus. Newly developed production-specific parsers highlighted the anticipatory and incremental nature of structure building in production, which was confirmed by a converging analysis of the pausing patterns in speech. Overall, the results showed that the unfolding of syntactic processing diverges between speaking and listening. -
Giglio, L., Sharoh, D., Ostarek, M., & Hagoort, P. (2024). Connectivity of fronto-temporal regions in syntactic structure building during speaking and listening. Neurobiology of Language, 5(4), 922-941. doi:10.1162/nol_a_00154.
Abstract
The neural infrastructure for sentence production and comprehension has been found to be mostly shared. The same regions are engaged during speaking and listening, with some differences in how strongly they activate depending on modality. In this study, we investigated how modality affects the connectivity between regions previously found to be involved in syntactic processing across modalities. We determined how constituent size and modality affected the connectivity of the pars triangularis of the left inferior frontal gyrus (LIFG) and of the left posterior temporal lobe (LPTL) with the pars opercularis of the LIFG, the anterior temporal lobe (LATL) and the rest of the brain. We found that constituent size reliably increased the connectivity across these frontal and temporal ROIs. Connectivity between the two LIFG regions and the LPTL was enhanced as a function of constituent size in both modalities, and it was upregulated in production possibly because of linearization and motor planning in the frontal cortex. The connectivity of both ROIs with the LATL was lower and only enhanced for larger constituent sizes, suggesting a contributing role of the LATL in sentence processing in both modalities. These results thus show that the connectivity among fronto-temporal regions is upregulated for syntactic structure building in both sentence production and comprehension, providing further evidence for accounts of shared neural resources for sentence-level processing across modalities. -
Giglio, L., Hagoort, P., & Ostarek, M. (2024). Neural encoding of semantic structures during sentence production. Cerebral Cortex, 34(12): bhae482. doi:10.1093/cercor/bhae482.
Abstract
The neural representations for compositional processing have so far been mostly studied during sentence comprehension. In an fMRI study of sentence production, we investigated the brain representations for compositional processing during speaking. We used a rapid serial visual presentation sentence recall paradigm to elicit sentence production from the conceptual memory of an event. With voxel-wise encoding models, we probed the specificity of the compositional structure built during the production of each sentence, comparing an unstructured model of word meaning without relational information with a model that encodes abstract thematic relations and a model encoding event-specific relational structure. Whole-brain analyses revealed that sentence meaning at different levels of specificity was encoded in a large left frontal-parietal-temporal network. A comparison with semantic structures composed during the comprehension of the same sentences showed similarly distributed brain activity patterns. An ROI analysis over left fronto-temporal language parcels showed that event-specific relational structure above word-specific information was encoded in the left inferior frontal gyrus. Overall, we found evidence for the encoding of sentence meaning during sentence production in a distributed brain network and for the encoding of event-specific semantic structures in the left inferior frontal gyrus. -
Hagoort, P., & Özyürek, A. (2024). Extending the architecture of language from a multimodal perspective. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12728.
Abstract
Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation. -
Murphy, E., Rollo, P. S., Segaert, K., Hagoort, P., & Tandon, N. (2024). Multiple dimensions of syntactic structure are resolved earliest in posterior temporal cortex. Progress in Neurobiology, 241: 102669. doi:10.1016/j.pneurobio.2024.102669.
Abstract
How we combine minimal linguistic units into larger structures remains an unresolved topic in neuroscience. Language processing involves the abstract construction of ‘vertical’ and ‘horizontal’ information simultaneously (e.g., phrase structure, morphological agreement), but previous paradigms have been constrained in isolating only one type of composition and have utilized poor spatiotemporal resolution. Using intracranial recordings, we report multiple experiments designed to separate phrase structure from morphosyntactic agreement. Epilepsy patients (n = 10) were presented with auditory two-word phrases grouped into pseudoword-verb (‘trab run’) and pronoun-verb either with or without Person agreement (‘they run’ vs. ‘they runs’). Phrase composition and Person violations both resulted in significant increases in broadband high gamma activity approximately 300 ms after verb onset in posterior middle temporal gyrus (pMTG) and posterior superior temporal sulcus (pSTS), followed by inferior frontal cortex (IFC) at 500 ms. While sites sensitive to only morphosyntactic violations were distributed, those sensitive to both composition types were generally confined to pSTS/pMTG and IFC. These results indicate that posterior temporal cortex shows the earliest sensitivity for hierarchical linguistic structure across multiple dimensions, providing neural resources for distinct windows of composition. This region comprises sparsely interwoven heterogeneous constituents that afford cortical search spaces for dissociable syntactic relations. -
Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2024). Attention drives visual processing and audiovisual integration during multimodal communication. The Journal of Neuroscience, 44(10): e0870232023. doi:10.1523/JNEUROSCI.0870-23.2023.
Abstract
During communication in real-life settings, our brain often needs to integrate auditory and visual information, and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging (RIFT) and magnetoencephalography (MEG) to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing non-linear signal interactions, was enhanced in left frontotemporal and frontal regions. Focusing on LIFG (Left Inferior Frontal Gyrus), this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input. -
Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.
Abstract
Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and of N400 amplitude after, target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power was significantly affected by changes in predictive validity. -
Verdonschot, R. G., Van der Wal, J., Lewis, A. G., Knudsen, B., Von Grebmer zu Wolfsthurn, S., Schiller, N. O., & Hagoort, P. (2024). Information structure in Makhuwa: Electrophysiological evidence for a universal processing account. Proceedings of the National Academy of Sciences of the United States of America, 121(30): e2315438121. doi:10.1073/pnas.2315438121.
Abstract
There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can up-regulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. In order to test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as by means of intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. The participants were presented with sentences that consisted of either a semantically anomalous constituent or a semantically nonanomalous constituent. Moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: Focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to result in an upregulation of processing of information that is in focus. -
Zora, H., Bowin, H., Heldner, M., Riad, T., & Hagoort, P. (2024). The role of pitch accent in discourse comprehension and the markedness of Accent 2 in Central Swedish. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 921-925). doi:10.21437/SpeechProsody.2024-186.
Abstract
In Swedish, words are associated with either of two pitch contours known as Accent 1 and Accent 2. Using a psychometric test, we investigated how listeners judge pitch accent violations while interpreting discourse. Forty native speakers of Central Swedish were presented with auditory dialogues, where test words were appropriately or inappropriately accented in a given context, and asked to judge the correctness of sentences containing the test words. Data indicated a statistically significant effect of wrong accent pattern on the correctness judgment. Both Accent 1 and Accent 2 violations interfered with the coherent interpretation of discourse and were judged as incorrect by the listeners. Moreover, there was a statistically significant difference in the perceived correctness between the accent patterns. Accent 2 violations led to a lower correctness score compared to Accent 1 violations, indicating that the listeners were more sensitive to pitch accent violations in Accent 2 words than in Accent 1 words. This result is in line with the notion that Accent 2 is marked and lexically represented in Central Swedish. Taken together, these findings indicate that listeners use both Accent 1 and Accent 2 to arrive at the correct interpretation of the linguistic input, while assigning varying degrees of relevance to them depending on their markedness. -
Hagoort, P. (2007). The memory, unification, and control (MUC) model of language. In T. Sakamoto (Ed.), Communicating skills of intention (pp. 259-291). Tokyo: Hituzi Syobo. -
Hagoort, P. (2007). The memory, unification, and control (MUC) model of language. In A. S. Meyer, L. Wheeldon, & A. Krott (Eds.), Automaticity and control in language processing (pp. 243-270). Hove: Psychology Press. -
Hagoort, P., & Van Berkum, J. J. A. (2007). Beyond the sentence given. Philosophical Transactions of the Royal Society B: Biological Sciences, 362, 801-811.
Abstract
A central and influential idea among researchers of language is that our language faculty is organized according to Fregean compositionality, which states that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. Since the domain of syntactic rules is the sentence, the implication of this idea is that language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step, the sentence meaning is integrated with information from prior discourse, world knowledge, information about the speaker and semantic information from extra-linguistic domains such as co-speech gestures or the visual world. Here, we present results from recordings of event-related brain potentials that are inconsistent with this classical two-step model of language interpretation. Our data support a one-step model in which knowledge about the context and the world, concomitant information from other modalities, and the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. Underlying the one-step model is the immediacy assumption, according to which all available information will immediately be used to co-determine the interpretation of the speaker's message. Functional magnetic resonance imaging data that we collected indicate that Broca's area plays an important role in semantic unification. Language comprehension involves the rapid incorporation of information in a 'single unification space', coming from a broader range of cognitive domains than presupposed in the standard two-step model of interpretation. -
Hald, L. A., Steenbeek-Planting, E. G., & Hagoort, P. (2007). The interaction of discourse context and world knowledge in online sentence comprehension: Evidence from the N400. Brain Research, 1146, 210-218. doi:10.1016/j.brainres.2007.02.054.
Abstract
In an ERP experiment we investigated how the recruitment and integration of world knowledge information relate to the integration of information within a current discourse context. Participants were presented with short discourse contexts which were followed by a sentence that contained a critical word that was correct or incorrect based on general world knowledge and the supporting discourse context, or was more or less acceptable based on the combination of general world knowledge and the specific local discourse context. Relative to the critical word in the correct world knowledge sentences following a neutral discourse, all other critical words elicited an N400 effect that began at about 300 ms after word onset. However, the magnitude of the N400 effect varied in a way that suggests an interaction between world knowledge and discourse context. The results indicate that both world knowledge and discourse context have an effect on sentence interpretation, but neither overrides the other. -
Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.
Abstract
During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated with previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures. -
De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., & Toni, I. (2007). On the origins of intentions. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Sensorimotor foundations of higher cognition (pp. 593-610). Oxford: Oxford University Press. -
Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.
Abstract
Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners. -
Wassenaar, M., & Hagoort, P. (2007). Thematic role assignment in patients with Broca's aphasia: Sentence-picture matching electrified. Neuropsychologia, 45(4), 716-740. doi:10.1016/j.neuropsychologia.2006.08.016.
Abstract
An event-related brain potential experiment was carried out to investigate on-line thematic role assignment during sentence–picture matching in patients with Broca's aphasia. Subjects were presented with a picture that was followed by an auditory sentence. The sentence either matched the picture or mismatched the visual information depicted. Sentences differed in complexity, and ranged from simple active semantically irreversible sentences to passive semantically reversible sentences. ERPs were recorded while subjects were engaged in sentence–picture matching. In addition, reaction time and accuracy were measured. Three groups of subjects were tested: Broca patients (N = 10), non-aphasic patients with a right hemisphere (RH) lesion (N = 8), and healthy age-matched controls (N = 15). The results of this study showed that, in neurologically unimpaired individuals, thematic role assignment in the context of visual information was an immediate process. This is in contrast to patients with Broca's aphasia, who demonstrated no signs of on-line sensitivity to the picture–sentence mismatches. The syntactic contribution to the thematic role assignment process seemed to be diminished given the reduction and even absence of P600 effects. Nevertheless, Broca patients showed some off-line behavioral sensitivity to the sentence–picture mismatches. The long response latencies of Broca's aphasics make it likely that off-line response strategies were used. -
Willems, R. M., Ozyurek, A., & Hagoort, P. (2007). When language meets action: The neural integration of gesture and speech. Cerebral Cortex, 17(10), 2322-2333. doi:10.1093/cercor/bhl141.
Abstract
Although generally studied in isolation, language and action often co-occur in everyday life. Here we investigated one particular form of simultaneous language and action, namely speech and gestures that speakers use in everyday communication. In a functional magnetic resonance imaging study, we identified the neural networks involved in the integration of semantic information from speech and gestures. Verbal and/or gestural content could be integrated easily or less easily with the content of the preceding part of speech. Premotor areas involved in action observation (Brodmann area [BA] 6) were found to be specifically modulated by action information "mismatching" to a language context. Importantly, an increase in integration load of both verbal and gestural information into prior speech context activated Broca's area and adjacent cortex (BA 45/47). A classical language area, Broca's area, is not only recruited for language-internal processing but also when action observation is integrated with speech. These findings provide direct evidence that action and language processing share a high-level neural integration system. -
Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between language, gesture, and action: A review. Brain and Language, 101(3), 278-289. doi:10.1016/j.bandl.2007.03.004.
Abstract
Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we will give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. It will be concluded that there is strong evidence on the interaction between speech and gestures in the brain. This interaction however shares general properties with other domains in which there is interplay between language and action.