The neurobiology of language beyond the information given
A central and influential idea among researchers of language is that our language faculty is organized according to the principle of strict compositionality, which implies that the meaning of an utterance is a function of the meanings of its parts and of the syntactic rules by which these parts are combined. The implication of this idea is that, beyond word recognition, language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step, the sentence meaning is integrated with information from prior discourse, with world knowledge, with information about the speaker, and with semantic information from extralinguistic domains such as co-speech gestures or the visual world. fMRI results and results from recordings of event-related brain potentials will be presented that are inconsistent with this classical model of language interpretation. Our data support a model in which knowledge about the context and the world, knowledge about concomitant information from other modalities, and knowledge about the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. The Memory, Unification and Control (MUC) model provides a neurobiologically plausible account of the underlying neural architecture. Resting-state connectivity data and results from Psycho-Physiological Interactions will be discussed, suggesting a division of labour between temporal and inferior frontal cortex. These results indicate that Broca’s area and adjacent cortex play an important role in semantic and syntactic unification operations. I will also discuss fMRI results that indicate the insufficiency of the Mirror Neuron Hypothesis to explain language understanding. Instead, I will sketch a picture of language processing from an embrained perspective.
Publication type
Talk
Publication date
2013