Language processing involves at least two functional subcomponents: the long-term storage of words (Mental Lexicon) and the integration of words into a sentence-level interpretation (Unification). These components interact during real-time processing (Control). In this collaborative project, I investigate how this core functionality is implemented in biological circuits. Insights are derived from experimental neuroscience, dynamical systems analysis, and the simulation of recurrent spiking networks for sentence comprehension.
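To make the simulation component concrete, the sketch below sets up a minimal recurrent network of leaky integrate-and-fire neurons. It is not the project's actual model: the network size, time constants, connectivity, and the noisy external drive (a stand-in for word input) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# All values below are illustrative, not taken from the project.
N = 200              # neurons
dt = 1.0             # time step (ms)
tau_m = 20.0         # membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -54.0, -60.0   # potentials (mV)

# Sparse random recurrent weights; the last 20% of columns act as inhibitory neurons
W = rng.normal(0.0, 0.5, (N, N)) * (rng.random((N, N)) < 0.1)
W[:, int(0.8 * N):] *= -4.0

v = np.full(N, v_rest)
spikes = np.zeros(N)

for t in range(1000):                      # 1 s of simulated time
    i_ext = rng.normal(1.5, 0.5, N)        # noisy external drive (stand-in for word input)
    i_rec = W @ spikes                     # recurrent input from the previous step's spikes
    v += dt / tau_m * (v_rest - v) + i_ext + i_rec
    fired = v >= v_thresh                  # threshold crossing -> spike
    v[fired] = v_reset                     # reset neurons that fired
    spikes = fired.astype(float)
```

In networks of this kind, a "word" can be probed as a pattern of activity across the recurrent population, which is what makes questions about encoding, maintenance under noise, and retrieval from partial cues empirically tractable.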
Of particular interest are the memory characteristics of these networks. How are words encoded in neurobiological infrastructure? How are they maintained despite ongoing plasticity and noise? How are they retrieved from partial cues? What is the nature of the processing memory required for the rapid, context-dependent integration of meaning, and how is hierarchical processing supported by diverse memory timescales? We also investigate how language networks are shaped by learning through neurobiological plasticity principles, including neuronal adaptation, short-term synaptic facilitation, spike-timing-dependent plasticity, and consolidation. How do these mechanisms interact with the structural features of neurons, and with network connectivity at various spatial scales, to compute language functions?
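Of the plasticity principles named above, spike-timing-dependent plasticity has a standard textbook form that is easy to state: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one, and weakened when the order is reversed. The sketch below implements that classic pair-based rule; the amplitude and time-constant values are illustrative defaults, not parameters from the project.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one spike pair; delta_t = t_post - t_pre in ms."""
    if delta_t > 0:                                   # pre before post -> potentiation
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)     # post before pre -> depression

# A presynaptic spike 5 ms before a postsynaptic one strengthens the synapse;
# the reverse ordering weakens it.
print(stdp_dw(5.0))    # ~ +0.0078
print(stdp_dw(-5.0))   # ~ -0.0093
```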
The aim of this causal modelling approach is to gain a deeper understanding of how the human capacity for language is grounded in neural design.