A tradeoff between acoustic and linguistic feature encoding in spoken language comprehension
When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, a phenomenon known as neural tracking. In recent years, a large body of work has demonstrated tracking of acoustic and linguistic units at the phoneme and word levels and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, by internally generated linguistic units, or by the interplay of both remains contentious. We used naturalistic story listening to investigate whether phoneme-level features are tracked over and above acoustic edges, and whether word entropy, which can reflect sentence- and discourse-level constraints, affected the encoding of acoustic and phoneme-level features in a first language (Dutch) compared to a statistically familiar but uncomprehended language (French). We show that encoding models with phoneme-level linguistic features uncovered an increased neural tracking response; this signal was further amplified in the comprehended language, putatively reflecting the transformation of acoustic features into internally generated phoneme-level representations. Phonemes were tracked more strongly in the comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units.
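The encoding-model logic described above (does adding phoneme-level features improve prediction of the neural response beyond acoustic features alone?) can be illustrated with a minimal sketch. This is a hypothetical toy, not the study's actual pipeline: it uses simulated data, an invented envelope and phoneme-onset feature, and plain time-lagged ridge regression in place of a full temporal response function analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100            # assumed sampling rate in Hz (illustrative)
n = fs * 60         # one minute of simulated data
lags = 25           # 250 ms of time lags in the design matrix

# Simulated stimulus features: a smooth acoustic envelope and sparse phoneme onsets
envelope = np.convolve(np.abs(rng.standard_normal(n)), np.ones(10) / 10, mode="same")
phoneme_onsets = (rng.random(n) < 0.08).astype(float)

# Simulated neural signal: both features contribute through distinct response kernels
k_env = np.hanning(lags)
k_pho = np.sin(np.linspace(0.0, np.pi, lags))
eeg = (np.convolve(envelope, k_env, mode="full")[:n]
       + 0.7 * np.convolve(phoneme_onsets, k_pho, mode="full")[:n]
       + rng.standard_normal(n))  # additive noise

def lagged(x, lags):
    """Build a time-lagged design matrix of shape (n_samples, n_lags)."""
    X = np.zeros((len(x), lags))
    for l in range(lags):
        X[l:, l] = x[:len(x) - l]
    return X

def fit_predict(X, y, alpha=1.0):
    """Ridge regression on the first half; return prediction accuracy (r) on the second."""
    half = len(y) // 2
    Xtr, Xte, ytr = X[:half], X[half:], y[:half]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ w, y[half:])[0, 1]

X_acoustic = lagged(envelope, lags)
X_full = np.hstack([X_acoustic, lagged(phoneme_onsets, lags)])

r_acoustic = fit_predict(X_acoustic, eeg)
r_full = fit_predict(X_full, eeg)
print(f"acoustic-only r = {r_acoustic:.3f}, acoustic + phonemes r = {r_full:.3f}")
```

In this simulation the full model recovers extra predictive power because phoneme onsets genuinely drive the simulated signal; the abstract's claim is analogous, with real EEG/MEG responses in place of the synthetic `eeg` trace.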
Publication type: Poster
Publication date: 2023