Julia Chauvet

Presentations

  • Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.

    Abstract

    Speaking requires translating concepts into a sequence of sounds. Contemporary models of language production assume that this translation involves a series of steps: from selecting the concepts to be expressed, to phonetic and articulatory encoding of the words. In addition, speakers monitor their planned output using sensorimotor predictive mechanisms. The current work concerns phonetic encoding and the speaker's monitoring of articulation. Specifically, we test whether monitoring is sensitive to the frequency of syllable-sized representations.
    We are running a series of immediate and delayed syllable production experiments (repetition and reading). We exploit the syllable-frequency effect: in immediate naming, high-frequency syllables are produced faster than low-frequency syllables. This effect is thought to reflect the stronger automatization of motor-plan retrieval for high-frequency syllables during phonetic encoding. We predict distinct ERP and spatiotemporal patterns for high- vs. low-frequency syllables. Following articulation, we analyse auditory-evoked N1 responses, which, among other features, reflect the suppression of one's own speech. Low-frequency syllables are expected to require closer monitoring and should therefore elicit smaller N1/P2 amplitudes. These results matter because effects of syllable frequency can inform the tradeoff between stored and assembled representations in setting sensory targets for speech production.
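
    To illustrate the planned contrast, here is a minimal sketch of how the post-articulation N1/P2 comparison could be set up in MNE-Python. The file name, event labels ("high_freq", "low_freq"), channel picks, and time windows are hypothetical assumptions for illustration, not the authors' actual pipeline.

        # Hypothetical N1/P2 comparison for high- vs. low-frequency syllables
        # using MNE-Python; all labels and windows below are assumptions.
        import mne

        # Epochs time-locked to speech onset (assumed file name)
        epochs = mne.read_epochs("sub-01_speech-locked-epo.fif")

        evoked_high = epochs["high_freq"].average()  # high-frequency syllables
        evoked_low = epochs["low_freq"].average()    # low-frequency syllables

        # Mean amplitude over fronto-central channels in typical N1 (80-120 ms)
        # and P2 (180-250 ms) windows after speech onset
        picks = ["Fz", "FCz", "Cz"]
        for label, evoked in [("high", evoked_high), ("low", evoked_low)]:
            n1 = evoked.copy().pick(picks).crop(0.08, 0.12).data.mean()
            p2 = evoked.copy().pick(picks).crop(0.18, 0.25).data.mean()
            print(f"{label}-frequency syllables: N1 = {n1:.2e} V, P2 = {p2:.2e} V")

    Smaller N1/P2 means for low-frequency syllables would be consistent with the prediction above; a full analysis would of course rely on statistics across participants rather than grand means.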
  • Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, Netherlands.

    Abstract

    Speaking requires translating concepts into a sequence of sounds. Contemporary models of language production assume that this translation involves a series of steps: from selecting the concepts to be expressed, to phonetic and articulatory encoding of the words. In addition, speakers monitor their planned output using sensorimotor predictive mechanisms. The current work concerns phonetic encoding and the speaker's monitoring of articulation. Specifically, we test whether monitoring is sensitive to the frequency of syllable-sized representations.
    We are running a series of immediate and delayed syllable production experiments (repetition and reading). We exploit the syllable-frequency effect: in immediate naming, high-frequency syllables are produced faster than low-frequency syllables. This effect is thought to reflect the stronger automatization of motor-plan retrieval for high-frequency syllables during phonetic encoding. We predict distinct ERP and spatiotemporal patterns for high- vs. low-frequency syllables. Following articulation, we analyse auditory-evoked N1 responses, which, among other features, reflect the suppression of one's own speech. Low-frequency syllables are expected to require closer monitoring and should therefore elicit smaller N1/P2 amplitudes. These results matter because effects of syllable frequency can inform the tradeoff between stored and assembled representations in setting sensory targets for speech production.
  • Roos, N. M., Chauvet, J., & Piai, V. (2023). Electrophysiological properties of the concise language paradigm (CLaP): Benchmark and application in stroke neuroplasticity. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    Studies investigating language commonly isolate a single modality or process, focusing on either comprehension or production. We aim to combine both in the CLaP, tapping into comprehension and production within each trial. The trial structure is identical across conditions: each trial presents an auditory sentence (constrained, unconstrained, or reversed) followed by a picture to name (normal or scrambled), reducing task-related confounds between conditions. In addition to constrained versus unconstrained picture naming (the context effect), we examined ERPs locked to sentence and picture onset, comparing neural responses to auditory versus time-reversed speech as well as recognition of real versus scrambled objects. We tested 21 healthy speakers with EEG and replicated the context effect of power decreases in the alpha-beta frequency range (8-25 Hz) during the pre-picture interval in left-hemisphere language areas. Picture-locked ERPs showed that visual evoked potentials (VEPs) differed significantly between conditions, especially in the P2 component (200-300 ms). Sentence-locked ERPs revealed auditory evoked potentials (AEPs) in response to normal speech (240-400 ms after sentence onset) that differed significantly from those to time-reversed speech. Given its well-matched contrasts across conditions, the CLaP offers promising opportunities for EEG investigations of language comprehension and production, and of their relationship, in a well-controlled setting in neurotypical and neurological populations.
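
    As a rough illustration of the replicated context effect, the following sketch computes alpha-beta (8-25 Hz) power around picture onset with MNE-Python. The file name, condition labels ("constrained", "unconstrained"), and the pre-picture window are hypothetical assumptions, not the study's actual code.

        # Hypothetical alpha-beta (8-25 Hz) power contrast in the pre-picture
        # interval; condition labels and window are assumptions.
        import numpy as np
        import mne
        from mne.time_frequency import tfr_morlet

        # Epochs time-locked to picture onset (t = 0), assumed to start at
        # least 1 s before the picture
        epochs = mne.read_epochs("sub-01_picture-locked-epo.fif")

        freqs = np.arange(8.0, 26.0)  # alpha-beta range, 8-25 Hz
        tfr_con = tfr_morlet(epochs["constrained"], freqs=freqs,
                             n_cycles=freqs / 2.0, return_itc=False)
        tfr_unc = tfr_morlet(epochs["unconstrained"], freqs=freqs,
                             n_cycles=freqs / 2.0, return_itc=False)

        # Mean power difference in the assumed pre-picture interval (-1 to 0 s);
        # lower power for constrained sentences mirrors the context effect
        diff = (tfr_con.crop(tmin=-1.0, tmax=0.0).data.mean()
                - tfr_unc.crop(tmin=-1.0, tmax=0.0).data.mean())
        print(f"Constrained minus unconstrained alpha-beta power: {diff:.3e}")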
