The syllable frequency effect before and after speaking
Speaking requires translating concepts into a sequence of sounds. Contemporary models of language production assume that this translation involves a series of steps: from selecting the concepts to be expressed, to phonetic and articulatory encoding of the words. In addition, speakers monitor their planned output using sensorimotor predictive mechanisms. The current work concerns phonetic encoding and the speaker's monitoring of articulation. Specifically, we test whether monitoring is sensitive to the frequency of syllable-sized representations.
We run a series of immediate and delayed syllable production experiments (repetition and reading). We exploit the syllable-frequency effect: in immediate naming, high-frequency syllables are produced faster than low-frequency syllables. This effect is thought to reflect the stronger automatization of motor-plan retrieval for high-frequency syllables during phonetic encoding. We predict distinct ERP and spatiotemporal patterns for high- versus low-frequency syllables. Following articulation, we analyse auditory-evoked N1 responses, which – among other features – reflect the suppression of one's own speech. Low-frequency syllables are expected to require closer monitoring and therefore to elicit smaller N1/P2 amplitudes. These results matter because syllable-frequency effects stand to inform us about the tradeoff between stored and assembled representations in setting sensory targets for speech production.
Publication type: Poster
Publication date: 2023