Antje Meyer

Presentations

  • Akamine, S., Dingemanse, M., Meyer, A. S., & Ozyurek, A. (2023). Contextual influences on multimodal alignment in Zoom interaction. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
  • Bethke, S., Meyer, A. S., & Hintz, F. (2023). Developing the individual differences in language skills (IDLaS-DE) test battery—A new tool for German. Poster presented at Psycholinguistics in Flanders (PiF 2023), Ghent, Belgium.
  • Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). When the beat drops – beat gestures recalibrate lexical stress perception. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
  • Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). Beat gestures can drive recalibration of lexical stress perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
  • Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). Beat gestures can drive recalibration of lexical stress perception. Poster presented at the Donders Poster Session 2023, Nijmegen, The Netherlands.
  • Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.

    Abstract

    Speaking requires translating concepts into a sequence of sounds. Contemporary models of language production assume that this translation involves a series of steps: from selecting the concepts to be expressed, to phonetic and articulatory encoding of the words. In addition, speakers monitor their planned output using sensorimotor predictive mechanisms. The current work concerns phonetic encoding and the speaker's monitoring of articulation. Specifically, we test whether monitoring is sensitive to the frequency of syllable-sized representations.
    We run a series of immediate and delayed syllable production experiments (repetition and reading). We exploit the syllable-frequency effect: in immediate naming, high-frequency syllables are produced faster than low-frequency syllables. The effect is thought to reflect the stronger automatization of motor plan retrieval for high-frequency syllables during phonetic encoding. We predict distinct ERP and spatiotemporal patterns for high- vs. low-frequency syllables. Following articulation, we analyse auditory-evoked N1 responses that – among other features – reflect the suppression of one's own speech. Low-frequency syllables are expected to require closer monitoring and should therefore elicit smaller N1/P2 amplitudes. The results are of interest because effects of syllable frequency stand to inform us about the tradeoff between stored versus assembled representations for setting sensory targets in the production of speech.
  • Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
  • Corps, R. E., & Meyer, A. S. (2023). Repetition leads to long-term suppression of the word frequency effect. Talk presented at Psycholinguistics in Flanders (PiF 2023). Ghent, Belgium. 2023-05-29 - 2023-05-31.
  • Meyer, A. S., Schulz, F., & Hintz, F. (2023). Accounting for good enough conversational speech. Talk presented at the IndiPrag Workshop. Saarbruecken, Germany. 2023-09-18 - 2023-09-19.
  • Papoutsi, C., Tourtouri, E. N., Piai, V., Lampe, L. F., & Meyer, A. S. (2023). Fast and efficient or slow and struggling? Comparing the response times of errors and targets in speeded word production. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in the production of speech disfluencies. Poster presented at Psycholinguistics in Flanders (PiF 2023), Ghent, Belgium.
  • Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in the production of speech disfluencies. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in disfluency production. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    Producing spontaneous speech is challenging. It often contains disfluencies like repetitions, prolongations, silent pauses or filled pauses. Previous research has largely focused on the language-based factors (e.g., planning difficulties) underlying the production of these disfluencies. But research has also shown that some speakers are more disfluent than others. What cognitive mechanisms underlie this difference? We reanalyzed a behavioural dataset of 112 participants, who were assessed on a battery of tasks testing linguistic knowledge, processing speed, non-verbal IQ, working memory, and basic production skills and also produced six 1-minute samples of spontaneous speech (Hintz et al., 2020). We assessed the length and lexical diversity of participants’ speech and determined how often they produced silent pauses and filled pauses. We used network analysis, factor analysis and non-parametric regressions to investigate the relationship between these variables and individual differences in particular cognitive skills. We found that individual differences in linguistic knowledge or processing speed were not related to the production of disfluencies. In contrast, the proportion of filled pauses (relative to all words in the 1-minute narratives) correlated negatively with working memory capacity.
  • Slaats, S., Meyer, A. S., & Martin, A. E. (2023). Do surprisal and entropy affect delta-band signatures of syntactic processing? Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Slaats, S., Meyer, A. S., & Martin, A. E. (2023). Do surprisal and entropy affect delta-band signatures of syntactic processing? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
  • Tourtouri, E. N., & Meyer, A. S. (2023). If you hear something (don’t) say something: A dual-EEG study on sentence processing in conversational settings. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). No evidence for convergence to sub-phonemic F2 shifts in shadowing. Poster presented at the 20th International Congress of the Phonetic Sciences (ICPhS 2023), Prague, Czech Republic.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). The influence of contextual and talker F0 information on fricative perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). Listeners converge to fundamental frequency in synchronous speech. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    Convergence broadly refers to interlocutors’ tendency to progressively sound more like each other over time. Recent empirical work has used various experimental paradigms to observe convergence in voice fundamental frequency (f0). One study used stable mean f0 over trials in a synchronous speech task with manipulated (i.e., high and low) f0 conditions (Bradshaw & McGettigan, 2021). Here, we attempted to replicate this study in Dutch. First, in a reading task, participants read 40 sentences at their own pace to establish f0 baselines. Later, in a synchronous speech task, participants read 80 sentences in synchrony with a speaker whose voice was manipulated ±2 st above or below a reference mean f0 value (for the high and low f0 conditions, respectively). The reference mean f0 value and the manipulation size were determined through multiple pre-tests. Our results revealed that the f0 manipulation significantly predicted f0 convergence in both the high f0 and low f0 conditions. Furthermore, the proportion of convergers in the sample was larger than that reported by Bradshaw & McGettigan, highlighting the benefits of stimulus optimization. Our study thus provides stronger evidence that the pitch of two talkers tends to converge as they speak together.
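The ±2 st (semitone) shift in the abstract above maps onto frequency multiplicatively: a shift of s semitones multiplies f0 by 2^(s/12). A minimal sketch in Python; the 200 Hz reference value is illustrative only, as the abstract does not report the actual reference f0 used:

```python
def shift_f0(reference_hz: float, semitones: float) -> float:
    """Return the f0 (in Hz) obtained by shifting reference_hz by `semitones`.

    A semitone is 1/12 of an octave, so each semitone multiplies
    frequency by 2 ** (1/12).
    """
    return reference_hz * 2 ** (semitones / 12)

# Illustrative +/-2 st manipulation around a hypothetical 200 Hz reference:
high_condition = shift_f0(200.0, 2.0)   # ~224.5 Hz
low_condition = shift_f0(200.0, -2.0)   # ~178.2 Hz
```

Note that the shift is symmetric in semitones but not in Hz: the high condition sits about 24.5 Hz above the reference, while the low condition sits only about 21.8 Hz below it.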
  • van der Burght, C. L., Schipperus, L., & Meyer, A. S. (2023). Does syntactic category constrain semantic interference during sentence production? A replication of Momma et al. (2020). Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • van der Burght, C. L., & Meyer, A. S. (2023). Does syntactic category constrain semantic interference effects during sentence production? A replication of Momma et al (2020). Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    The semantic interference effect in picture naming entails longer naming latencies for pictures presented with semantically related versus unrelated distractors. One factor suggested to influence the effect is word category. However, results have been inconclusive. Momma et al. (2020) used a sentence-picture interference paradigm in which the sentence context (“her singing” or “she’s singing”) disambiguated the word category (noun or verb, respectively) of distractor and target, manipulating their word category match/mismatch. Semantic interference was only found when distractor and target belonged to the same word category, suggesting that syntactic category constrains lexical competition during sentence production. Considering this important theoretical conclusion, we conducted a preregistered replication study with Dutch participants, mirroring the design of the original study. In each of two experiments, 60 native speakers read sentences containing sentence-final distractor words that had to be interpreted as nouns or verbs, depending on the sentence context. Subsequently, they named target action pictures as either verbs (Experiment 1) or nouns (Experiment 2). Results of Experiment 1 showed a main effect of relatedness, suggesting a semantic interference effect regardless of word category. We discuss differences between the original and current results with reference to cross-linguistic differences in (de)compositional processing and in the frequency of distractor forms.
  • Araújo, S., Huettig, F., & Meyer, A. S. (2016). What's the nature of the deficit underlying impaired naming? An eye-tracking study with dyslexic readers. Talk presented at IWORDD - International Workshop on Reading and Developmental Dyslexia. Bilbao, Spain. 2016-05-05 - 2016-05-07.

    Abstract

    Serial naming deficits have been identified as core symptoms of developmental dyslexia. A prominent hypothesis is that naming delays are due to inefficient phonological encoding, yet the exact nature of this underlying impairment remains largely underspecified. Here we used recordings of eye movements and word onset latencies to examine at what processing level the dyslexic naming deficit emerges: localized at an early stage of lexical encoding or rather later, at the level of phonetic or motor planning. 23 dyslexic and 25 control adult readers were tested on a serial object naming task for 30 items and an analogous reading task, where phonological neighborhood density and word frequency were manipulated. Results showed that both word properties influenced early stages of phonological activation (first fixation and first-pass duration) equally in both groups of participants. Moreover, in the control group any difficulty appeared to be resolved early in the reading process, while for dyslexic readers a processing disadvantage for low-frequency words and for words with sparse neighborhoods also emerged in a measure that included late stages of output planning (eye-voice span). Thus, our findings suggest suboptimal phonetic and/or articulatory planning in dyslexia.
  • Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2016). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
  • Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2016). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Talk presented at Psycholinguistics in Flanders (PiF 2016). Antwerp, Belgium. 2016-05-25 - 2016-05-27.
  • Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment reflects temporal predictions guiding speech comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Speech segmentation requires flexible mechanisms to remain robust to features such as speech rate and pronunciation. Recent hypotheses suggest that low-frequency neural oscillations entrain to ongoing syllabic and phrasal rates, and that neural entrainment provides a speech-rate-invariant means to discretize linguistic tokens from the acoustic signal. How this mechanism functionally operates remains unclear. Here, we test the hypothesis that neural entrainment reflects temporal predictive mechanisms. It implies that neural entrainment is built on the dynamics of past speech information: the brain would internalize the rhythm of preceding speech to parse the ongoing acoustic signal at optimal time points. A direct prediction is that ongoing neural oscillatory activity should match the rate of preceding speech even if the stimulation changes, for instance when the speech rate suddenly increases or decreases. Crucially, the persistence of neural entrainment to the past speech rate should modulate speech perception. We performed an MEG experiment in which native Dutch speakers listened to sentences with varying speech rates. The beginning of the sentence (carrier window) was presented at either a fast or a slow speech rate, while the last three words (target window) were presented at an intermediate rate across trials. Participants had to report their perception of the last word of the sentence, which was ambiguous with regard to its vowel duration (short vowel /ɑ/ – long vowel /aː/ contrast). MEG data were analyzed in source space using beamformer methods. Consistent with previous behavioral reports, the perception of the ambiguous target word was influenced by the past speech rate; participants reported more /aː/ percepts after a fast speech rate, and more /ɑ/ percepts after a slow speech rate. During the carrier window, neural oscillations efficiently tracked the dynamics of the speech envelope. During the target window, we observed oscillatory activity that corresponded in frequency to the preceding speech rate. Traces of neural entrainment to the past speech rate were observed in medial prefrontal areas. Right superior temporal cortex also showed persisting oscillatory activity that correlated with the observed perceptual biases: participants whose perception was more influenced by the manipulation in speech rate also showed stronger remaining neural oscillatory patterns. The results show that neural entrainment lasts after rhythmic stimulation. The findings further provide empirical support for oscillatory models of speech processing, suggesting that neural oscillations actively encode temporal predictions for speech comprehension.
  • Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment to speech rhythms reflects temporal predictions and influences word comprehension. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
  • Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2016). The contribution of vocabulary size to language processing: Evidence from lexical decision and picture-word interference. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Previous research indicates that general cognitive abilities, such as attention or executive control, contribute to language processing (Hartsuiker & Barkhuysen, 2006; Jongman et al., 2014; Shao et al., 2013). Potential effects of language-specific abilities, such as vocabulary, on language processing in adult native speakers have been examined less extensively. Goals: a) develop and assess measures of vocabulary size in Dutch native speakers, and b) investigate the relationship between individual differences in vocabulary and language processing.
  • Maslowski, M., Bosker, H. R., & Meyer, A. S. (2016). Slow speech can sound fast: How the speech rate of one talker affects perception of another talker. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-24 - 2016-11-25.
  • Maslowski, M., Bosker, H. R., & Meyer, A. S. (2016). Slow speech can sound fast: How the speech rate of one talker has a contrastive effect on the perception of another talker. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Listeners are continuously exposed to a broad range of speech rates. Earlier work has shown that listeners perceive phonetic category boundaries relative to contextual speech rate. It has been suggested that this process of speech rate normalization occurs across talker changes. This would predict that the speech rate of talker A influences perception of the rate of another talker B. We assessed this hypothesis by testing effects of speech rate on the perception of the Dutch vowel continuum /A/-/a:/. One participant group was exposed to 'neutral' speech from talker A intermixed with fast speech from talker B. Another group listened to the same speech from talker A, but to slow speech from talker B. We observed a difference in the perception of talker A depending on the speech rate of talker B: A's 'neutral' speech was perceived as slow when B spoke faster. These findings corroborate the idea that speech rate normalization occurs across talkers, but they challenge the assumption that listeners average over speech rates from multiple talkers. Instead, they suggest that listeners contrast talker-specific rates.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2016). Slow speech can sound fast: How the speech rate of one talker has a contrastive effect on the perception of another talker. Talk presented at MPI Proudly Presents. Nijmegen, The Netherlands. 2016-06-01.
  • McQueen, J. M., & Meyer, A. S. (2016). Cognitive architectures [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Meyer, A. S. (2016). Utterance planning and resource allocation in dialogue. Talk presented at the Psychology Department, University of Geneva. Geneva, Switzerland. 2016-05-09.
  • Meyer, A. S. (2016). Utterance planning and resource allocation in dialogue. Talk presented at the International Workshop on Language Production (IWLP 2016). La Jolla, CA, USA. 2016-07-25 - 2016-07-27.

    Abstract

    Natural conversations are characterized by smooth transitions of turns between interlocutors. For instance, speakers often respond to questions or requests within half a second. As planning the first word of an utterance can easily take a second or more, this suggests that utterance planning often overlaps with listening to the preceding speaker's utterance. A specific proposal concerning the temporal coordination of listening and speech planning has recently been made by Levinson and Torreira (2016, Frontiers in Psychology; Levinson, 2016, Trends in Cognitive Sciences). They propose that speakers initiate their speech planning as soon as they have understood the speech act and gist of the preceding utterance. However, direct evidence for simultaneous listening and speech planning is scarce. I will first review studies demonstrating that both comprehending spoken utterances and planning them require processing capacity and that these processes can substantially interfere with each other. These data suggest that concurrent speech planning and listening should be cognitively quite challenging. In the second part of the talk I will turn to studies examining directly when utterance planning in dialogue begins. These studies indicate that (regrettably) there are probably no hard-and-fast rules for the temporal coordination of listening and speech planning. I will argue that (regrettably again) we need models that are far more complex than Levinson and Torreira's proposal to understand how listening and speech planning are coordinated in conversation.
  • Weber, K., Meyer, A. S., & Hagoort, P. (2016). The acquisition of verb-argument and verb-noun category biases in a novel word learning task. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    We show that language users readily learn the probabilities of novel lexical cues to syntactic information (verbs biasing towards a prepositional object dative vs. double-object dative and words biasing towards a verb vs. noun reading) and use these biases in a subsequent production task. In a one-hour exposure phase participants read 12 novel lexical items, embedded in 30 sentence contexts each, in their native language. The items were either strongly (100%) biased towards one grammatical frame or syntactic category assignment or unbiased (50%). The next day participants produced sentences with the newly learned lexical items. They were given the sentence beginning up to the novel lexical item. Their output showed that they were highly sensitive to the biases introduced in the exposure phase.
    Given this rapid learning and use of novel lexical cues, this paradigm opens up new avenues to test sentence processing theories. Thus, with close control on the biases participants are acquiring, competition between different frames or category assignments can be investigated using reaction times or neuroimaging methods.
    Generally, these results show that language users adapt to the statistics of the linguistic input, even to subtle lexically-driven cues to syntactic information.