Akamine, S., Dingemanse, M., Meyer, A. S., & Ozyurek, A. (2023). Contextual influences on multimodal alignment in Zoom interaction. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
-
Bethke, S., Meyer, A. S., & Hintz, F. (2023). Developing the individual differences in language skills (IDLaS-DE) test battery—A new tool for German. Poster presented at Psycholinguistics in Flanders (PiF 2023), Ghent, Belgium.
-
Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). When the beat drops – beat gestures recalibrate lexical stress perception. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
-
Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). Beat gestures can drive recalibration of lexical stress perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
-
Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). Beat gestures can drive recalibration of lexical stress perception. Poster presented at the Donders Poster Session 2023, Nijmegen, The Netherlands.
-
Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
Abstract
Speaking requires translating concepts into a sequence of sounds. Contemporary models of language production assume that this translation involves a series of steps: from selecting the concepts to be expressed, to phonetic and articulatory encoding of the words. In addition, speakers monitor their planned output using sensorimotor predictive mechanisms. The current work concerns phonetic encoding and the speaker's monitoring of articulation. Specifically, we test whether monitoring is sensitive to the frequency of syllable-sized representations.
We run a series of immediate and delayed syllable production experiments (repetition and reading). We exploit the syllable-frequency effect: in immediate naming, high-frequency syllables are produced faster than low-frequency syllables. The effect is thought to reflect the stronger automatization of motor-plan retrieval for high-frequency syllables during phonetic encoding. We predict distinct ERP and spatiotemporal patterns for high- vs. low-frequency syllables. Following articulation, we analyse auditory-evoked N1 responses that – among other features – reflect the suppression of one's own speech. Low-frequency syllables are expected to require closer monitoring, and therefore to elicit smaller N1/P2 amplitudes. These results are important because effects of syllable frequency stand to inform us about the trade-off between stored and assembled representations in setting sensory targets for speech production.
-
Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, Netherlands.
-
Corps, R. E., & Meyer, A. S. (2023). Repetition leads to long-term suppression of the word frequency effect. Talk presented at Psycholinguistics in Flanders (PiF 2023). Ghent, Belgium. 2023-05-29 - 2023-05-31.
-
Meyer, A. S., Schulz, F., & Hintz, F. (2023). Accounting for good enough conversational speech. Talk presented at the IndiPrag Workshop. Saarbrücken, Germany. 2023-09-18 - 2023-09-19.
-
Papoutsi, C., Tourtouri, E. N., Piai, V., Lampe, L. F., & Meyer, A. S. (2023). Fast and efficient or slow and struggling? Comparing the response times of errors and targets in speeded word production. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in the production of speech disfluencies. Poster presented at Psycholinguistics in Flanders (PiF 2023), Ghent, Belgium.
-
Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in the production of speech disfluencies. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in disfluency production. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
Producing spontaneous speech is challenging. It often contains disfluencies like repetitions, prolongations, silent pauses or filled pauses. Previous research has largely focused on the language-based factors (e.g., planning difficulties) underlying the production of these disfluencies. But research has also shown that some speakers are more disfluent than others. What cognitive mechanisms underlie this difference? We reanalyzed a behavioural dataset of 112 participants, who were assessed on a battery of tasks testing linguistic knowledge, processing speed, non-verbal IQ, working memory, and basic production skills and also produced six 1-minute samples of spontaneous speech (Hintz et al., 2020). We assessed the length and lexical diversity of participants’ speech and determined how often they produced silent pauses and filled pauses. We used network analysis, factor analysis and non-parametric regressions to investigate the relationship between these variables and individual differences in particular cognitive skills. We found that individual differences in linguistic knowledge or processing speed were not related to the production of disfluencies. In contrast, the proportion of filled pauses (relative to all words in the 1-minute narratives) correlated negatively with working memory capacity.
-
Slaats, S., Meyer, A. S., & Martin, A. E. (2023). Do surprisal and entropy affect delta-band signatures of syntactic processing? Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
Slaats, S., Meyer, A. S., & Martin, A. E. (2023). Do surprisal and entropy affect delta-band signatures of syntactic processing? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Tourtouri, E. N., & Meyer, A. S. (2023). If you hear something (don’t) say something: A dual-EEG study on sentence processing in conversational settings. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). No evidence for convergence to sub-phonemic F2 shifts in shadowing. Poster presented at the 20th International Congress of the Phonetic Sciences (ICPhS 2023), Prague, Czech Republic.
-
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). The influence of contextual and talker F0 information on fricative perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
-
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). Listeners converge to fundamental frequency in synchronous speech. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
Convergence broadly refers to interlocutors’ tendency to progressively sound more like each other over time. Recent empirical work has used various experimental paradigms to observe convergence in voice fundamental frequency (f0). One study used stable mean f0 over trials in a synchronous speech task with manipulated (i.e., high and low) f0 conditions (Bradshaw & McGettigan, 2021). Here, we attempted to replicate this study in Dutch. First, in a reading task, participants read 40 sentences at their own pace to establish f0 baselines. Later, in a synchronous speech task, participants read 80 sentences in synchrony with a speaker whose voice was manipulated ±2 st above or below (i.e., for the high and low f0 conditions, respectively) a reference mean f0 value. The reference mean f0 value and the manipulation size were obtained across multiple pre-tests. Our results revealed that the f0 manipulation significantly predicted f0 convergence in both the high f0 and low f0 conditions. Furthermore, the proportion of convergers in the sample was larger than that reported by Bradshaw & McGettigan, highlighting the benefits of stimulus optimization. Our study thus provides stronger evidence that the pitch of two talkers tends to converge as they speak together.
-
van der Burght, C. L., Schipperus, L., & Meyer, A. S. (2023). Does syntactic category constrain semantic interference during sentence production? A replication of Momma et al. (2020). Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
van der Burght, C. L., & Meyer, A. S. (2023). Does syntactic category constrain semantic interference effects during sentence production? A replication of Momma et al. (2020). Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
The semantic interference effect in picture naming entails longer naming latencies for pictures presented with semantically related versus unrelated distractors. One factor suggested to influence the effect is word category. However, results have been inconclusive. Momma et al. (2020) used a sentence-picture interference paradigm in which the sentence context (“her singing” or “she’s singing”) disambiguated the word category (noun or verb, respectively) of distractor and target, manipulating their word-category match/mismatch. Semantic interference was only found when distractor and target belonged to the same word category, suggesting that syntactic category constrains lexical competition during sentence production. Considering this important theoretical conclusion, we conducted a preregistered replication study with Dutch participants, mirroring the design of the original study. In each of two experiments, 60 native speakers read sentences containing sentence-final distractor words that had to be interpreted as nouns or verbs, depending on the sentence context. Subsequently, they named target action pictures as either verbs (Experiment 1) or nouns (Experiment 2). Results of Experiment 1 showed a main effect of relatedness, suggesting a semantic interference effect regardless of word category. We discuss differences between the original and the current results in terms of cross-linguistic differences in (de)compositional processing and the frequency of distractor forms.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Dual-tasking in language: Concurrent production and comprehension interfere at the phonological level. Poster presented at Psycholinguistics in Flanders (PiF 2017), Leuven, Belgium.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Dual-tasking in language: Concurrent production and comprehension interfere at the phonological level. Poster presented at the Experimental Psychology Society Belfast Meeting, Belfast, UK.
Abstract
Conversation often involves simultaneous production and comprehension, yet little research has investigated whether these two processes interfere with one another. We tested participants’ ability to dual-task with production and comprehension tasks. Task one (production task) was picture naming. Task two (comprehension task) was either syllable identification (linguistic condition) or tone identification (non-linguistic condition). The two identification tasks were matched for difficulty. Three SOAs (50ms, 300ms, and 1800ms) resulted in different amounts of overlap between the production and comprehension tasks. We hypothesized that, as production and comprehension use similar resources, there would be greater interference with concurrent linguistic than non-linguistic tasks.
At the 50ms SOA, picture naming latencies were slower in the linguistic compared to the non-linguistic condition, suggesting that the resources required for production and comprehension overlap more in the linguistic condition. As the syllables were non-words without lexical representations, this interference likely occurs primarily at the phonological level. Across all SOAs, identification RTs were longer in the linguistic condition, showing that such phonological interference percolates through to the comprehension task, regardless of SOA. In sum, these results demonstrate that concurrent access to the phonological level in production and comprehension results in measurable interference in both speaking and comprehending.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Hoedemaker, R. S., & Meyer, A. S. (2017). Coordination and preparation of utterances in a joint-naming task. Talk presented at the Experimental Psychology Society Belfast Meeting. Belfast, UK. 2017-04-10 - 2017-04-12.
-
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2017). “That’s a spatelhouder!”: How source memory is influenced by speakers’ social categories in a word-learning paradigm. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2017). Speakers' social identity affects source memory for novel words. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Jongman, S. R., Meyer, A. S., & Piai, V. (2017). Brain signature of planning for production: An EEG study. Talk presented at the Workshop 'Revising formal semantic and pragmatic theories from a neurocognitive perspective'. Bochum, Germany. 2017-06-19 - 2017-06-20.
-
Jongman, S. R., & Meyer, A. S. (2017). Simultaneous listening and planning for production: Full or partial comprehension? Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). When slow speech sounds fast: How the speech rate of one talker influences perception of another talker. Talk presented at the IPS workshop: Abstraction, Diversity, and Speech Dynamics. Herrsching am Ammersee, Germany. 2017-05-03 - 2017-05-05.
Abstract
Listeners are continuously exposed to a broad range of speech rates. Earlier work has shown that listeners perceive phonetic category boundaries relative to contextual speech rate. This process of rate-dependent speech perception has been suggested to occur across talker changes, with the speech rate of talker A influencing perception of talker B. This study tested whether a ‘global’ speech rate calculated over multiple talkers and over a longer period of time affected perception of the temporal Dutch vowel contrast /ɑ/-/a:/. First, Experiment 1 demonstrated that listeners more often reported hearing long /a:/ in fast contexts than in ‘neutral rate’ contexts, replicating earlier findings. Then, in Experiment 2, one participant group was exposed to ‘neutral’ speech from talker A intermixed with slow speech from talker B. Another group listened to the same ‘neutral’ speech from talker A, but to fast speech from talker B. Between-group comparison in the ‘neutral’ condition revealed that Group 1 reported more long /a:/ than Group 2, indicating that A’s ‘neutral’ speech sounded faster when B was slower. Finally, Experiment 3 tested whether talking at slow or fast rates oneself elicits the same ‘global’ rate effects. However, no evidence was found that self-produced speech modulated perception of talker A. This study corroborates the idea that ‘global’ rate-dependent effects occur across talkers, but are insensitive to one’s own speech rate. Results are interpreted in light of the general auditory mechanisms thought to underlie rate normalization, with implications for our understanding of dialogue.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech affects perception depends on who is talking. Poster presented at the Donders Poster Sessions, Nijmegen, The Netherlands.
Abstract
Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to ‘neutral’ rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the ‘neutral’ trials revealed that the low-rate group reported a higher proportion of /a:/ in A’s ‘neutral’ speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one’s own speech rate also contributes to effects of long-term tracking of rate. Here, talker B’s speech was replaced by playback of participants’ own fast or slow speech. No evidence was found that one’s own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech rate affects perception depends on who is talking. Poster presented at Interspeech 2017, Stockholm, Sweden.
Abstract
Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to 'neutral' rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the 'neutral' trials revealed that the low-rate group reported a higher proportion of /a:/ in A's 'neutral' speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one's own speech rate also contributes to effects of long-term tracking of rate. Here, talker B's speech was replaced by playback of participants' own fast or slow speech. No evidence was found that one's own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue.
-
Meyer, A. S., Decuyper, C., & Coopmans, C. W. (2017). Distribution of attention in question-answer sequences: Evidence for limited parallel processing. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2017-01-03 - 2017-01-06.
-
Meyer, A. S. (2017). Towards understanding conversation: A psycholinguist's perspective. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). Compositional structure can emerge without generational transmission. Talk presented at the Inaugural Cultural Evolution Society Conference (CESC 2017). Jena, Germany. 2017-09-13 - 2017-09-15.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). Compositional structure can emerge without generational transmission. Talk presented at the 30th Annual CUNY Conference on Human Sentence Processing. Cambridge, MA, USA. 2017-03-30 - 2017-04-01.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). The role of community size in the emergence of linguistic structure. Talk presented at XLanS: Triggers of language change in the Language Sciences. Lyon, France. 2017-10-11 - 2017-10-13.
-
Rodd, J., Bosker, H. R., Ernestus, M., Ten Bosch, L., & Meyer, A. S. (2017). How we regulate speech rate: Phonetic evidence for a 'gain strategy' in speech planning. Poster presented at the Abstraction, Diversity and Speech Dynamics Workshop, Herrsching, Germany.
-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2017). Simulating speaking rate control: A spreading activation model of syllable timing. Poster presented at the Workshop Conversational speech and lexical representations, Nijmegen, The Netherlands.
Abstract
Speech can be produced at different rates. The ability to produce faster or slower speech may be thought to result from executive control processes enlisted to modulate lexical selection and phonological encoding stages of speech planning.
This study used simulations of the model of serial order in language by Dell, Burger and Svec (1997, DBS) to characterise the strategies adopted by speakers when naming pictures at fast, medium and slow prescribed rates. Our new implementation of DBS was able to produce activation patterns that correlated strongly with observed syllable-level timing of disyllabic words from this task.
For each participant, different speaking rates were associated with different regions of the DBS parameter space. The precise placement of the speaking rates in the parameter space differed markedly between participants. Participants applied broadly the same parameter manipulation to accelerate their speech. This was, however, not the case for deceleration. Hierarchical clustering revealed two distinct patterns of parameter adjustment employed to decelerate speech, suggesting that deceleration is not necessarily achieved by the inverse process of acceleration. In addition, potential refinements to the DBS model are discussed.
-
Shao, Z., & Meyer, A. S. (2017). How word and phrase frequencies affect noun phrase production. Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
-
Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2017). Combining Virtual Reality and EEG to study semantic and pragmatic processing in a naturalistic environment. Talk presented at the workshop 'Revising formal Semantic and Pragmatic theories from a Neurocognitive Perspective' (NeuroPragSem, 2017). Bochum, Germany. 2017-06-19 - 2017-06-20.
-
Van Paridon, J., Roelofs, A., & Meyer, A. S. (2017). Coordinating simultaneous comprehension and production: Behavioral and modelling findings from shadowing and simultaneous interpreting. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Weber, K., Meyer, A. S., & Hagoort, P. (2017). Learning lexical-syntactic biases: An fMRI study on how we connect words and structures. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
-
Zormpa, E., Hoedemaker, R. S., Brehm, L., & Meyer, A. S. (2017). The production and generation effect in picture naming: How lexical access and articulation influence memory. Poster presented at the Donders Poster Session, Nijmegen, The Netherlands.