-
Akamine, S., Dingemanse, M., Meyer, A. S., & Ozyurek, A. (2023). Contextual influences on multimodal alignment in Zoom interaction. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
-
Bethke, S., Meyer, A. S., & Hintz, F. (2023). Developing the individual differences in language skills (IDLaS-DE) test battery—A new tool for German. Poster presented at Psycholinguistics in Flanders (PiF 2023), Ghent, Belgium.
-
Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). When the beat drops – beat gestures recalibrate lexical stress perception. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
-
Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). Beat gestures can drive recalibration of lexical stress perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
-
Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). Beat gestures can drive recalibration of lexical stress perception. Poster presented at the Donders Poster Session 2023, Nijmegen, The Netherlands.
-
Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
Abstract
Speaking requires translating concepts into a sequence of sounds. Contemporary models of language production assume that this translation involves a series of steps: from selecting the concepts to be expressed, to phonetic and articulatory encoding of the words. In addition, speakers monitor their planned output using sensorimotor predictive mechanisms. The current work concerns phonetic encoding and the speaker's monitoring of articulation. Specifically, we test whether monitoring is sensitive to the frequency of syllable-sized representations.
We run a series of immediate and delayed syllable production experiments (repetition and reading). We exploit the syllable-frequency effect: in immediate naming, high-frequency syllables are produced faster than low-frequency syllables. The effect is thought to reflect the stronger automatization of motor-plan retrieval for high-frequency syllables during phonetic encoding. We predict distinct ERP and spatiotemporal patterns for high- vs. low-frequency syllables. Following articulation, we analyse auditory-evoked N1 responses that – among other features – reflect the suppression of one's own speech. Low-frequency syllables are expected to require closer monitoring and therefore to elicit smaller N1/P2 amplitudes. The results are of interest because effects of syllable frequency stand to inform us about the tradeoff between stored and assembled representations for setting sensory targets in the production of speech.
-
Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, Netherlands.
-
Corps, R. E., & Meyer, A. S. (2023). Repetition leads to long-term suppression of the word frequency effect. Talk presented at Psycholinguistics in Flanders (PiF 2023). Ghent, Belgium. 2023-05-29 - 2023-05-31.
-
Meyer, A. S., Schulz, F., & Hintz, F. (2023). Accounting for good enough conversational speech. Talk presented at the IndiPrag Workshop. Saarbruecken, Germany. 2023-09-18 - 2023-09-19.
-
Papoutsi, C., Tourtouri, E. N., Piai, V., Lampe, L. F., & Meyer, A. S. (2023). Fast and efficient or slow and struggling? Comparing the response times of errors and targets in speeded word production. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in the production of speech disfluencies. Poster presented at Psycholinguistics in Flanders (PiF 2023), Ghent, Belgium.
-
Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in the production of speech disfluencies. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in disfluency production. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
Producing spontaneous speech is challenging. It often contains disfluencies such as repetitions, prolongations, silent pauses or filled pauses. Previous research has largely focused on the language-based factors (e.g., planning difficulties) underlying the production of these disfluencies. But research has also shown that some speakers are more disfluent than others. What cognitive mechanisms underlie this difference? We reanalyzed a behavioural dataset of 112 participants, who were assessed on a battery of tasks testing linguistic knowledge, processing speed, non-verbal IQ, working memory, and basic production skills, and who also produced six 1-minute samples of spontaneous speech (Hintz et al., 2020). We assessed the length and lexical diversity of participants' speech and determined how often they produced silent pauses and filled pauses. We used network analysis, factor analysis and non-parametric regressions to investigate the relationship between these variables and individual differences in particular cognitive skills. We found that individual differences in linguistic knowledge or processing speed were not related to the production of disfluencies. In contrast, the proportion of filled pauses (relative to all words in the 1-minute narratives) correlated negatively with working memory capacity.
-
Slaats, S., Meyer, A. S., & Martin, A. E. (2023). Do surprisal and entropy affect delta-band signatures of syntactic processing? Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
Slaats, S., Meyer, A. S., & Martin, A. E. (2023). Do surprisal and entropy affect delta-band signatures of syntactic processing? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Tourtouri, E. N., & Meyer, A. S. (2023). If you hear something (don’t) say something: A dual-EEG study on sentence processing in conversational settings. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). No evidence for convergence to sub-phonemic F2 shifts in shadowing. Poster presented at the 20th International Congress of the Phonetic Sciences (ICPhS 2023), Prague, Czech Republic.
-
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). The influence of contextual and talker F0 information on fricative perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
-
Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). Listeners converge to fundamental frequency in synchronous speech. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
Convergence broadly refers to interlocutors' tendency to progressively sound more like each other over time. Recent empirical work has used various experimental paradigms to observe convergence in voice fundamental frequency (f0). One study used stable mean f0 over trials in a synchronous speech task with manipulated (i.e., high and low) f0 conditions (Bradshaw & McGettigan, 2021). Here, we attempted to replicate this study in Dutch. First, in a reading task, participants read 40 sentences at their own pace to establish f0 baselines. Later, in a synchronous speech task, participants read 80 sentences in synchrony with a speaker whose voice was manipulated ±2 st above or below a reference mean f0 value (for the high and low f0 conditions, respectively). The reference mean f0 value and the manipulation size were established in several pre-tests. Our results revealed that the f0 manipulation significantly predicted f0 convergence in both the high and the low f0 condition. Furthermore, the proportion of convergers in the sample was larger than that reported by Bradshaw & McGettigan, highlighting the benefits of stimulus optimization. Our study thus provides stronger evidence that the pitch of two talkers tends to converge as they speak together.
-
van der Burght, C. L., Schipperus, L., & Meyer, A. S. (2023). Does syntactic category constrain semantic interference during sentence production? A replication of Momma et al. (2020). Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
-
van der Burght, C. L., & Meyer, A. S. (2023). Does syntactic category constrain semantic interference effects during sentence production? A replication of Momma et al. (2020). Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
The semantic interference effect in picture naming entails longer naming latencies for pictures presented with semantically related versus unrelated distractors. One factor suggested to influence the effect is word category, but results have been inconclusive. Momma et al. (2020) used a sentence-picture interference paradigm in which the sentence context ("her singing" or "she's singing") disambiguated the word category (noun or verb, respectively) of distractor and target, manipulating their word-category match/mismatch. Semantic interference was only found when distractor and target belonged to the same word category, suggesting that syntactic category constrains lexical competition during sentence production. Given this important theoretical conclusion, we conducted a preregistered replication study with Dutch participants, mirroring the design of the original study. In each of 2 experiments, 60 native speakers read sentences containing sentence-final distractor words that had to be interpreted as nouns or verbs, depending on the sentence context. Subsequently, they named target action pictures as either verbs (Experiment 1) or nouns (Experiment 2). Results of Experiment 1 showed a main effect of relatedness, suggesting a semantic interference effect regardless of word category. We discuss differences between the original and the current results in terms of cross-linguistic differences in (de)compositional processing and in the frequency of the distractor forms.
-
Hintz, F., Meyer, A. S., & Huettig, F. (2012). Looking at nothing facilitates memory retrieval. Poster presented at Donders Discussions 2012, Nijmegen, The Netherlands.
Abstract
When processing visual objects, we integrate visual, linguistic and spatial information to form an episodic trace. Re-activating one aspect of the episodic trace of an object re-activates the entire bundle, making all integrated information available. Using the blank screen paradigm [1], researchers observed that, upon processing spoken linguistic input, participants tended to make eye movements on a blank screen, fixating locations that had previously been occupied by objects mentioned in, or related to, the linguistic utterance. Ferreira and colleagues [2] suggested that 'looking at nothing' facilitates memory retrieval. However, this claim lacks convincing empirical support. In Experiment 1, Dutch participants looked at four-object displays. Three objects were related to a spoken target word: given the target word 'beker' (beaker), the display featured a phonological competitor (a bear), a shape competitor (a bobbin), a semantic competitor (a fork), and an unrelated distractor (an umbrella). Participants were asked to name the objects as fast as possible. Subsequently, the objects disappeared. Participants fixated the center of the screen and listened to the target word. They had to carry out either a semantic judgment task (indicating the position of the object that was semantically related to the target) or a visual shape similarity judgment (indicating the position of the object that was similar in shape to the target). In both conditions, we observed that participants re-fixated the empty target location before responding. The set-up of Experiment 2 was identical, except that we asked participants to keep fixating the center of the screen while listening to the spoken word and responding. Performance accuracy was significantly lower in Experiment 2 than in Experiment 1. The results indicate that memory retrieval for objects is impaired when participants are not allowed to look at relevant, though empty, locations. [1] Altmann, G. (2004). Language-mediated eye movements in the absence of a visual world: The 'blank screen paradigm'. Cognition, 93(2), B79-B87. [2] Ferreira, F., Apel, J., & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12(11), 405-410.
-
Konopka, A. E., Van de Velde, M., & Meyer, A. S. (2012). Mapping “easy” and “hard” messages onto language: Conceptual and structural variables jointly affect the timecourse of sentence formulation. Poster presented at the 18th Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.
Abstract
Sentence formulation requires mapping pre-verbal messages onto linguistic structures. This message-to-language mapping is often evaluated in eye-tracking tasks where speakers describe pictured events (The dog chased the mailman). Speakers can begin sentence formulation by quickly selecting the first-fixated character as the sentential starting point (lexical incrementality), or by generating a rudimentary sentence plan based on their construal of the event gist before selecting a starting point (hierarchical incrementality; Kuchinsky & Bock, 2010). Within 200 ms of picture onset, lexical incrementality predicts fast divergence of fixations to the two characters, whereas hierarchical incrementality predicts slower divergence.
-
Lesage, E., Morgan, B., Olson, A., Meyer, A. S., & Miall, R. (2012). Disruption of right cerebellum with rTMS blocks predictive language processing. Poster presented at the 42nd Annual Meeting of the Society for Neuroscience [Neuroscience 2012], Poster #379.07/UU5, New Orleans, LA.
Abstract
Much evidence demonstrates cerebellar involvement in language [1], but a theoretical framework for its precise role is lacking. An influential model of cerebellar motor control ascribes the cerebellum a predictive role [2]. It has been argued that cerebellar nonmotor regions perform computations similar to those of motor regions, and that both are involved in online prediction [2]. We tested this hypothesis by administering repetitive transcranial magnetic stimulation (rTMS) to the right cerebellum, a region implicated in language [3], during a predictive language task.
Methods: Visual World task [4]: Participants' eye movements were recorded while they listened to sentences and looked at a computer display of an agent and 4 objects, one of which (the target) was mentioned in the sentence. In the Prediction condition the target could be predicted on the basis of the verb; on Control trials it could not. We hypothesised that rTMS to the right cerebellum should make target fixation slower in the Prediction condition, but not in the Control condition. TMS protocol: TMS was delivered between two task blocks. In the cerebellar rTMS group (n = 22) the stimulation site was 1 cm down and 3 cm right of the inion. Participants received 10 min of 1 Hz rTMS. In addition, we tested two control groups. In the vertex rTMS group (n = 21), rTMS was applied at the same intensity, duration and frequency as in the cerebellar rTMS group, but over the vertex. In the no-stimulation group (n = 22) the coil was placed over the cerebellar stimulation site but no pulses were delivered.
Results: As hypothesised, participants in the cerebellar rTMS group took longer to fixate the target after TMS in the Prediction condition but not in the Control condition (Block-by-Condition interaction: F(1,21) = 8.848, p = 0.007). This interaction was not found in either the vertex rTMS group (F(1,20) = 0.064, p = 0.802) or the no-stimulation group (F(1,21) = 2.461, p = 0.132).
Conclusions: rTMS to the right cerebellum selectively affects linguistic prediction. These results provide additional evidence that the cerebellum plays a role in language, and they support theoretical accounts on which the cerebellum contributes to nonmotor functions, as it does to motor functions, by online prediction. [1] Strick et al. (2009). Cerebellum and nonmotor function. Annu Rev Neurosci, 32, 413-434. [2] Miall et al. (1993). Is the cerebellum a Smith predictor? J Mot Behav, 25, 203-216. [3] Mariën et al. (2001). The lateralised linguistic cerebellum: A review and a new hypothesis. Brain and Language, 79, 580-600. [4] Altmann & Kamide (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73, 247-264.
-
Meyer, A. S. (2012). What's in it for me? Applying adult speech production models to young learners. Talk presented at a workshop at the University of Leiden. Leiden, The Netherlands. 2012-12.
-
Moers, C., Meyer, A. S., & Janse, E. (2012). Effects of transitional probabilities on word durations in read speech of younger & older speakers. Talk presented at the Workshop Fluent Speech: Combining Cognitive and Educational Approaches, Utrecht Institute of Linguistics. Utrecht, The Netherlands. 2012-11-12 - 2012-11-13.
-
Reifegerste, J., & Meyer, A. S. (2012). The influence of age on the mental representation of polymorphemic words in Dutch. Talk presented at the Conference on Morphological Complexity. London, UK. 2012-01-13 - 2012-01-15.
-
Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). Object shape representations in the contents of predictions for upcoming words. Talk presented at Psycholinguistics in Flanders [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
-
Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). The content of predictions: Involvement of object shape representations in the anticipation of upcoming words. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
-
Rommers, J., Meyer, A. S., & Huettig, F. (2012). Predicting upcoming meaning involves specific contents and domain-general mechanisms. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.
Abstract
In sentence comprehension, readers and listeners often anticipate upcoming information (e.g., Altmann & Kamide, 1999). We investigated two aspects of this process, namely 1) what is pre-activated when anticipating an upcoming word (the contents of predictions), and 2) which cognitive mechanisms are involved. The contents of predictions at the level of meaning could be restricted to functional semantic attributes (e.g., edibility; Altmann & Kamide, 1999). However, when words are processed other types of information can also be activated, such as object shape representations. It is unknown whether this type of information is already activated when upcoming words are predicted. Forty-five adult participants listened to predictable words in sentence contexts (e.g., "In 1969 Neil Armstrong was the first man to set foot on the moon.") while looking at visual displays of four objects. Their eye movements were recorded. There were three conditions: target present (e.g., a moon and three distractor objects that were unrelated to the predictable word in terms of semantics, shape, and phonology), shape competitor (e.g., a tomato and three unrelated distractors), and distractors only (e.g., rice and three other unrelated objects). Across lists, the same pictures and sentences were used in the different conditions. We found that participants already showed a significant bias for the target object (moon) over unrelated distractors several seconds before the target was mentioned, demonstrating that they were predicting. Importantly, there was also a smaller but significant shape competitor (tomato) preference starting at about a second before critical word onset, consistent with predictions involving the referent’s shape. The mechanisms of predictions could be specific to language tasks, or language could use processing principles that are also used in other domains of cognition. 
We investigated whether performance in non-linguistic prediction is related to prediction in language processing, taking an individual-differences approach. In addition to the language processing task, the participants performed a simple cueing task (after Posner, Nissen, & Ogden, 1978). They pressed one of two buttons (left/right) to indicate the location of an X symbol on the screen. On half of the trials, the X was preceded by a neutral cue (+). On the other half, an arrow cue pointing left (<) or right (>) indicated the upcoming X's location with 80% validity (i.e., the arrow cue was correct 80% of the time). The SOA between cue and target was 500 ms. Prediction was quantified as the mean response latency difference between the neutral and valid conditions. This measure correlated positively with individual participants' anticipatory target and shape competitor preferences (r = .27 and r = .45, respectively), and was a significant predictor of anticipatory looks in linear mixed-effects regression models of the data. Participants who showed more facilitation from the arrow cues predicted to a higher degree in the linguistic task. This suggests that prediction in language processing may use mechanisms that are also used in other domains of cognition. References: Altmann, G. T. M., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247-264. Posner, M. I., Nissen, M. J., & Ogden, W. C. (1978). Attended and unattended processing modes: The role of set for spatial location. In H. L. Pick & I. J. Saltzman (Eds.), Modes of perceiving and processing information. Hillsdale, NJ: Lawrence Erlbaum Associates.
-
Sjerps, M. J., & Meyer, A. S. (2012). Variation in cognitive demands across turn-taking. Poster presented at the 7th International Workshop on Language Production (IWOLP 2012), New York, United States.
-
Van de Velde, M., Konopka, A. E., & Meyer, A. S. (2012). Relative clause processing: Linking clause frequency and reading experience. Poster presented at the 11th Psycholinguistics in Flanders Conference [PiF 2012], Nijmegen, The Netherlands.
-
Veenstra, A., Acheson, D. J., Bock, K., & Meyer, A. S. (2012). Conceptual and grammatical factors in the production of subject-verb agreement. Poster presented at the 7th International Workshop on Language Production (IWOLP 2012), New York, United States.
-
Veenstra, A., Acheson, D. J., & Meyer, A. S. (2012). Conceptual and grammatical factors in the production of subject-verb agreement. Talk presented at The 11th edition of the Psycholinguistics in Flanders conference (PiF). Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
-
Veenstra, A., Acheson, D. J., & Meyer, A. S. (2012). Life after the spoken preamble completion paradigm. Talk presented at the 33rd TABU Dag. Groningen, The Netherlands. 2012-06-18 - 2012-06-19.