Antje Meyer

Publications

  • Araújo, S., Huettig, F., & Meyer, A. S. (2021). What underlies the deficit in rapid automatized naming (RAN) in adults with dyslexia? Evidence from eye movements. Scientific Studies of Reading, 25(6), 534-549. doi:10.1080/10888438.2020.1867863.

    Abstract

    This eye-tracking study explored how phonological encoding and speech production planning for successive words are coordinated in adult readers with dyslexia (N = 22) and control readers (N = 25) during rapid automatized naming (RAN). Using an object-RAN task, we orthogonally manipulated the word-form frequency and phonological neighborhood density of the object names and assessed the effects on speech and eye movements and their temporal coordination. In both groups, there was a significant interaction between word frequency and neighborhood density: shorter fixations for dense than for sparse neighborhoods were observed for low-, but not for high-frequency words. This finding does not suggest a specific difficulty in lexical phonological access in dyslexia. However, in readers with dyslexia only, these lexical effects percolated to the late processing stages, indicated by longer offset eye-speech lags. We close by discussing potential reasons for this finding, including suboptimal specification of phonological representations and deficits in attention control or in multi-item coordination.
  • Bartolozzi, F., Jongman, S. R., & Meyer, A. S. (2021). Concurrent speech planning does not eliminate repetition priming from spoken words: Evidence from linguistic dual-tasking. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(3), 466-480. doi:10.1037/xlm0000944.

    Abstract

    In conversation, production and comprehension processes may overlap, causing interference. In 3 experiments, we investigated whether repetition priming can work as a supporting device, reducing costs associated with linguistic dual-tasking. Experiment 1 established the rate of decay of repetition priming from spoken words to picture naming for primes embedded in sentences. Experiments 2 and 3 investigated whether the rate of decay was faster when participants comprehended the prime while planning to name unrelated pictures. In all experiments, the primed picture followed the sentences featuring the prime on the same trial, or 10 or 50 trials later. The results of the 3 experiments were strikingly similar: robust repetition priming was observed when the primed picture followed the prime sentence. Thus, repetition priming was observed even when the primes were processed while the participants prepared an unrelated spoken utterance. Priming might, therefore, support utterance planning in conversation, where speakers routinely listen while planning their utterances.

  • Brehm, L., & Meyer, A. S. (2021). Planning when to say: Dissociating cue use in utterance initiation using cross-validation. Journal of Experimental Psychology: General, 150(9), 1772-1799. doi:10.1037/xge0001012.

    Abstract

    In conversation, turns follow each other with minimal gaps. To achieve this, speakers must launch their utterances shortly before the predicted end of the partner’s turn. We examined the relative importance of cues to partner utterance content and partner utterance length for launching coordinated speech. In three experiments, Dutch adult participants had to produce prepared utterances (e.g., vier, “four”) immediately after a recording of a confederate’s utterance (zeven, “seven”). To assess the role of corepresenting content versus attending to speech cues in launching coordinated utterances, we varied whether the participant could see the stimulus being named by the confederate, the confederate prompt’s length, and whether within a block of trials, the confederate prompt’s length was predictable. We measured how these factors affected the gap between turns and the participants’ allocation of visual attention while preparing to speak. Using a machine-learning technique, model selection by k-fold cross-validation, we found that gaps were most strongly predicted by cues from the confederate speech signal, though some benefit was also conferred by seeing the confederate’s stimulus. This shows that, at least in a simple laboratory task, speakers rely more on cues in the partner’s speech than corepresentation of their utterance content.
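    The model-selection logic described above (comparing predictor sets by their out-of-sample prediction error across k folds) can be sketched with simulated data. This is a minimal illustration only; the variable names, effect sizes, and data below are hypothetical and not taken from the study.

    ```python
    # Sketch of model selection by k-fold cross-validation: compare a
    # "speech cue only" model against a "speech + content" model on
    # simulated turn-gap data. All variables here are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    speech_cue = rng.normal(size=n)    # hypothetical cue from the partner's speech
    content_cue = rng.normal(size=n)   # hypothetical co-represented content cue
    gap = 2.0 * speech_cue + 0.3 * content_cue + rng.normal(size=n)

    def cv_mse(X, y, k=5):
        """Mean squared prediction error over k folds (ordinary least squares)."""
        folds = np.array_split(np.arange(len(y)), k)
        errors = []
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
            beta, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
            pred = X[test_idx] @ beta
            errors.append(np.mean((y[test_idx] - pred) ** 2))
        return float(np.mean(errors))

    ones = np.ones((n, 1))
    X_speech = np.column_stack([ones, speech_cue])
    X_both = np.column_stack([ones, speech_cue, content_cue])

    mse_speech = cv_mse(X_speech, gap)
    mse_both = cv_mse(X_both, gap)
    print(mse_speech, mse_both)  # lower cross-validated error = better-supported model
    ```

    Because the folds score each model on data it was not fitted to, extra predictors are retained only when they improve prediction, which is what allows the relative importance of the two cue types to be dissociated.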
  • Decuyper, C., Brysbaert, M., Brodeur, M. B., & Meyer, A. S. (2021). Bank of Standardized Stimuli (BOSS): Dutch names for 1400 photographs. Journal of Cognition, 4(1): 33. doi:10.5334/joc.180.

    Abstract

    We present written naming norms from 153 young adult Dutch speakers for 1397 photographs (the BOSS set; see Brodeur, Dionne-Dostie, Montreuil, & Lepage, 2010; Brodeur, Guérard, & Bouras, 2014). From the norming study, we report the preferred (modal) name, alternative names, name agreement, and average object agreement. In addition, the database includes Zipf frequency, word prevalence, and age of acquisition for the modal picture names collected. Furthermore, we describe a subset of 359 photographs with very good name agreement and a subset of 35 photos with two common names. These sets may be particularly valuable for designing experiments. Though the participants typed the object names, comparisons with other datasets indicate that the collected norms are valuable for spoken naming studies as well.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

  • Healthy Brain Study Consortium, Aarts, E., Akkerman, A., Altgassen, M., Bartels, R., Beckers, D., Bevelander, K., Bijleveld, E., Blaney Davidson, E., Boleij, A., Bralten, J., Cillessen, T., Claassen, J., Cools, R., Cornelissen, I., Dresler, M., Eijsvogels, T., Faber, M., Fernández, G., Figner, B., Fritsche, M., Füllbrunn, S., Gayet, S., Van Gelder, M. M. H. J., Van Gerven, M., Geurts, S., Greven, C. U., Groefsema, M., Haak, K., Hagoort, P., Hartman, Y., Van der Heijden, B., Hermans, E., Heuvelmans, V., Hintz, F., Den Hollander, J., Hulsman, A. M., Idesis, S., Jaeger, M., Janse, E., Janzing, J., Kessels, R. P. C., Karremans, J. C., De Kleijn, W., Klein, M., Klumpers, F., Kohn, N., Korzilius, H., Krahmer, B., De Lange, F., Van Leeuwen, J., Liu, H., Luijten, M., Manders, P., Manevska, K., Marques, J. P., Matthews, J., McQueen, J. M., Medendorp, P., Melis, R., Meyer, A. S., Oosterman, J., Overbeek, L., Peelen, M., Popma, J., Postma, G., Roelofs, K., Van Rossenberg, Y. G. T., Schaap, G., Scheepers, P., Selen, L., Starren, M., Swinkels, D. W., Tendolkar, I., Thijssen, D., Timmerman, H., Tutunji, R., Tuladhar, A., Veling, H., Verhagen, M., Verkroost, J., Vink, J., Vriezekolk, V., Vrijsen, J., Vyrastekova, J., Van der Wal, S., Willems, R. M., & Willemsen, A. (2021). Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One, 16(12): e0260952. doi:10.1371/journal.pone.0260952.

    Abstract

    The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods.

    https://www.trialregister.nl/trial/7955

  • Holler, J., Alday, P. M., Decuyper, C., Geiger, M., Kendrick, K. H., & Meyer, A. S. (2021). Competition reduces response times in multiparty conversation. Frontiers in Psychology, 12: 693124. doi:10.3389/fpsyg.2021.693124.

    Abstract

    Natural conversations are characterized by short transition times between turns. This holds in particular for multi-party conversations. The short turn transitions in everyday conversations contrast sharply with the much longer speech onset latencies observed in laboratory studies where speakers respond to spoken utterances. There are many factors that facilitate speech production in conversational compared to laboratory settings. Here we highlight one of them, the impact of competition for turns. In multi-party conversations, speakers often compete for turns. In quantitative corpus analyses of multi-party conversation, the fastest response determines the recorded turn transition time. In contrast, in dyadic conversations such competition for turns is much less likely to arise, and in laboratory experiments with individual participants it does not arise at all. Therefore, all responses tend to be recorded. Thus, competition for turns may reduce the recorded mean turn transition times in multi-party conversations for a simple statistical reason: slow responses are not included in the means. We report two studies illustrating this point. We first report the results of simulations showing how much the response times in a laboratory experiment would be reduced if, for each trial, instead of recording all responses, only the fastest responses of several participants responding independently on the trial were recorded. We then present results from a quantitative corpus analysis comparing turn transition times in dyadic and triadic conversations. There was no significant group size effect in question-response transition times, where the present speaker often selects the next one, thus reducing competition between speakers. But, as predicted, triads showed shorter turn transition times than dyads for the remaining turn transitions, where competition for the floor was more likely to arise. Together, these data show that turn transition times in conversation should be interpreted in the context of group size, turn transition type, and social setting.
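    The statistical point in this abstract, that recording only the fastest of several independent responses lowers the recorded mean, can be shown with a minimal simulation. The latency distribution and parameters below are hypothetical, not the study's data.

    ```python
    # Minimal illustration: the mean of the fastest of k independent
    # responses is lower than the mean of a single response, purely
    # for statistical reasons (slow responses drop out of the record).
    # The lognormal latency distribution here is hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 10_000
    k = 3  # e.g., three potential next speakers in a triad

    # Simulated response latencies (ms) for k independent responders per trial
    latencies = rng.lognormal(mean=6.0, sigma=0.5, size=(n_trials, k))

    mean_single = latencies[:, 0].mean()         # one responder recorded (dyad-like)
    mean_fastest = latencies.min(axis=1).mean()  # fastest of k recorded (triad-like)

    print(round(mean_single), round(mean_fastest))
    ```

    No change in any individual's speed is needed: taking the minimum over k draws shifts the recorded distribution toward its fast tail, which is exactly the selection effect the corpus comparison tests.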
  • He, J., Meyer, A. S., Creemers, A., & Brehm, L. (2021). Conducting language production research online: A web-based study of semantic context and name agreement effects in multi-word production. Collabra: Psychology, 7(1): 29935. doi:10.1525/collabra.29935.

    Abstract

    Few web-based experiments have explored spoken language production, perhaps due to concerns of data quality, especially for measuring onset latencies. The present study highlights how speech production research can be done outside of the laboratory by measuring utterance durations and speech fluency in a multiple-object naming task when examining two effects related to lexical selection: semantic context and name agreement. A web-based modified blocked-cyclic naming paradigm was created, in which participants named a total of sixteen simultaneously presented pictures on each trial. The pictures were either four tokens from the same semantic category (homogeneous context), or four tokens from different semantic categories (heterogeneous context). Name agreement of the pictures was varied orthogonally (high, low). In addition to onset latency, five dependent variables were measured to index naming performance: accuracy, utterance duration, total pause time, the number of chunks (word groups pronounced without intervening pauses), and first chunk length. Bayesian analyses showed effects of semantic context and name agreement for some of the dependent measures, but no interaction. We discuss the methodological implications of the current study and make best practice recommendations for spoken language production research in an online environment.
  • He, J., Meyer, A. S., & Brehm, L. (2021). Concurrent listening affects speech planning and fluency: The roles of representational similarity and capacity limitation. Language, Cognition and Neuroscience, 36(10), 1258-1280. doi:10.1080/23273798.2021.1925130.

    Abstract

    In a novel continuous speaking-listening paradigm, we explored how speech planning was affected by concurrent listening. In Experiment 1, Dutch speakers named pictures with high versus low name agreement while ignoring Dutch speech, Chinese speech, or eight-talker babble. Both name agreement and type of auditory input influenced response timing and chunking, suggesting that representational similarity impacts lexical selection and the scope of advance planning in utterance generation. In Experiment 2, Dutch speakers named pictures with high or low name agreement while either ignoring Dutch words, or attending to them for a later memory test. Both name agreement and attention demand influenced response timing and chunking, suggesting that attention demand impacts lexical selection and the planned utterance units in each response. The study indicates that representational similarity and attention demand play important roles in linguistic dual-task interference, and the interference can be managed by adapting when and how to plan speech.

  • Raviv, L., De Heer Kloots, M., & Meyer, A. S. (2021). What makes a language easy to learn? A preregistered study on how systematic structure and community size affect language learnability. Cognition, 210: 104620. doi:10.1016/j.cognition.2021.104620.

    Abstract

    Cross-linguistic differences in morphological complexity could have important consequences for language learning. Specifically, it is often assumed that languages with more regular, compositional, and transparent grammars are easier to learn by both children and adults. Moreover, it has been shown that such grammars are more likely to evolve in bigger communities. Together, this suggests that some languages are acquired faster than others, and that this advantage can be traced back to community size and to the degree of systematicity in the language. However, the causal relationship between systematic linguistic structure and language learnability has not been formally tested, despite its potential importance for theories on language evolution, second language learning, and the origin of linguistic diversity. In this pre-registered study, we experimentally tested the effects of community size and systematic structure on adult language learning. We compared the acquisition of different yet comparable artificial languages that were created by big or small groups in a previous communication experiment, which varied in their degree of systematic linguistic structure. We asked (a) whether more structured languages were easier to learn; and (b) whether languages created by the bigger groups were easier to learn. We found that highly systematic languages were learned faster and more accurately by adults, but that the relationship between language learnability and linguistic structure was typically non-linear: high systematicity was advantageous for learning, but learners did not benefit from partly or semi-structured languages. Community size did not affect learnability: languages that evolved in big and small groups were equally learnable, and there was no additional advantage for languages created by bigger groups beyond their degree of systematic structure. Furthermore, our results suggested that predictability is an important advantage of systematic structure: participants who learned more structured languages were better at generalizing these languages to new, unfamiliar meanings, and different participants who learned the same more structured languages were more likely to produce similar labels. That is, systematic structure may allow speakers to converge effortlessly, such that strangers can immediately understand each other.
  • Reifegerste, J., Meyer, A. S., Zwitserlood, P., & Ullman, M. T. (2021). Aging affects steaks more than knives: Evidence that the processing of words related to motor skills is relatively spared in aging. Brain and Language, 218: 104941. doi:10.1016/j.bandl.2021.104941.

    Abstract

    Lexical-processing declines are a hallmark of aging. However, the extent of these declines may vary as a function of different factors. Motivated by findings from neurodegenerative diseases and healthy aging, we tested whether ‘motor-relatedness’ (the degree to which words are associated with particular human body movements) might moderate such declines. We investigated this question by examining data from three experiments. The experiments were carried out in different languages (Dutch, German, English) using different tasks (lexical decision, picture naming), and probed verbs and nouns, in all cases controlling for potentially confounding variables (e.g., frequency, age-of-acquisition, imageability). Whereas ‘non-motor words’ (e.g., steak) showed age-related performance decreases in all three experiments, ‘motor words’ (e.g., knife) yielded either smaller decreases (in one experiment) or no decreases (in two experiments). The findings suggest that motor-relatedness can attenuate or even prevent age-related lexical declines, perhaps due to the relative sparing of neural circuitry underlying such words.

  • San Jose, A., Roelofs, A., & Meyer, A. S. (2021). Modeling the distributional dynamics of attention and semantic interference in word production. Cognition, 211: 104636. doi:10.1016/j.cognition.2021.104636.

    Abstract

    In recent years, it has become clear that attention plays an important role in spoken word production. Some of this evidence comes from distributional analyses of reaction time (RT) in regular picture naming and picture-word interference. Yet we lack a mechanistic account of how the properties of RT distributions come to reflect attentional processes and how these processes may in turn modulate the amount of conflict between lexical representations. Here, we present a computational account according to which attentional lapses allow for existing conflict to build up unsupervised on a subset of trials, thus modulating the shape of the resulting RT distribution. Our process model resolves discrepancies between outcomes of previous studies on semantic interference. Moreover, the model's predictions were confirmed in a new experiment where participants' motivation to remain attentive determined the size and distributional locus of semantic interference in picture naming. We conclude that process modeling of RT distributions importantly improves our understanding of the interplay between attention and conflict in word production. Our model thus provides a framework for interpreting distributional analyses of RT data in picture naming tasks.
  • Wolf, M. C., Meyer, A. S., Rowland, C. F., & Hintz, F. (2021). The effects of input modality, word difficulty and reading experience on word recognition accuracy. Collabra: Psychology, 7(1): 24919. doi:10.1525/collabra.24919.

    Abstract

    Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. Thus, frequent reading might have effects on word recognition. In the present study, we investigated 1) whether input modality (spoken, written, or bimodal) has an effect on word recognition accuracy, 2) whether this modality effect interacts with word difficulty, 3) whether the interaction of word difficulty and reading experience impacts word recognition accuracy, and 4) whether this interaction is influenced by input modality. To do so, we re-analysed a dataset that was collected in the context of a vocabulary test development to assess in which modality test words should be presented. Participants had carried out a word recognition task, where non-words and words of varying difficulty were presented in auditory, visual and audio-visual modalities. In addition to this main experiment, participants had completed a receptive vocabulary and an author recognition test to measure their reading experience. Our re-analyses did not reveal evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or language experience. Word difficulty interacted with reading experience in that frequent readers were more accurate in recognizing difficult words than individuals who read less frequently. Practical implications are discussed.
  • Levelt, W. J. M., Meyer, A. S., & Roelofs, A. (2004). Relations of lexical access to neural implementation and syntactic encoding [author's response]. Behavioral and Brain Sciences, 27, 299-301. doi:10.1017/S0140525X04270078.

    Abstract

    How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). We also express some minor disagreements with respect to Pulvermüller’s interpretation of the “lemma” notion, and concerning his neural modeling of phonological code retrieval. Branigan & Pickering discuss important aspects of syntactic encoding, which was not the topic of the target article. We discuss their well-taken proposal that multiple syntactic frames for a single verb lemma are represented as independent nodes, which can be shared with other verbs, thus accounting for syntactic priming in speech production. We also discuss how, in principle, the alternative multiple-frame-multiple-lemma account can be tested empirically. The available evidence does not seem to support that account.
  • Meyer, A. S., Van der Meulen, F. F., & Brooks, A. (2004). Eye movements during speech planning: Talking about present and remembered objects. Visual Cognition, 11, 553-576. doi:10.1080/13506280344000248.

    Abstract

    Earlier work has shown that speakers naming several objects usually look at each of them before naming them (e.g., Meyer, Sleiderink, & Levelt, 1998). In the present study, participants saw pictures and described them in utterances such as "The chair next to the cross is brown", where the colour of the first object was mentioned after another object had been mentioned. In Experiment 1, we examined whether the speakers would look at the first object (the chair) only once, before naming the object, or twice (before naming the object and before naming its colour). In Experiment 2, we examined whether speakers about to name the colour of the object would look at the object region again when the colour or the entire object had been removed while they were looking elsewhere. We found that speakers usually looked at the target object again before naming its colour, even when the colour was not displayed any more. Speakers were much less likely to fixate upon the target region when the object had been removed from view. We propose that the object contours may serve as a memory cue supporting the retrieval of the associated colour information. The results show that a speaker's eye movements in a picture description task, far from being random, depend on the available visual information and the content and structure of the planned utterance.
  • Meyer, A. S. (2004). The use of eye tracking in studies of sentence generation. In J. M. Henderson, & F. Ferreira (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 191-212). Hove: Psychology Press.
  • Belke, E., & Meyer, A. S. (2002). Tracking the time course of multidimensional stimulus discrimination: Analyses of viewing patterns and processing times during "same"-"different" decisions. European Journal of Cognitive Psychology, 14(2), 237-266. doi:10.1080/09541440143000050.

    Abstract

    We investigated the time course of conjunctive "same"-"different" judgements for visually presented object pairs by means of combined reaction time and on-line eye movement measurements. The analyses of viewing patterns, viewing times, and reaction times showed that participants engaged in a parallel self-terminating search for differences. In addition, the results obtained for objects differing in only one dimension suggest that processing times may depend on the relative codability of the stimulus dimensions. The results are reviewed in a broader framework in view of higher-order processes. We propose that overspecifications of colour, often found in object descriptions, may have an "early" visual rather than a "late" linguistic origin. In a parallel assessment of the detection materials, participants overspecified the objects' colour substantially more often than their size. We argue that referential overspecifications of colour are largely attributable to mechanisms of visual discrimination.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (2002). A theory of lexical access in speech production. In G. T. Altmann (Ed.), Psycholinguistics: critical concepts in psychology (pp. 278-377). London: Routledge.
  • Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.

    Abstract

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
  • Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (2002). Exploring the time course of lexical access in language production: Picture word interference studies. In G. Altmann (Ed.), Psycholinguistics: Critical Concepts in Psychology [vol. 5] (pp. 168-191). London: Routledge.
