Alday, P. M., & Meyer, A. S. (2019). Conversation as a competitive sport. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2019). Moscow, Russia. 2019-09-06 - 2019-09-08.
Bartolozzi, F., Jongman, S. R., & Meyer, A. S. (2019). Divided attention from speech-planning does not eliminate repetition priming from spoken words: Evidence from a dual-task paradigm. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
Brehm, L., & Meyer, A. S. (2019). Coordinating speech in conversation relies on expectations of timing and content. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
Favier, S., Meyer, A. S., & Huettig, F. (2019). Does literacy predict individual differences in syntactic processing? Talk presented at the International Workshop on Literacy and Writing Systems: Cultural, Neuropsychological and Psycholinguistic Perspectives. Haifa, Israel. 2019-02-18 - 2019-02-20.
Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
De Heer Kloots, M., Raviv, L., & Meyer, A. S. (2019). Memory and generalization: How do group size, structure and learnability relate in lab-evolved artificial languages? Talk presented at the Culture Conference 2019: Communication in Culture. Stirling, UK. 2019-07-01 - 2019-07-02.
Hintz, F., Jongman, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2019). Assessing individual differences in language processing: A novel research tool. Talk presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019). Tenerife, Spain. 2019-09-25 - 2019-09-28.
Abstract
Individual differences in language processing are prevalent in our daily lives. However, for decades, psycholinguistic research has largely ignored variation in the normal range of abilities. Recently, scientists have begun to acknowledge the importance of inter-individual variability for a comprehensive characterization of the language system. In spite of this change of attitude, empirical research on individual differences is still sparse, which is in part due to the lack of a suitable research tool. Here, we present a novel battery of behavioral tests for assessing individual differences in language skills in younger adults. The Dutch prototype comprises 29 subtests and assesses many aspects of language knowledge (grammar and vocabulary), linguistic processing skills (word and sentence level) and general cognitive abilities involved in using language (e.g., working memory, IQ). Using the battery, researchers can determine performance profiles for individuals and link them to neurobiological or genetic data.
Kaufeld, G., Bosker, H. R., Alday, P. M., Meyer, A. S., & Martin, A. E. (2019). A timescale-specific hierarchy in cortical oscillations during spoken language comprehension. Poster presented at Language and Music in Cognition: Integrated Approaches to Cognitive Systems (Spring School 2019), Cologne, Germany.
Kaufeld, G., Bosker, H. R., Alday, P. M., Meyer, A. S., & Martin, A. E. (2019). Structure and meaning entrain neural oscillations: A timescale-specific hierarchy. Poster presented at the 26th Annual Meeting of the Cognitive Neuroscience Society (CNS 2019), San Francisco, CA, USA.
Meyer, A. S. (2019). A cognitive psychologist’s view of conversation. Talk presented at the Institute of Language, Cognition, and the Brain. Aix Marseille, France. 2019-04-26.
Meyer, A. S., & Jongman, S. R. (2019). Why conversations are easy to hold and hard to study [keynote]. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2019). Moscow, Russia. 2019-09-06 - 2019-09-08.
Meyer, A. S. (2019). Towards processing theories of conversation. Talk presented at the Leiden University Centre for Linguistics. Leiden, The Netherlands. 2019-06-07.
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Input variability promotes the emergence of linguistic structure. Poster presented at the Inaugural workshop of the Center for the Interdisciplinary Study of Language Evolution (ISLE), Zürich, Switzerland.
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Cognitive Science Department Colloquium Series, Haifa University. Haifa, Israel. 2019-04-07.
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Language, Memory, and Attention group, Cognitive Department Colloquium Series, Royal Holloway, University of London. London, UK. 2019-06-20.
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Psychology Department, Hebrew University of Jerusalem. Jerusalem, Israel. 2019-04-04.
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Bosch, L. t. (2019). The speech production system is reconfigured to change speaking rate. Poster presented at the 3rd Phonetics and Phonology in Europe conference (PaPE 2019), Lecce, Italy.
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Bosch, L. t. (2019). The speech production system is reconfigured to change speaking rate. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
Abstract
It is evident that speakers can freely vary stylistic features of their speech, such as speech rate, but how they accomplish this has hardly been studied, let alone implemented in a formal model of speech production. Much as in walking and running, where qualitatively different gaits are required to cover the gamut of different speeds, we might predict there to be multiple qualitatively distinct configurations, or ‘gaits’, in the speech planning system that speakers must switch between to alter their speaking rate or style. Alternatively, control might involve continuous modulation of a single ‘gait’. We investigated these possibilities by simulating a connectionist computational model that mimics the temporal characteristics of observed speech. Different ‘regimes’ (combinations of parameter settings) can be engaged to achieve different speaking rates.
The model was trained separately for each speaking rate by an evolutionary optimisation algorithm. The training identified parameter values that allowed the model to best approximate the syllable duration distributions characteristic of each speaking rate.
In a single-gait system, the regimes used to achieve fast and slow speech are qualitatively similar but quantitatively different. In parameter space, they would be arranged along a straight line, with different points along this axis corresponding to different speaking rates. In a multiple-gait system, this linearity would be missing. Instead, the arrangement of the regimes would be triangular, with no obvious relationship between the regions associated with each gait, and an abrupt shift in parameter values would be required to move from speeds associated with ‘walk-speaking’ to ‘run-speaking’.
Our model achieved good fits for all three speaking rates. In parameter space, the arrangement of the parameter settings selected for the different speaking rates is non-axial, suggesting that ‘gaits’ are present in the speech planning system.
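The training scheme the abstract describes, a separate evolutionary fit of model parameters for each speaking rate, can be sketched as follows. Everything in this sketch is a hypothetical stand-in: the real study fitted a connectionist model to syllable duration distributions, whereas here a trivial surrogate model and made-up target durations merely illustrate the fitting procedure.

```python
import random

def model_mean_duration(params):
    # Hypothetical surrogate for the speech-planning model: maps a
    # parameter vector (a "regime") to a mean syllable duration in ms.
    base, scale = params
    return base + 100.0 * scale

def fitness(params, target_ms):
    # Negative absolute error against the target duration: higher is better.
    return -abs(model_mean_duration(params) - target_ms)

def evolve(target_ms, generations=200, pop_size=30, seed=0):
    # Simple elitist evolutionary optimisation: keep the better half of the
    # population and refill it with mutated copies of random survivors.
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 300), rng.uniform(0, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target_ms), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in parents:
            base, scale = rng.choice(parents)
            children.append((base + rng.gauss(0, 5), scale + rng.gauss(0, 0.1)))
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, target_ms))

# One independent fit ("regime") per speaking rate, mirroring the
# per-rate training in the abstract; target durations are invented.
regimes = {rate: evolve(target) for rate, target in
           [("fast", 150.0), ("medium", 200.0), ("slow", 280.0)]}
```

Comparing where the three fitted regimes land in parameter space (collinear versus not) is the kind of analysis the abstract uses to argue for distinct ‘gaits’.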
San Jose, A., Roelofs, A., & Meyer, A. S. (2019). Lapses of attention explain the distributional dynamics of semantic interference in word production: Evidence from computational simulations. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
Van Paridon, J., Roelofs, A., & Meyer, A. S. (2019). Contextual priming in shadowing and simultaneous translation. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
Wolf, M. C., Smith, A. C., Rowland, C. F., & Meyer, A. S. (2019). Effects of modality on learning novel word - picture associations. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2019-01-03 - 2019-01-04.
Abstract
It is unknown whether modality affects the efficiency with which we learn novel word forms and their meanings. In this study, 60 participants were trained on 24 pseudowords, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained when presented amongst foils. Word forms were presented in either their written or spoken form, with exposure to the written form equal to the speech duration of the spoken form. The between-subjects design generated four participant groups: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. Our results show a written training advantage: participants trained on written words were more accurate on the matching task. An ongoing follow-up experiment tests whether the written advantage is caused by additional time with the full word form, given that words can be read faster than the time taken for the spoken form to unfold. To test this, in training, written words were presented with sufficient time for participants to read them, yet at most half the duration of the spoken form in Experiment 1.
Wolf, M. C., Smith, A. C., Rowland, C. F., & Meyer, A. S. (2019). Modality effects in novel picture-word form associations. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
Abstract
It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains these contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword-novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments, word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1, the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ.
Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from written and spoken materials.
Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. Talk presented at the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019). Montreal, Canada. 2019-07-24 - 2019-07-27.
Abstract
It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains these contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword-novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments, word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1, the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ.
Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from written and spoken materials.
Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Naming pictures slowly facilitates memory for their names. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
Abstract
Studies on the generation effect have found that coming up with words, compared to reading them, improves memory. However, because these studies used words at both study and test, it is unclear whether generation affects visual or conceptual/lexical representations. Here, participants named pictures after hearing the picture name (no-generation condition), backward speech, or an unrelated word (easy and harder generation conditions). We ruled out effects at the visual level by testing participants’ recognition memory on the written names of the pictures that had been named earlier. We also assessed the effect of processing time during generation on memory. In the recognition memory test, participants were more accurate in the generation conditions than in the no-generation condition. They were also more accurate for words that took longer to retrieve, but only when generation was required. This work shows that generation affects conceptual/lexical representations and informs our understanding of the relationship between language and memory.
Araújo, S., Huettig, F., & Meyer, A. S. (2016). What's the nature of the deficit underlying impaired naming? An eye-tracking study with dyslexic readers. Talk presented at IWORDD - International Workshop on Reading and Developmental Dyslexia. Bilbao, Spain. 2016-05-05 - 2016-05-07.
Abstract
Serial naming deficits have been identified as core symptoms of developmental dyslexia. A prominent hypothesis is that naming delays are due to inefficient phonological encoding, yet the exact nature of this underlying impairment remains largely underspecified. Here we used recordings of eye movements and word onset latencies to examine at which processing level the dyslexic naming deficit emerges: at an early stage of lexical encoding, or later, at the level of phonetic or motor planning. We tested 23 dyslexic and 25 control adult readers on a serial object naming task with 30 items and an analogous reading task, in which phonological neighborhood density and word frequency were manipulated. Results showed that both word properties influenced early stages of phonological activation (first fixation and first-pass duration) equally in both groups of participants. Moreover, in the control group any difficulty appeared to be resolved early in the reading process, whereas for dyslexic readers a processing disadvantage for low-frequency words and for words with sparse neighborhoods also emerged in a measure that includes late stages of output planning (eye-voice span). Thus, our findings suggest suboptimal phonetic and/or articulatory planning in dyslexia.
Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2016). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2016). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Talk presented at Psycholinguistics in Flanders (PiF 2016). Antwerp, Belgium. 2016-05-25 - 2016-05-27.
Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment reflects temporal predictions guiding speech comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Speech segmentation requires flexible mechanisms to remain robust to features such as speech rate and pronunciation. Recent hypotheses suggest that low-frequency neural oscillations entrain to ongoing syllabic and phrasal rates, and that neural entrainment provides a speech-rate invariant means to discretize linguistic tokens from the acoustic signal. How this mechanism functionally operates remains unclear. Here, we test the hypothesis that neural entrainment reflects temporal predictive mechanisms. It implies that neural entrainment is built on the dynamics of past speech information: the brain would internalize the rhythm of preceding speech to parse the ongoing acoustic signal at optimal time points. A direct prediction is that ongoing neural oscillatory activity should match the rate of preceding speech even if the stimulation changes, for instance when the speech rate suddenly increases or decreases. Crucially, the persistence of neural entrainment to past speech rate should modulate speech perception. We performed an MEG experiment in which native Dutch speakers listened to sentences with varying speech rates. The beginning of the sentence (carrier window) was presented at either a fast or a slow speech rate, while the last three words (target window) were presented at an intermediate rate across trials. Participants had to report their perception of the last word of the sentence, which was ambiguous with regard to its vowel duration (short vowel /ɑ/ – long vowel /aː/ contrast). MEG data were analyzed in source space using beamformer methods. Consistent with previous behavioral reports, the perception of the ambiguous target word was influenced by the past speech rate; participants reported more /aː/ percepts after a fast speech rate, and more /ɑ/ after a slow speech rate. During the carrier window, neural oscillations efficiently tracked the dynamics of the speech envelope.
During the target window, we observed oscillatory activity that corresponded in frequency to the preceding speech rate. Traces of neural entrainment to the past speech rate were observed in medial prefrontal areas. Right superior temporal cortex also showed persisting oscillatory activity that correlated with the observed perceptual biases: participants whose perception was more influenced by the manipulation in speech rate also showed stronger remaining neural oscillatory patterns. The results show that neural entrainment persists after rhythmic stimulation ends. The findings further provide empirical support for oscillatory models of speech processing, suggesting that neural oscillations actively encode temporal predictions for speech comprehension.
Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment to speech rhythms reflects temporal predictions and influences word comprehension. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2016). The contribution of vocabulary size to language processing: Evidence from lexical decision and picture-word interference. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
Previous research indicates that general cognitive abilities, such as attention or executive control, contribute to language processing (Hartsuiker & Barkhuysen, 2006; Jongman et al., 2014; Shao et al., 2013). Potential effects of language-specific abilities, such as vocabulary, on language processing in adult native speakers have been examined less extensively. Goals: a) develop and assess measures of vocabulary size in Dutch native speakers, and b) investigate the relationship between individual differences in vocabulary and language processing.
Maslowski, M., Bosker, H. R., & Meyer, A. S. (2016). Slow speech can sound fast: How the speech rate of one talker affects perception of another talker. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-24 - 2016-11-25.
Maslowski, M., Bosker, H. R., & Meyer, A. S. (2016). Slow speech can sound fast: How the speech rate of one talker has a contrastive effect on the perception of another talker. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
Listeners are continuously exposed to a broad range of speech rates. Earlier work has shown that listeners perceive phonetic category boundaries relative to contextual speech rate. It has been suggested that this process of speech rate normalization occurs across talker changes. This would predict that the speech rate of talker A influences the perceived rate of another talker B. We assessed this hypothesis by testing effects of speech rate on the perception of the Dutch vowel continuum /ɑ/-/aː/. One participant group was exposed to 'neutral' speech from talker A intermixed with fast speech from talker B. Another group listened to the same speech from talker A, but to slow speech from talker B. We observed a difference in the perception of talker A depending on the speech rate of talker B: A's 'neutral' speech was perceived as slow when B spoke faster. These findings corroborate the idea that speech rate normalization occurs across talkers, but they challenge the assumption that listeners average over speech rates from multiple talkers. Instead, they suggest that listeners contrast talker-specific rates.
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2016). Slow speech can sound fast: How the speech rate of one talker has a contrastive effect on the perception of another talker. Talk presented at MPI Proudly Presents. Nijmegen, The Netherlands. 2016-06-01.
McQueen, J. M., & Meyer, A. S. (2016). Cognitive architectures [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
Meyer, A. S. (2016). Utterance planning and resource allocation in dialogue. Talk presented at the Psychology Department, University of Geneva. Geneva, Switzerland. 2016-05-09.
Meyer, A. S. (2016). Utterance planning and resource allocation in dialogue. Talk presented at the International Workshop on Language Production (IWLP 2016). La Jolla, CA, USA. 2016-07-25 - 2016-07-27.
Abstract
Natural conversations are characterized by smooth transitions of turns between interlocutors. For instance, speakers often respond to questions or requests within half a second. As planning the first word of an utterance can easily take a second or more, this suggests that utterance planning often overlaps with listening to the preceding speaker's utterance. A specific proposal concerning the temporal coordination of listening and speech planning has recently been made by Levinson and Torreira (2016, Frontiers in Psychology; Levinson, 2016, Trends in Cognitive Sciences). They propose that speakers initiate their speech planning as soon as they have understood the speech act and gist of the preceding utterance. However, direct evidence for simultaneous listening and speech planning is scarce. I will first review studies demonstrating that both comprehending spoken utterances and planning them require processing capacity and that these processes can substantially interfere with each other. These data suggest that concurrent speech planning and listening should be cognitively quite challenging. In the second part of the talk I will turn to studies examining directly when utterance planning in dialogue begins. These studies indicate that (regrettably) there are probably no hard-and-fast rules for the temporal coordination of listening and speech planning. I will argue that (regrettably again) we need models that are far more complex than Levinson and Torreira's proposal to understand how listening and speech planning are coordinated in conversation.
Weber, K., Meyer, A. S., & Hagoort, P. (2016). The acquisition of verb-argument and verb-noun category biases in a novel word learning task. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
We show that language users readily learn the probabilities of novel lexical cues to syntactic information (verbs biasing towards a prepositional object dative vs. double-object dative and words biasing towards a verb vs. noun reading) and use these biases in a subsequent production task. In a one-hour exposure phase participants read 12 novel lexical items, embedded in 30 sentence contexts each, in their native language. The items were either strongly (100%) biased towards one grammatical frame or syntactic category assignment or unbiased (50%). The next day participants produced sentences with the newly learned lexical items. They were given the sentence beginning up to the novel lexical item. Their output showed that they were highly sensitive to the biases introduced in the exposure phase.
Given this rapid learning and use of novel lexical cues, this paradigm opens up new avenues to test sentence processing theories. Thus, with close control on the biases participants are acquiring, competition between different frames or category assignments can be investigated using reaction times or neuroimaging methods.
Generally, these results show that language users adapt to the statistics of the linguistic input, even to subtle lexically-driven cues to syntactic information.