-
Araújo, S., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Effects of verb position on sentence planning. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2018). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
-
Favier, S., Meyer, A. S., & Huettig, F. (2018). Does literacy predict individual differences in the syntactic processing of spoken language? Poster presented at the 1st Workshop on Cognitive Science of Culture, Lisbon, Portugal.
-
Favier, S., Meyer, A. S., & Huettig, F. (2018). Does reading ability predict individual differences in spoken language syntactic processing? Poster presented at the International Meeting of the Psychonomic Society 2018, Amsterdam, The Netherlands.
-
Favier, S., Meyer, A. S., & Huettig, F. (2018). How does literacy influence syntactic processing in spoken language? Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.
-
Hintz, F., Jongman, S. R., McQueen, J. M., & Meyer, A. S. (2018). Individual differences in word production: Evidence from students with diverse educational backgrounds. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
-
Hintz, F., Jongman, S. R., Dijkhuis, M., Van 't Hoff, V., Damian, M., Schröder, S., Brysbaert, M., McQueen, J. M., & Meyer, A. S. (2018). STAIRS4WORDS: A new adaptive test for assessing receptive vocabulary size in English, Dutch, and German. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
-
Hintz, F., Jongman, S. R., McQueen, J. M., & Meyer, A. S. (2018). Verbal and non-verbal predictors of word comprehension and word production. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
-
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2018). Evidence for in-group biases in source memory for newly learned words. Poster presented at the International Conference on Learning and Memory (LearnMem 2018), Huntington Beach, CA, USA.
-
Jongman, S. R., Piai, V., & Meyer, A. S. (2018). Withholding speech: Does the EEG signal reflect planning for production or attention? Poster presented at the 31st Annual CUNY Conference on Human Sentence Processing, Davis, CA, USA.
-
Mainz, N., Smith, A. C., & Meyer, A. S. (2018). Individual differences in word learning - An exploratory study of adult native speakers. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2018-01-03 - 2018-01-05.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Do effects of habitual speech rate normalization on perception extend to self? Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.
Abstract
Listeners are known to use contextual speech rate in processing temporally ambiguous speech sounds. For instance, a fast adjacent speech context makes a vowel sound relatively long, whereas a slow context makes it sound relatively short (Reinisch & Sjerps, 2013). Besides the local contextual speech rate, listeners also track talker-specific habitual speech rates (Reinisch, 2016; Maslowski et al., in press). However, effects of one’s own speech rate on the perception of another talker’s speech are yet unexplored. Such effects are potentially important, given that, in dialogue, a listener’s own speech often constitutes the context for the interlocutor’s speech. Three experiments tested the contribution of self-produced speech on perception of the habitual speech rate of another talker. In Experiment 1, one group of participants was instructed to speak fast (high-rate group), whereas another group had to speak slowly (low-rate group; 16 participants per group). The two groups were compared on their perception of ambiguous Dutch /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech, whilst evaluating target vowels in neutral rate speech as before. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 with a new participant sample, who did not know the participants from the previous two experiments. Here, a group effect was found on perception of the neutral rate talker. This result replicates the finding of Maslowski et al. that habitual speech rates are perceived relative to each other (i.e., neutral rate sounds fast in the presence of a slower talker and vice versa), with naturally produced speech. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception and the link between production and perception in dialogue settings.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). How speech rate normalization affects lexical access. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2018). Berlin, Germany. 2018-09-06 - 2018-09-08.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Self-produced speech rate is processed differently from other talkers' rates. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
Abstract
Interlocutors perceive phonemic category boundaries relative to talkers’ produced speech rates. For instance, a temporally ambiguous vowel between Dutch short /A/ and long /a:/ sounds short (i.e., as /A/) in a slow speech context, but long in a fast context. Besides the local contextual speech rate, listeners also track talker-specific habitual speech rates (Maslowski et al., in press). However, it is yet unclear whether self-produced speech rate modulates perception of another talker’s habitual rate. Such effects are potentially important, given that, in dialogue, a listener’s own speech often constitutes the context for the interlocutor’s speech. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly (16 participants per group). The two groups were then compared on their perception of ambiguous Dutch /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech, whilst evaluating target vowels in neutral rate speech as before. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 with a new participant sample, who were unfamiliar with the participants from the previous two experiments. Here, a group effect was found on perception of the neutral rate talker. This result replicates the finding of Maslowski et al. that habitual speech rates are perceived relative to each other (i.e., neutral rate sounds fast in the presence of a slower talker and vice versa), with naturally produced speech. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the link between production and perception in dialogue.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Network structure and the cultural evolution of linguistic structure: An artificial language study. Talk presented at the Cultural Evolution Society Conference (CES 2018). Tempe, AZ, USA. 2018-10-22 - 2018-10-24.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Linguistics Department Colloquium Series. University of Arizona, Tucson, AZ, USA. 2018-10-26.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Language evolution seminar, Center for Language evolution, University of Edinburgh. Edinburgh, UK. 2018-08-21.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Cohn Institute for the History and Philosophy of Science, Tel Aviv University. Tel Aviv, Israel. 2018-12-23.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Department of Linguistics, Tel Aviv University. Tel Aviv, Israel. 2018-12-25.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Donders Discussions 2018. Nijmegen, The Netherlands. 2018-10-11.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. Talk presented at the 12th International Conference on the Evolution of Language (EVOLANG XII). Torun, Poland. 2018-04-15 - 2018-04-19.
-
Rodd, J., Bosker, H. R., Meyer, A. S., Ernestus, M., & Ten Bosch, L. (2018). How to speed up and slow down: Speaking rate control to the level of the syllable. Talk presented at the New Observations in Speech and Hearing seminar series, Institute of Phonetics and Speech Processing, LMU Munich. Munich, Germany.
-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2018). Run-speaking? Simulations of rate control in speech production. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2018). Running or speed-walking? Simulations of speech production at different rates. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
Abstract
That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. The effortful experience of deviating from one's preferred speaking rate might result from shifting between different regimes (system configurations) of the speech planning system. This study investigates control over speech rate through simulations of a new connectionist computational model of the cognitive process of speech production, derived from Dell, Burger and Svec’s (1997) model to fit the temporal characteristics of observed speech. We draw an analogy from human movement: the selection of walking and running gaits to achieve different movement speeds. Are the regimes of the speech production system arranged into multiple ‘gaits’ that resemble walking and running?
During training of the model, different parameter settings are identified for different speech rates, which can be equated with the regimes of the speech production system. The parameters can be considered dimensions of a high-dimensional ‘regime space’, in which different regimes occupy different parts of the space.
In a single-gait system, the regimes are qualitatively similar but quantitatively different. They are arranged along a straight line through regime space, and different points along this axis correspond directly to different speaking rates. In a multiple-gait system, the arrangement of the regimes is more dispersed, with no obvious relationship between the regions associated with each gait.
After training, the model achieved good fits at all three speaking rates, and the parameter settings associated with each speaking rate were different. The broad arrangement of the parameter settings for the different speaking rates in regime space was non-axial, suggesting that ‘gaits’ may be present in the speech planning system.
-
Rodd, J., Bosker, H. R., Ernestus, M., Ten Bosch, L., & Meyer, A. S. (2018). To speed up, turn up the gain: Acoustic evidence of a 'gain-strategy' for speech planning in accelerated and decelerated speech. Poster presented at LabPhon16 - Variation, development and impairment: Between phonetics and phonology, Lisbon, Portugal.
-
Takashima, A., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Lexical and syntactic memory representations for sentence production: Effects of lexicality and verb arguments. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
-
Takashima, A., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Producing sentences in the MRI scanner: Effects of lexicality and verb arguments. Poster presented at the Tenth Annual Meeting of the Society for the Neurobiology of Language (SNL 2018), Quebec, Canada.
-
Taschenberger, L., Brehm, L., & Meyer, A. S. (2018). Interference in joint picture naming. Poster presented at the IMPRS Conference on Interdisciplinary Approaches in the Language Sciences, Nijmegen, The Netherlands.
Abstract
In recent years, the theory that prediction is an important part of language processing has gained considerable attention (see Huettig, 2015, for overview). There is a large body of empirical evidence which suggests that language users’ ability to anticipate interlocutors’ upcoming utterances is one of the reasons why interactive speech can be so effortless, smooth, and efficient in nature (e.g. Wicha et al., 2004; van Berkum et al., 2005). The present study aimed to investigate whether the language production module is affected by prediction of another individual’s utterances, using a joint language production task designed to establish whether simulation of an interlocutor’s utterance occurs automatically, even if this hinders one’s own speech production. The experiment aimed to replicate the finding of an interference effect in a joint naming task (Gambi et al., 2015), and to investigate whether the same patterns could be found within a clearer social context where a partner was co-present in the same room. Participants named pictures of objects while their partners concurrently named or categorised congruent or incongruent stimuli. Analyses of naming onset latencies indicate that individuals may partially co-represent their partner’s utterances and that this shared representation influences language production. Congruency in task and stimulus display facilitated naming, whereas incongruent trials tended to slow production latencies. This finding of a social effect in a setting where simulation of language content is not necessary may suggest that some kind of anticipatory processing is an underlying feature of comprehension.
-
Zormpa, E., Hoedemaker, R. S., Brehm, L., & Meyer, A. S. (2018). The production and generation effect in picture naming: How lexical access and articulation influence memory. Poster presented at the Experimental Psychology Society London Meeting, London, UK.
Abstract
Previous work on memory phenomena shows that pictures and words lead to a production effect, i.e. better memory for aloud than silent items, and that this interacts with the picture superiority effect, i.e. better memory for pictures than words (Fawcett, Quinlan and Taylor, 2012). We investigated the role of the generation effect, i.e. improved memory for generated words, in picture naming. As picture naming requires participants to think of an appropriate label, a generation effect might be elicited for pictures but not words. Forty-two participants named pictures silently or aloud and were given the correct picture name or an unreadable label; all conditions included pictures to control for the picture superiority effect. Memory was then tested using a yes/no recognition task. We found a production effect (p < 0.001) showing the role of articulation in memory, a generation effect (p < 0.001) showing the role of lexical access in memory, and an interaction (p < 0.05) between the two suggesting the non-independence of the effects. Ongoing work further tests the role of label reliability in eliciting these effects. This research demonstrates a role for the generation effect in picture naming, with implications for memory asymmetries at different stages in language production.
Additional information: link to poster on figshare
-
Araújo, S., Huettig, F., & Meyer, A. S. (2016). What's the nature of the deficit underlying impaired naming? An eye-tracking study with dyslexic readers. Talk presented at IWORDD - International Workshop on Reading and Developmental Dyslexia. Bilbao, Spain. 2016-05-05 - 2016-05-07.
Abstract
Serial naming deficits have been identified as core symptoms of developmental dyslexia. A prominent hypothesis is that naming delays are due to inefficient phonological encoding, yet the exact nature of this underlying impairment remains largely underspecified. Here we used recordings of eye movements and word onset latencies to examine at what processing level the dyslexic naming deficit emerges: localized at an early stage of lexical encoding or rather later at the level of phonetic or motor planning. 23 dyslexic and 25 control adult readers were tested on a serial object naming task for 30 items and an analogous reading task, where phonological neighborhood density and word frequency were manipulated. Results showed that both word properties influenced early stages of phonological activation (first fixation and first-pass duration) equally in both groups of participants. Moreover, in the control group any difficulty appeared to be resolved early in the reading process, while for dyslexic readers a processing disadvantage for low-frequency words and for words with sparse neighborhood also emerged in a measure that included late stages of output planning (eye-voice span). Thus, our findings suggest suboptimal phonetic and/or articulatory planning in dyslexia.
-
Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2016). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
-
Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2016). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Talk presented at Psycholinguistics in Flanders (PiF 2016). Antwerp, Belgium. 2016-05-25 - 2016-05-27.
-
Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment reflects temporal predictions guiding speech comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Speech segmentation requires flexible mechanisms to remain robust to features such as speech rate and pronunciation. Recent hypotheses suggest that low-frequency neural oscillations entrain to ongoing syllabic and phrasal rates, and that neural entrainment provides a speech-rate invariant means to discretize linguistic tokens from the acoustic signal. How this mechanism functionally operates remains unclear. Here, we test the hypothesis that neural entrainment reflects temporal predictive mechanisms. It implies that neural entrainment is built on the dynamics of past speech information: the brain would internalize the rhythm of preceding speech to parse the ongoing acoustic signal at optimal time points. A direct prediction is that ongoing neural oscillatory activity should match the rate of preceding speech even if the stimulation changes, for instance when the speech rate suddenly increases or decreases. Crucially, the persistence of neural entrainment to past speech rate should modulate speech perception. We performed an MEG experiment in which native Dutch speakers listened to sentences with varying speech rates. The beginning of the sentence (carrier window) was either presented at a fast or a slow speech rate, while the last three words (target window) were displayed at an intermediate rate across trials. Participants had to report the perception of the last word of the sentence, which was ambiguous with regards to its vowel duration (short vowel /ɑ/ – long vowel /aː/ contrast). MEG data was analyzed in source space using beamformer methods. Consistent with previous behavioral reports, the perception of the ambiguous target word was influenced by the past speech rate; participants reported more /aː/ percepts after a fast speech rate, and more /ɑ/ after a slow speech rate. During the carrier window, neural oscillations efficiently tracked the dynamics of the speech envelope. During the target window, we observed oscillatory activity that corresponded in frequency to the preceding speech rate. Traces of neural entrainment to the past speech rate were significantly observed in medial prefrontal areas. Right superior temporal cortex also showed persisting oscillatory activity which correlated with the observed perceptual biases: participants whose perception was more influenced by the manipulation in speech rate also showed stronger remaining neural oscillatory patterns. The results show that neural entrainment lasts after rhythmic stimulation. The findings further provide empirical support for oscillatory models of speech processing, suggesting that neural oscillations actively encode temporal predictions for speech comprehension.
-
Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment to speech rhythms reflects temporal predictions and influences word comprehension. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
-
Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2016). The contribution of vocabulary size to language processing: Evidence from lexical decision and picture-word interference. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
Previous research indicates that general cognitive abilities, such as attention or executive control, contribute to language processing (Hartsuiker & Barkhuysen, 2006; Jongman et al., 2014; Shao et al., 2013). Potential effects of language-specific abilities, such as vocabulary, on language processing in adult native speakers have been examined less extensively. Goals: a) develop and assess measures of vocabulary size in Dutch native speakers, and b) investigate the relationship between individual differences in vocabulary and language processing.
-
Maslowski, M., Bosker, H. R., & Meyer, A. S. (2016). Slow speech can sound fast: How the speech rate of one talker affects perception of another talker. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-24 - 2016-11-25.
-
Maslowski, M., Bosker, H. R., & Meyer, A. S. (2016). Slow speech can sound fast: How the speech rate of one talker has a contrastive effect on the perception of another talker. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
Listeners are continuously exposed to a broad range of speech rates. Earlier work has shown that listeners perceive phonetic category boundaries relative to contextual speech rate. It has been suggested that this process of speech rate normalization occurs across talker changes. This would predict that the speech rate of talker A influences perception of the rate of another talker B. We assessed this hypothesis by testing effects of speech rate on the perception of the Dutch vowel continuum /A/-/a:/. One participant group was exposed to 'neutral' speech from talker A intermixed with fast speech from talker B. Another group listened to the same speech from talker A, but to slow speech from talker B. We observed a difference in perception of talker A depending on the speech rate of talker B: A's 'neutral' speech was perceived as slow when B spoke faster. These findings corroborate the idea that speech rate normalization occurs across talkers, but they challenge the assumption that listeners average over speech rates from multiple talkers. Instead, they suggest that listeners contrast talker-specific rates.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2016). Slow speech can sound fast: How the speech rate of one talker has a contrastive effect on the perception of another talker. Talk presented at MPI Proudly Presents. Nijmegen, The Netherlands. 2016-06-01.
-
McQueen, J. M., & Meyer, A. S. (2016). Cognitive architectures [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
-
Meyer, A. S. (2016). Utterance planning and resource allocation in dialogue. Talk presented at the Psychology Department, University of Geneva. Geneva, Switzerland. 2016-05-09.
-
Meyer, A. S. (2016). Utterance planning and resource allocation in dialogue. Talk presented at the International Workshop on Language Production (IWLP 2016). La Jolla, CA, USA. 2016-07-25 - 2016-07-27.
Abstract
Natural conversations are characterized by smooth transitions of turns between interlocutors. For instance, speakers often respond to questions or requests within half a second. As planning the first word of an utterance can easily take a second or more, this suggests that utterance planning often overlaps with listening to the preceding speaker's utterance. A specific proposal concerning the temporal coordination of listening and speech planning has recently been made by Levinson and Torreira (2016, Frontiers in Psychology; Levinson, 2016, Trends in Cognitive Sciences). They propose that speakers initiate their speech planning as soon as they have understood the speech act and gist of the preceding utterance. However, direct evidence for simultaneous listening and speech planning is scarce. I will first review studies demonstrating that both comprehending spoken utterances and planning them require processing capacity and that these processes can substantially interfere with each other. These data suggest that concurrent speech planning and listening should be cognitively quite challenging. In the second part of the talk I will turn to studies examining directly when utterance planning in dialogue begins. These studies indicate that (regrettably) there are probably no hard-and-fast rules for the temporal coordination of listening and speech planning. I will argue that (regrettably again) we need models that are far more complex than Levinson and Torreira's proposal to understand how listening and speech planning are coordinated in conversation.
-
Weber, K., Meyer, A. S., & Hagoort, P. (2016). The acquisition of verb-argument and verb-noun category biases in a novel word learning task. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
We show that language users readily learn the probabilities of novel lexical cues to syntactic information (verbs biasing towards a prepositional object dative vs. double-object dative and words biasing towards a verb vs. noun reading) and use these biases in a subsequent production task. In a one-hour exposure phase participants read 12 novel lexical items, embedded in 30 sentence contexts each, in their native language. The items were either strongly (100%) biased towards one grammatical frame or syntactic category assignment or unbiased (50%). The next day participants produced sentences with the newly learned lexical items. They were given the sentence beginning up to the novel lexical item. Their output showed that they were highly sensitive to the biases introduced in the exposure phase.
Given this rapid learning and use of novel lexical cues, this paradigm opens up new avenues to test sentence processing theories. Thus, with close control on the biases participants are acquiring, competition between different frames or category assignments can be investigated using reaction times or neuroimaging methods.
Generally, these results show that language users adapt to the statistics of the linguistic input, even to subtle lexically-driven cues to syntactic information.