-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2019). The speech production system is reconfigured to change speaking rate. Poster presented at the 3rd Phonetics and Phonology in Europe conference (PaPe 2019), Lecce, Italy.
-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2019). The speech production system is reconfigured to change speaking rate. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
Abstract
It is evident that speakers can freely vary stylistic features of their speech, such as speech rate, but how they accomplish this has hardly been studied, let alone implemented in a formal model of speech production. Much as in walking and running, where qualitatively different gaits are required to cover the gamut of different speeds, we might predict there to be multiple qualitatively distinct configurations, or ‘gaits’, in the speech planning system that speakers must switch between to alter their speaking rate or style. Alternatively, control might involve continuous modulation of a single ‘gait’. We investigate these possibilities by simulation of a connectionist computational model which mimics the temporal characteristics of observed speech. Different ‘regimes’ (combinations of parameter settings) can be engaged to achieve different speaking rates.
The model was trained separately for each speaking rate by an evolutionary optimisation algorithm. The training identified the parameter values that allowed the model to best approximate the syllable duration distributions characteristic of each speaking rate.
In a single-gait system, the regimes used to achieve fast and slow speech are qualitatively similar, but quantitatively different. In parameter space, they would be arranged along a straight line, with different points along this axis corresponding to different speaking rates. In a multiple-gait system, this linearity would be missing. Instead, the arrangement of the regimes would be triangular, with no obvious relationship between the regions associated with each gait, and an abrupt shift in parameter values would be needed to move from speeds associated with ‘walk-speaking’ to ‘run-speaking’.
Our model achieved good fits at all three speaking rates. In parameter space, the arrangement of the parameter settings selected for the different speaking rates is non-axial, suggesting that ‘gaits’ are present in the speech planning system. -
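The abstract does not spell out the fitting procedure beyond naming an evolutionary optimisation algorithm. Purely as a hypothetical illustration of that kind of procedure (a population of candidate parameter settings, scored by how closely simulated syllable durations match an observed distribution, with the best candidates kept and mutated), here is a minimal Python sketch; the toy duration model, the quantile-based fitness measure, and all names are assumptions for this example, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the speech planning model: two parameters
# (a timescale and a noise level) generate a distribution of syllable
# durations in ms. The real model is far richer; this is only a toy.
def simulate_durations(params, n=500):
    scale, noise = params
    return rng.gamma(shape=4.0, scale=scale, size=n) + rng.normal(0.0, noise, n)

# Fitness: how closely the simulated duration distribution matches the
# observed one, compared on a few summary quantiles.
def fitness(params, observed):
    sim = simulate_durations(params)
    qs = np.linspace(0.1, 0.9, 9)
    return -np.sum((np.quantile(sim, qs) - np.quantile(observed, qs)) ** 2)

# Minimal evolutionary loop: keep the best quarter, resample, mutate, repeat.
def evolve(observed, pop_size=40, generations=60):
    pop = rng.uniform([10.0, 1.0], [80.0, 30.0], size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p, observed) for p in pop])
        parents = pop[np.argsort(scores)][-pop_size // 4:]
        children = parents[rng.integers(len(parents), size=pop_size)]
        pop = np.clip(children + rng.normal(0.0, [2.0, 0.5], size=children.shape),
                      [1.0, 0.1], None)
    return pop[np.argmax([fitness(p, observed) for p in pop])]

# Fabricated 'observed' durations for one speaking rate; the procedure
# would be run once per rate to obtain one parameter setting per rate.
observed_fast = rng.gamma(shape=4.0, scale=30.0, size=500)
print(evolve(observed_fast))
```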
San Jose, A., Roelofs, A., & Meyer, A. S. (2019). Lapses of attention explain the distributional dynamics of semantic interference in word production: Evidence from computational simulations. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
-
Van Paridon, J., Roelofs, A., & Meyer, A. S. (2019). Contextual priming in shadowing and simultaneous translation. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
-
Wolf, M. C., Smith, A. C., Rowland, C. F., & Meyer, A. S. (2019). Effects of modality on learning novel word - picture associations. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2019-01-03 - 2019-01-04.
Abstract
It is unknown whether modality affects the efficiency with which we learn novel word forms and their meanings. In this study, 60 participants were trained on 24 pseudowords, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained when presented amongst foils. Word forms were presented in either their written or spoken form, with exposure to the written form equal to the speech duration of the spoken form. The between-subjects design generated four participant groups: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. Our results show a written training advantage: participants trained on written words were more accurate on the matching task. An ongoing follow-up experiment tests whether the written advantage is caused by additional time with the full word form, given that words can be read faster than the time taken for the spoken form to unfold. To test this, in training, written words were presented with sufficient time for participants to read them, yet for at most half the duration of the spoken form used in Experiment 1. -
Wolf, M. C., Smith, A. C., Rowland, C. F., & Meyer, A. S. (2019). Modality effects in novel picture-word form associations. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
Abstract
It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword - novel object pairs, with controls on: modality of test, modality of meaning, duration of exposure and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1, the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms: sufficient time to read the word, yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on developing learning mechanisms that learn equally efficiently from both written and spoken materials. -
Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. Talk presented at the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019). Montreal, Canada. 2019-07-24 - 2019-07-27.
Abstract
It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword - novel object pairs, with controls on: modality of test, modality of meaning, duration of exposure and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1, the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms: sufficient time to read the word, yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on developing learning mechanisms that learn equally efficiently from both written and spoken materials. -
Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Naming pictures slowly facilitates memory for their names. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
Abstract
Studies on the generation effect have found that coming up with words, compared to reading them, improves memory. However, because these studies used words at both study and test, it is unclear whether generation affects visual or conceptual/lexical representations. Here, participants named pictures after hearing the picture name (no-generation condition), backward speech, or an unrelated word (easy and harder generation conditions). We ruled out effects at the visual level by testing participants’ recognition memory on the written names of the pictures that were named earlier. We also assessed the effect of processing time during generation on memory. In the recognition memory test, participants were more accurate in the generation conditions than in the no-generation condition. They were also more accurate for words that took longer to be retrieved, but only when generation was required. This work shows that generation affects conceptual/lexical representations and informs our understanding of the relationship between language and memory. -
Araújo, S., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Effects of verb position on sentence planning. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2018). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
-
Favier, S., Meyer, A. S., & Huettig, F. (2018). Does literacy predict individual differences in the syntactic processing of spoken language?. Poster presented at the 1st Workshop on Cognitive Science of Culture, Lisbon, Portugal.
-
Favier, S., Meyer, A. S., & Huettig, F. (2018). Does reading ability predict individual differences in spoken language syntactic processing?. Poster presented at the International Meeting of the Psychonomics Society 2018, Amsterdam, The Netherlands.
-
Favier, S., Meyer, A. S., & Huettig, F. (2018). How does literacy influence syntactic processing in spoken language?. Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.
-
Hintz, F., Jongman, S. R., McQueen, J. M., & Meyer, A. S. (2018). Individual differences in word production: Evidence from students with diverse educational backgrounds. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
-
Hintz, F., Jongman, S. R., Dijkhuis, M., Van 't Hoff, V., Damian, M., Schröder, S., Brysbaert, M., McQueen, J. M., & Meyer, A. S. (2018). STAIRS4WORDS: A new adaptive test for assessing receptive vocabulary size in English, Dutch, and German. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
-
Hintz, F., Jongman, S. R., McQueen, J. M., & Meyer, A. S. (2018). Verbal and non-verbal predictors of word comprehension and word production. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
-
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2018). Evidence for in-group biases in source memory for newly learned words. Poster presented at the International Conference on Learning and Memory (LearnMem 2018), Huntington Beach, CA, USA.
-
Jongman, S. R., Piai, V., & Meyer, A. S. (2018). Withholding speech: Does the EEG signal reflect planning for production or attention?. Poster presented at the 31st Annual CUNY Conference on Human Sentence Processing, Davis, CA, USA.
-
Mainz, N., Smith, A. C., & Meyer, A. S. (2018). Individual differences in word learning - An exploratory study of adult native speakers. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2018-01-03 - 2018-01-05.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Do effects of habitual speech rate normalization on perception extend to self?. Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.
Abstract
Listeners are known to use contextual speech rate in processing temporally ambiguous speech sounds. For instance, a fast adjacent speech context makes a vowel sound relatively long, whereas a slow context makes it sound relatively short (Reinisch & Sjerps, 2013). Besides the local contextual speech rate, listeners also track talker-specific habitual speech rates (Reinisch, 2016; Maslowski et al., in press). However, effects of one’s own speech rate on the perception of another talker’s speech are yet unexplored. Such effects are potentially important, given that, in dialogue, a listener’s own speech often constitutes the context for the interlocutor’s speech. Three experiments tested the contribution of self-produced speech to perception of the habitual speech rate of another talker. In Experiment 1, one group of participants was instructed to speak fast (high-rate group), whereas another group had to speak slowly (low-rate group; 16 participants per group). The two groups were compared on their perception of ambiguous Dutch /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech, whilst evaluating target vowels in neutral rate speech as before. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 with a new participant sample, who did not know the participants from the previous two experiments. Here, a group effect was found on perception of the neutral rate talker. This result replicates the finding of Maslowski et al. that habitual speech rates are perceived relative to each other (i.e., neutral rate sounds fast in the presence of a slower talker and vice versa), with naturally produced speech. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception and the link between production and perception in dialogue settings. -
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). How speech rate normalization affects lexical access. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2018). Berlin, Germany. 2018-09-06 - 2018-09-08.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Self-produced speech rate is processed differently from other talkers' rates. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
Abstract
Interlocutors perceive phonemic category boundaries relative to talkers’ produced speech rates. For instance, a temporally ambiguous vowel between Dutch short /A/ and long /a:/ sounds short (i.e., as /A/) in a slow speech context, but long in a fast context. Besides the local contextual speech rate, listeners also track talker-specific habitual speech rates (Maslowski et al., in press). However, it is yet unclear whether self-produced speech rate modulates perception of another talker’s habitual rate. Such effects are potentially important, given that, in dialogue, a listener’s own speech often constitutes the context for the interlocutor’s speech. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly (16 participants per group). The two groups were then compared on their perception of ambiguous Dutch /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech, whilst evaluating target vowels in neutral rate speech as before. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 with a new participant sample, who were unfamiliar with the participants from the previous two experiments. Here, a group effect was found on perception of the neutral rate talker. This result replicates the finding of Maslowski et al. that habitual speech rates are perceived relative to each other (i.e., neutral rate sounds fast in the presence of a slower talker and vice versa), with naturally produced speech. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the link between production and perception in dialogue. -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Network structure and the cultural evolution of linguistic structure: An artificial language study. Talk presented at the Cultural Evolution Society Conference (CES 2018). Tempe, AZ, USA. 2018-10-22 - 2018-10-24.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Linguistics Department Colloquium Series. University of Arizona, Tucson, AZ, USA. 2018-10-26.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Language Evolution Seminar, Centre for Language Evolution, University of Edinburgh. Edinburgh, UK. 2018-08-21.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Cohn Institute for the History and Philosophy of Science, Tel Aviv University. Tel Aviv, Israel. 2018-12-23.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Department of Linguistics, Tel Aviv University. Tel Aviv, Israel. 2018-12-25.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Donders Discussions 2018. Nijmegen, The Netherlands. 2018-10-11.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. Talk presented at the 12th International Conference on the Evolution of Language: (EVOLANG XII). Torun, Poland. 2018-04-15 - 2018-04-19.
-
Rodd, J., Bosker, H. R., Meyer, A. S., Ernestus, M., & Ten Bosch, L. (2018). How to speed up and slow down: Speaking rate control to the level of the syllable. Talk presented at the New Observations in Speech and Hearing seminar series, Institute of Phonetics and Speech processing, LMU Munich. Munich, Germany.
-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2018). Run-speaking? Simulations of rate control in speech production. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2018). Running or speed-walking? Simulations of speech production at different rates. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
Abstract
That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. The effortful experience of deviating from one's preferred speaking rate might result from shifting between different regimes (system configurations) of the speech planning system. This study investigates control over speech rate through simulations of a new connectionist computational model of the cognitive process of speech production, derived from Dell, Burger and Svec’s (1997) model to fit the temporal characteristics of observed speech. We draw an analogy from human movement: the selection of walking and running gaits to achieve different movement speeds. Are the regimes of the speech production system arranged into multiple ‘gaits’ that resemble walking and running?
During training of the model, different parameter settings are identified for different speech rates, which can be equated with the regimes of the speech production system. The parameters can be considered to be dimensions of a high-dimensional ‘regime space’, in which different regimes occupy different parts of the space.
In a single-gait system, the regimes are qualitatively similar, but quantitatively different. They are arranged along a straight line through regime space, and different points along this axis correspond directly to different speaking rates. In a multiple-gait system, the arrangement of the regimes is more dispersed, with no obvious relationship between the regions associated with each gait.
After training, the model achieved good fits at all three speaking rates, and the parameter settings associated with each speaking rate were different. The broad arrangement of the parameter settings for the different speaking rates in regime space was non-axial, suggesting that ‘gaits’ may be present in the speech planning system. -
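One way to make the 'axial versus non-axial' distinction concrete is to ask how much of the variance among the best-fitting parameter settings for the different rates lies along a single principal axis. The sketch below does this with a plain SVD on invented parameter vectors; it only illustrates the geometric idea and is not the analysis reported in the abstract.

```python
import numpy as np

# Hypothetical best-fitting parameter vectors for slow, medium, and fast
# speech (one row per speaking rate); the values are invented for illustration.
regimes = np.array([
    [0.20, 0.85, 1.10],   # slow
    [0.45, 0.60, 1.60],   # medium
    [0.95, 0.15, 1.55],   # fast
])

# In a single-gait system the three points should lie close to one line:
# the first principal axis would then capture nearly all of the variance.
centred = regimes - regimes.mean(axis=0)
singular_values = np.linalg.svd(centred, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)

print(f"variance along first axis: {explained[0]:.2f}")
# Values well below 1.0 indicate a non-axial ('multiple gait') arrangement.
```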
Rodd, J., Bosker, H. R., Ernestus, M., Ten Bosch, L., & Meyer, A. S. (2018). To speed up, turn up the gain: Acoustic evidence of a 'gain-strategy' for speech planning in accelerated and decelerated speech. Poster presented at LabPhon16 - Variation, development and impairment: Between phonetics and phonology, Lisbon, Portugal.
-
Takashima, A., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Lexical and syntactic memory representations for sentence production: Effects of lexicality and verb arguments. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
-
Takashima, A., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Producing sentences in the MRI scanner: Effects of lexicality and verb arguments. Poster presented at the Tenth Annual Meeting of the Society for the Neurobiology of Language (SNL 2018), Quebec, Canada.
-
Taschenberger, L., Brehm, L., & Meyer, A. S. (2018). Interference in joint picture naming. Poster presented at the IMPRS Conference on Interdisciplinary Approaches in the Language Sciences, Nijmegen, The Netherlands.
Abstract
In recent years, the theory that prediction is an important part of language processing has gained considerable attention (see Huettig, 2015, for an overview). There is a large body of empirical evidence suggesting that language users’ ability to anticipate interlocutors’ upcoming utterances is one of the reasons why interactive speech can be so effortless, smooth, and efficient (e.g. Wicha et al., 2004; van Berkum et al., 2005). The present study investigated whether the language production module is affected by prediction of another individual’s utterances, using a joint language production task designed to establish whether simulation of an interlocutor’s utterance occurs automatically, even if this hinders one’s own speech production. The experiment aimed to replicate the finding of an interference effect in a joint naming task (Gambi et al., 2015), and to investigate whether the same patterns could be found in a clearer social context where a partner was co-present in the same room. Participants named pictures of objects while their partners concurrently named or categorised congruent or incongruent stimuli. Analyses of naming onset latencies indicate that individuals may partially co-represent their partner’s utterances and that this shared representation influences language production. Congruency in task and stimulus display facilitated naming, whereas incongruent trials tended to slow production latencies. This finding of a social effect in a setting where simulation of language content is not necessary may suggest that some kind of anticipatory processing is an underlying feature of comprehension. -
Zormpa, E., Hoedemaker, R. S., Brehm, L., & Meyer, A. S. (2018). The production and generation effect in picture naming: How lexical access and articulation influence memory. Poster presented at the Experimental Psychology Society London Meeting, London, UK.
Abstract
Previous work on memory phenomena shows that pictures and words lead to a production effect, i.e. better memory for aloud than silent items, and that this interacts with the picture superiority effect, i.e. better memory for pictures than words (Fawcett, Quinlan and Taylor, 2012). We investigated the role of the generation effect, i.e. improved memory for generated words, in picture naming. As picture naming requires participants to think of an appropriate label, a generation effect might be elicited for pictures but not words. Forty-two participants named pictures silently or aloud and were given the correct picture name or an unreadable label; all conditions included pictures to control for the picture superiority effect. Memory was then tested using a yes/no recognition task. We found a production effect (p < 0.001) showing the role of articulation in memory, a generation effect (p < 0.001) showing the role of lexical access in memory, and an interaction (p < 0.05) between the two suggesting the non-independence of the effects. Ongoing work further tests the role of label reliability in eliciting these effects. This research demonstrates a role for the generation effect in picture naming, with implications for memory asymmetries at different stages in language production.
Additional information: link to poster on figshare -
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Dual-tasking in language: Concurrent production and comprehension interfere at the phonological level. Poster presented at Psycholinguistics in Flanders (PiF 2017), Leuven, Belgium.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Dual-tasking in language: Concurrent production and comprehension interfere at the phonological level. Poster presented at the Experimental Psychology Society Belfast Meeting, Belfast, UK.
Abstract
Conversation often involves simultaneous production and comprehension, yet little research has investigated whether these two processes interfere with one another. We tested participants’ ability to dual-task with production and comprehension tasks. Task one (production task) was picture naming. Task two (comprehension task) was either syllable identification (linguistic condition) or tone identification (non-linguistic condition). The two identification tasks were matched for difficulty. Three SOAs (50ms, 300ms, and 1800ms) resulted in different amounts of overlap between the production and comprehension tasks. We hypothesized that as production and comprehension use similar resources there would be greater interference with concurrent linguistic than non-linguistic tasks.
At the 50ms SOA, picture naming latencies were slower in the linguistic compared to the non-linguistic condition, suggesting that the resources required for production and comprehension overlap more in the linguistic condition. As the syllables were non-words without lexical representations, this interference likely occurs primarily at the phonological level. Across all SOAs, identification RTs were longer in the linguistic condition, showing that such phonological interference percolates through to the comprehension task, regardless of SOA. In sum, these results demonstrate that concurrent access to the phonological level in production and comprehension results in measurable interference in both speaking and comprehending.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Hoedemaker, R. S., & Meyer, A. S. (2017). Coordination and preparation of utterances in a joint-naming task. Talk presented at the Experimental Psychology Society Belfast Meeting. Belfast, UK. 2017-04-10 - 2017-04-12.
-
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2017). “That’s a spatelhouder!”: How source memory is influenced by speakers’ social categories in a word-learning paradigm. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2017). Speakers' social identity affects source memory for novel words. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Jongman, S. R., Meyer, A. S., & Piai, V. (2017). Brain signature of planning for production: An EEG study. Talk presented at the Workshop 'Revising formal semantic and pragmatic theories from a neurocognitive perspective'. Bochum, Germany. 2017-06-19 - 2017-06-20.
-
Jongman, S. R., & Meyer, A. S. (2017). Simultaneous listening and planning for production: Full or partial comprehension?. Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). When slow speech sounds fast: How the speech rate of one talker influences perception of another talker. Talk presented at the IPS workshop: Abstraction, Diversity, and Speech Dynamics. Herrsching am Ammersee, Germany. 2017-05-03 - 2017-05-05.
Abstract
Listeners are continuously exposed to a broad range of speech rates. Earlier work has shown that listeners perceive phonetic category boundaries relative to contextual speech rate. This process of rate-dependent speech perception has been suggested to occur across talker changes, with the speech rate of talker A influencing perception of talker B. This study tested whether a ‘global’ speech rate calculated over multiple talkers and over a longer period of time affected perception of the temporal Dutch vowel contrast /ɑ/-/a:/. First, Experiment 1 demonstrated that listeners more often reported hearing long /a:/ in fast contexts than in ‘neutral rate’ contexts, replicating earlier findings. Then, in Experiment 2, one participant group was exposed to ‘neutral’ speech from talker A intermixed with slow speech from talker B. Another group listened to the same ‘neutral’ speech from talker A, but to fast speech from talker B. Between-group comparison in the ‘neutral’ condition revealed that Group 1 reported more long /a:/ than Group 2, indicating that A’s ‘neutral’ speech sounded faster when B was slower. Finally, Experiment 3 tested whether talking at slow or fast rates oneself elicits the same ‘global’ rate effects. However, no evidence was found that self-produced speech modulated perception of talker A. This study corroborates the idea that ‘global’ rate-dependent effects occur across talkers, but are insensitive to one’s own speech rate. Results are interpreted in light of the general auditory mechanisms thought to underlie rate normalization, with implications for our understanding of dialogue.
Additional information: http://www.phonetik.uni-muenchen.de/institut/veranstaltungen/abstraction-divers… -
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech affects perception depends on who is talking. Poster presented at the Donders Poster Sessions, Nijmegen, The Netherlands.
Abstract
Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /A/-/a:/. In Experiment 1, one low-rate group listened to ‘neutral’ rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the ‘neutral’ trials revealed that the low-rate group reported a higher proportion of /a:/ in A’s ‘neutral’ speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one’s own speech rate also contributes to effects of long-term tracking of rate. Here, talker B’s speech was replaced by playback of participants’ own fast or slow speech. No evidence was found that one’s own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue. -
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech rate affects perception depends on who is talking. Poster presented at Interspeech 2017, Stockholm, Sweden.
Abstract
Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to 'neutral' rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the 'neutral' trials revealed that the low-rate group reported a higher proportion of /a:/ in A's 'neutral' speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one's own speech rate also contributes to effects of long-term tracking of rate. Here, talker B's speech was replaced by playback of participants' own fast or slow speech. No evidence was found that one's own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue. -
Meyer, A. S., Decuyper, C., & Coopmans, C. W. (2017). Distribution of attention in question-answer sequences: Evidence for limited parallel processing. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2017-01-03 - 2017-01-06.
-
Meyer, A. S. (2017). Towards understanding conversation: A psycholinguist's perspective. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). Compositional structure can emerge without generational transmission. Talk presented at the Inaugural Cultural Evolution Society Conference (CESC 2017). Jena, Germany. 2017-09-13 - 2017-09-15.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). Compositional structure can emerge without generational transmission. Talk presented at the 30th Annual CUNY Conference on Human Sentence Processing. Cambridge, MA, USA. 2017-03-30 - 2017-04-01.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). The role of community size in the emergence of linguistic structure. Talk presented at XLanS: Triggers of language change in the Language Sciences. Lyon, France. 2017-10-11 - 2017-10-13.
-
Rodd, J., Bosker, H. R., Ernestus, M., Ten Bosch, L., & Meyer, A. S. (2017). How we regulate speech rate: Phonetic evidence for a 'gain strategy' in speech planning. Poster presented at the Abstraction, Diversity and Speech Dynamics Workshop, Herrsching, Germany.
-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2017). Simulating speaking rate control: A spreading activation model of syllable timing. Poster presented at the Workshop Conversational speech and lexical representations, Nijmegen, The Netherlands.
Abstract
Speech can be produced at different rates. The ability to produce faster or slower speech may be thought to result from executive control processes enlisted to modulate lexical selection and phonological encoding stages of speech planning.
This study used simulations of the model of serial order in language by Dell, Burger and Svec (1997, DBS) to characterise the strategies adopted by speakers when naming pictures at fast, medium and slow prescribed rates. Our new implementation of DBS was able to produce activation patterns that correlated strongly with observed syllable-level timing of disyllabic words from this task.
For each participant, different speaking rates were associated with different regions of the DBS parameter space. The precise placement of the speaking rates in the parameter space differed markedly between participants. Participants applied broadly the same parameter manipulation to accelerate their speech. This was, however, not the case for deceleration. Hierarchical clustering revealed two distinct patterns of parameter adjustment employed to decelerate speech, suggesting that deceleration is not necessarily achieved by the inverse of the process used for acceleration. In addition, potential refinements to the DBS model are discussed. -
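The abstracts describe a DBS-style spreading-activation account of syllable timing without giving its equations. As a loose, hypothetical illustration of how such activation dynamics can turn parameter settings into syllable onset times, here is a toy loop; the gain, decay, and threshold parameters and the crude serial-order rule are invented for this sketch and do not reproduce the Dell, Burger and Svec (1997) model.

```python
import numpy as np

def syllable_onsets(gain=0.18, decay=0.20, threshold=0.75, max_steps=200):
    """Toy spreading-activation timing loop for a disyllabic word.

    A word node feeds activation to two syllable nodes; a syllable is
    'selected' when its activation crosses the threshold, and the step
    at which that happens is taken as its onset time. Lower gain (or
    higher decay) delays the crossings, i.e. simulates slower speech.
    """
    word = 1.0
    syllables = np.zeros(2)
    onsets = [None, None]
    for step in range(max_steps):
        # Crude serial-order stand-in: the syllable that is due next
        # receives the full gain, the other one only a fraction of it.
        due = 0 if onsets[0] is None else 1
        inputs = np.where(np.arange(2) == due, gain, 0.3 * gain)
        syllables = (1.0 - decay) * syllables + inputs * word
        for i in range(2):
            if onsets[i] is None and syllables[i] >= threshold:
                onsets[i] = step
                syllables[i] = 0.0  # reset after selection
    return onsets

# Higher gain -> earlier syllable onsets -> shorter simulated durations.
print("higher gain (faster):", syllable_onsets(gain=0.30))
print("default gain (slower):", syllable_onsets())
```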
Shao, Z., & Meyer, A. S. (2017). How word and phrase frequencies affect noun phrase production. Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
-
Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2017). Combining Virtual Reality and EEG to study semantic and pragmatic processing in a naturalistic environment. Talk presented at the workshop 'Revising formal Semantic and Pragmatic theories from a Neurocognitive Perspective' (NeuroPragSem, 2017). Bochum, Germany. 2017-06-19 - 2017-06-20.
-
Van Paridon, J., Roelofs, A., & Meyer, A. S. (2017). Coordinating simultaneous comprehension and production: Behavioral and modelling findings from shadowing and simultaneous interpreting. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Weber, K., Meyer, A. S., & Hagoort, P. (2017). Learning lexical-syntactic biases: An fMRI study on how we connect words and structures. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
-
Zormpa, E., Hoedemaker, R. S., Brehm, L., & Meyer, A. S. (2017). The production and generation effect in picture naming: How lexical access and articulation influence memory. Poster presented at the Donders Posters Session, Nijmegen, The Netherlands.
-
Araújo, S., Huettig, F., & Meyer, A. S. (2016). What's the nature of the deficit underlying impaired naming? An eye-tracking study with dyslexic readers. Talk presented at IWORDD - International Workshop on Reading and Developmental Dyslexia. Bilbao, Spain. 2016-05-05 - 2016-05-07.
Abstract
Serial naming deficits have been identified as core symptoms of developmental dyslexia. A prominent hypothesis is that naming delays are due to inefficient phonological encoding, yet the exact nature of this underlying impairment remains largely underspecified. Here we used recordings of eye movements and word onset latencies to examine at what processing level the dyslexic naming deficit emerges: at an early stage of lexical encoding, or later, at the level of phonetic or motor planning. Twenty-three dyslexic and 25 control adult readers were tested on a serial object naming task with 30 items and an analogous reading task, in which phonological neighborhood density and word frequency were manipulated. Results showed that both word properties influenced early stages of phonological activation (first fixation and first-pass duration) equally in both groups of participants. Moreover, in the control group any difficulty appeared to be resolved early in the reading process, while for dyslexic readers a processing disadvantage for low-frequency words and for words with sparse neighborhoods also emerged in a measure that included late stages of output planning (eye-voice span). Thus, our findings suggest suboptimal phonetic and/or articulatory planning in dyslexia. -
Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2016). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
-
Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2016). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Talk presented at Psycholinguistics in Flanders (PiF 2016). Antwerp, Belgium. 2016-05-25 - 2016-05-27.
-
Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment reflects temporal predictions guiding speech comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Speech segmentation requires flexible mechanisms to remain robust to features such as speech rate and pronunciation. Recent hypotheses suggest that low-frequency neural oscillations entrain to ongoing syllabic and phrasal rates, and that neural entrainment provides a speech-rate invariant means to discretize linguistic tokens from the acoustic signal. How this mechanism functionally operates remains unclear. Here, we test the hypothesis that neural entrainment reflects temporal predictive mechanisms. It implies that neural entrainment is built on the dynamics of past speech information: the brain would internalize the rhythm of preceding speech to parse the ongoing acoustic signal at optimal time points. A direct prediction is that ongoing neural oscillatory activity should match the rate of preceding speech even if the stimulation changes, for instance when the speech rate suddenly increases or decreases. Crucially, the persistence of neural entrainment to past speech rate should modulate speech perception. We performed an MEG experiment in which native Dutch speakers listened to sentences with varying speech rates. The beginning of the sentence (carrier window) was either presented at a fast or a slow speech rate, while the last three words (target window) were displayed at an intermediate rate across trials. Participants had to report the perception of the last word of the sentence, which was ambiguous with regards to its vowel duration (short vowel /ɑ/ – long vowel /aː/ contrast). MEG data was analyzed in source space using beamformer methods. Consistent with previous behavioral reports, the perception of the ambiguous target word was influenced by the past speech rate; participants reported more /aː/ percepts after a fast speech rate, and more /ɑ/ after a slow speech rate. During the carrier window, neural oscillations efficiently tracked the dynamics of the speech envelope. During the target window, we observed oscillatory activity that corresponded in frequency to the preceding speech rate. Traces of neural entrainment to the past speech rate were significantly observed in medial prefrontal areas. Right superior temporal cortex also showed persisting oscillatory activity which correlated with the observed perceptual biases: participants whose perception was more influenced by the manipulation in speech rate also showed stronger remaining neural oscillatory patterns. The results show that neural entrainment lasts after rhythmic stimulation. The findings further provide empirical support for oscillatory models of speech processing, suggesting that neural oscillations actively encode temporal predictions for speech comprehension. -
Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment to speech rhythms reflects temporal predictions and influences word comprehension. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
-
Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2016). The contribution of vocabulary size to language processing: Evidence from lexical decision and picture-word interference. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
Previous research indicates that general cognitive abilities, such as attention or executive control, contribute to language processing (Hartsuiker & Barkhuysen, 2006; Jongman et al., 2014; Shao et al., 2013). Potential effects of language-specific abilities, such as vocabulary, on language processing in adult native speakers have been examined less extensively. Goals: a) develop and assess measures of vocabulary size in Dutch native speakers, and b) investigate the relationship between individual differences in vocabulary and language processing. -
Maslowski, M., Bosker, H. R., & Meyer, A. S. (2016). Slow speech can sound fast: How the speech rate of one talker affects perception of another talker. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-24 - 2016-11-25.
-
Maslowski, M., Bosker, H. R., & Meyer, A. S. (2016). Slow speech can sound fast: How the speech rate of one talker has a contrastive effect on the perception of another talker. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
Listeners are continuously exposed to a broad range of speech rates. Earlier work has shown that listeners perceive phonetic category boundaries relative to contextual speech rate. It has been suggested that this process of speech rate normalization occurs across talker changes. This would predict that the speech rate of talker A influences perception of the rate of another talker B. We assessed this hypothesis by testing effects of speech rate on the perception of the Dutch vowel continuum /A/-/a:/. One participant group was exposed to 'neutral' speech from talker A intermixed with fast speech from talker B. Another group listened to the same speech from talker A, but to slow speech from talker B. We observed a difference in perception of talker A depending on the speech rate of talker B: A's 'neutral' speech was perceived as slow when B spoke faster. These findings corroborate the idea that speech rate normalization occurs across talkers, but they challenge the assumption that listeners average over speech rates from multiple talkers. Instead, they suggest that listeners contrast talker-specific rates. -
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2016). Slow speech can sound fast: How the speech rate of one talker has a contrastive effect on the perception of another talker. Talk presented at MPI Proudly Presents. Nijmegen, The Netherlands. 2016-06-01.
-
McQueen, J. M., & Meyer, A. S. (2016). Cognitive architectures [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
-
Meyer, A. S. (2016). Utterance planning and resource allocation in dialogue. Talk presented at the Psychology Department, University of Geneva. Geneva, Switzerland. 2016-05-09.
-
Meyer, A. S. (2016). Utterance planning and resource allocation in dialogue. Talk presented at the International Workshop on Language Production (IWLP 2016). La Jolla, CA, USA. 2016-07-25 - 2016-07-27.
Abstract
Natural conversations are characterized by smooth transitions of turns between interlocutors. For instance, speakers often respond to questions or requests within half a second. As planning the first word of an utterance can easily take a second or more, this suggests that utterance planning often overlaps with listening to the preceding speaker's utterance. A specific proposal concerning the temporal coordination of listening and speech planning has recently been made by Levinson and Torreira (2016, Frontiers in Psychology; Levinson, 2016, Trends in Cognitive Sciences). They propose that speakers initiate their speech planning as soon as they have understood the speech act and gist of the preceding utterance. However, direct evidence for simultaneous listening and speech planning is scarce. I will first review studies demonstrating that both comprehending spoken utterances and planning them require processing capacity and that these processes can substantially interfere with each other. These data suggest that concurrent speech planning and listening should be cognitively quite challenging. In the second part of the talk I will turn to studies examining directly when utterance planning in dialogue begins. These studies indicate that (regrettably) there are probably no hard-and-fast rules for the temporal coordination of listening and speech planning. I will argue that (regrettably again) we need models that are far more complex than Levinson and Torreira's proposal to understand how listening and speech planning are coordinated in conversation. -
Weber, K., Meyer, A. S., & Hagoort, P. (2016). The acquisition of verb-argument and verb-noun category biases in a novel word learning task. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
Abstract
We show that language users readily learn the probabilities of novel lexical cues to syntactic information (verbs biasing towards a prepositional object dative vs. double-object dative and words biasing towards a verb vs. noun reading) and use these biases in a subsequent production task. In a one-hour exposure phase participants read 12 novel lexical items, embedded in 30 sentence contexts each, in their native language. The items were either strongly (100%) biased towards one grammatical frame or syntactic category assignment or unbiased (50%). The next day participants produced sentences with the newly learned lexical items. They were given the sentence beginning up to the novel lexical item. Their output showed that they were highly sensitive to the biases introduced in the exposure phase.
Given this rapid learning and use of novel lexical cues, this paradigm opens up new avenues to test sentence processing theories. Thus, with close control on the biases participants are acquiring, competition between different frames or category assignments can be investigated using reaction times or neuroimaging methods.
Generally, these results show that language users adapt to the statistics of the linguistic input, even to subtle lexically-driven cues to syntactic information. -
Hintz, F., Meyer, A. S., & Huettig, F. (2015). Context-dependent employment of mechanisms in anticipatory language processing. Talk presented at the 15th NVP Winter Conference. Egmond aan Zee, The Netherlands. 2015-12-17 - 2015-12-19.
-
Hintz, F., Meyer, A. S., & Huettig, F. (2015). Doing a production task encourages prediction: Evidence from interleaved object naming and sentence reading. Poster presented at the 28th Annual CUNY Conference on Human Sentence Processing, Los Angeles (CA, USA).
Abstract
Prominent theories of predictive language processing assume that language production processes are used to anticipate upcoming linguistic input during comprehension (Dell & Chang, 2014; Pickering & Garrod, 2013). Here, we explored the converse case: Does a task set including production in addition to comprehension encourage prediction, compared to a task only including comprehension? To test this hypothesis, participants carried out a cross-modal naming task (Exp 1a), a self-paced reading task (Exp 1b) that did not include overt production, and a task (Exp 1c) in which naming and reading trials were evenly interleaved. We used the same predictable (N = 40) and non-predictable (N = 40) sentences in all three tasks. The sentences consisted of a fixed agent, a transitive verb and a predictable or non-predictable target word (The man breaks a glass vs. The man borrows a glass). The mean cloze probability in the predictable sentences was .39 (ranging from .06 to .8; zero in the non-predictable sentences). A total of 162 volunteers took part in the experiment, which was run in a between-participants design. In Exp 1a, fifty-four participants listened to recordings of the sentences, which ended right before the spoken target word. Coinciding with the end of the playback, a picture of the target word was shown, which the participants were asked to name as fast as possible. Analyses of their naming latencies revealed a statistically significant naming advantage of 108 ms on predictable over non-predictable trials. Moreover, we found that the objects’ naming advantage was predicted by the target words’ cloze probability in the sentences (r = .347, p = .038). In Exp 1b, 54 participants were asked to read the same sentences in a self-paced fashion. To allow for testing of potential spillover effects, we added a neutral prepositional phrase (breaks a glass from the collection/borrows a glass from the neighbor) to each sentence. The sentences were read word-by-word, advancing by pushing the space bar. On 30% of the trials, comprehension questions were used to keep up participants' focus on comprehending the sentences. Analyses of their spillover region reading times revealed a numerical advantage (8 ms; tspillover = -1.1, n.s.) in the predictable as compared to the non-predictable condition. Importantly, the analysis of participants' responses to the comprehension questions showed that they understood the sentences (mean accuracy = 93%). In Exp 1c, the task comprised 50% naming trials and 50% reading trials, which appeared in random order. Fifty-four participants named and read the same objects and sentences as in the previous versions. The results showed a naming advantage on predictable over non-predictable items (99 ms) and a positive correlation between the items’ cloze probability and their naming advantage (r = .322, p = .055). Crucially, the post-target reading time analysis showed that with naming trials and reading trials interleaved, there was also a statistically reliable prediction effect on reading trials. Participants were 19 ms faster at reading the spillover region on predictable relative to non-predictable items (tspillover = -2.624). To summarize, although we used the same sentences in all sub-experiments, we observed effects of prediction only when the task set involved production.
In the reading-only experiment (Exp 1b), no evidence for anticipation was obtained, although participants clearly understood the sentences and the same sentences yielded reading facilitation when interleaved with naming trials (Exp 1c). This suggests that predictive language processing can be modulated by the comprehenders’ task set. When the task set involves language production, as is often the case in natural conversation, comprehenders appear to engage in prediction to a stronger degree than in pure comprehension tasks. We will discuss the notion that language production may engage prediction because being able to predict the words another person is about to say might optimize the comprehension process and enable smooth turn-taking. -
Hintz, F., Meyer, A. S., & Huettig, F. (2015). Event knowledge and word associations jointly influence predictive processing during discourse comprehension. Poster presented at the 28th Annual CUNY Conference on Human Sentence Processing, Los Angeles (CA, USA).
Abstract
A substantial body of literature has shown that readers and listeners often anticipate information. An open question concerns the mechanisms underlying predictive language processing. Multiple mechanisms have been suggested. One proposal is that comprehenders use event knowledge to predict upcoming words. Other theoretical frameworks propose that predictions are made based on simple word associations. In a recent EEG study, Metusalem and colleagues reported evidence for the modulating influence of event knowledge on prediction. They examined the degree to which event knowledge is activated during sentence comprehension. Their participants read two sentences, establishing an event scenario, which were followed by a final sentence containing one of three target words: a highly expected word, a semantically unexpected word that was related to the described event, or a semantically unexpected and event-unrelated word (see Figure, for an example). Analyses of participants’ ERPs elicited by the target words revealed a three-way split with regard to the amplitude of the N400 elicited by the different types of target: the expected targets elicited the smallest N400, the unexpected and event-unrelated targets elicited the largest N400. Importantly, the amplitude of the N400 elicited by the unexpected but event-related targets was significantly attenuated relative to the amplitude of the N400 elicited by the unexpected and event-unrelated targets. Metusalem et al. concluded that event knowledge is immediately available to constrain on-line language processing. Based on a post-hoc analysis, the authors rejected the possibility that the results could be explained by simple word associations. In the present study, we addressed the role of simple word associations in discourse comprehension more directly. Specifically, we explored the contribution of associative priming to the graded N400 pattern seen in Metusalem et al’s study. We conducted two EEG experiments. In Experiment 1, we reran Metusalem and colleagues’ context manipulation and closely replicated their results. In Experiment 2, we selected two words from the event-establishing sentences which were most strongly associated with the unexpected but event-related targets in the final sentences. Each of the two associates was then placed in a neutral carrier sentence. We controlled that none of the other words in these carrier sentences was associatively related to the target words. Importantly, the two carrier sentences did not build up a coherent event. We recorded EEG while participants read the carrier sentences followed by the same final sentences as in Experiment 1. The results showed that as in Experiment 1 the amplitude of the N400 elicited by both types of unexpected target words was larger than the N400 elicited by the highly expected target. Moreover, we found a global tendency towards the critical difference between event-related and event-unrelated unexpected targets which reached statistical significance only at parietal electrodes over the right hemisphere. Because the difference between event-related and event-unrelated conditions was larger when the sentences formed a coherent event compared to when they did not, our results suggest that associative priming alone cannot account for the N400 pattern observed in our Experiment 1 (and in the study by Metusalem et al.). 
However, because part of the effect remained, probably due to associative facilitation, the findings demonstrate that during discourse reading both event knowledge activation and simple word associations jointly contribute to the prediction process. The results highlight that multiple mechanisms underlie predictive language processing. -
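As a rough illustration of how N400 condition differences such as those in the preceding abstract are typically quantified, the sketch below averages the EEG signal in a post-stimulus window per condition. The 300-500 ms window, the channel selection, and all variable names are assumptions for illustration; the abstract does not specify the analysis parameters.

    import numpy as np

    # Hypothetical single-trial epochs: trials x channels x samples,
    # time-locked to target word onset, sampled at 500 Hz (placeholder data).
    sfreq = 500
    times = np.arange(-0.2, 0.8, 1 / sfreq)
    epochs = {cond: np.random.randn(40, 32, times.size)
              for cond in ("expected", "event_related", "event_unrelated")}

    # Mean amplitude in an assumed N400 window (300-500 ms) over hypothetical
    # centro-parietal channel indices.
    window = (times >= 0.3) & (times <= 0.5)
    centro_parietal = [10, 11, 12, 20, 21]

    for cond, data in epochs.items():
        mean_amp = data[:, centro_parietal, :][:, :, window].mean()
        print(f"{cond}: mean amplitude in the N400 window = {mean_amp:.3f}")

In an actual analysis, per-trial or per-participant window means of this kind would then be compared across the three target types.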
Jongman, S. R., Roelofs, A., Meyer, A. S., & Scheper, A. (2015). Sustained attention ability affects language production performance in typical and developmentally impaired children. Poster presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015), Paphos, Cyprus.
-
Jongman, S. R., Roelofs, A., Meyer, A. S., & Scheper, A. (2015). Sustained attention ability affects language production performance in typical and developmentally impaired children. Poster presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015), Valletta (Malta).
-
Meyer, A. S., Shao, Z., & Van Paridon, J. (2015). Producing complex noun phrases: The roles of word and phrase frequency. Talk presented at the Psychonomic Society's 56th Annual Meeting. Chicago, USA. 2015-11-19 - 2015-11-22.
Abstract
Janssen and Barber (2012) reported two studies on the production of complex noun phrases (Spanish and French noun-adjective and noun-noun phrases). They found that production latencies depended only on the frequencies of the phrases, but not on the frequencies of the individual words. This pattern may be seen as evidence for lexical storage of phrases and against the traditional “words & rules” view of the representation of linguistic knowledge. We will discuss a series of experiments on the production of Dutch adjective-noun phrases. We replicated the phrase frequency effect seen by Janssen and Barber, but also found a robust effect of the frequency of the first word of the phrase. We argue that the phrase frequency effect arises during the conceptual preparation of the utterance and that the results are consistent with the view that phrases are composed by combining individual words, in line with the “words & rules” view. -
Moers, C., Janse, E., & Meyer, A. S. (2015). Probabilistic reduction in reading aloud: A comparison of younger and older adults. Poster presented at the 18th International Congress of Phonetic Sciences (ICPhS 2015), Glasgow, Scotland, UK.
Abstract
Frequent and predictable words are generally pronounced with less effort and are therefore acoustically more reduced than less frequent or unpredictable words. Local predictability can be operationalised by Transitional Probability (TP), which indicates how likely a word is to occur given its immediate context. We investigated whether and how probabilistic reduction effects on word durations change with adult age when reading aloud content words embedded in sentences.
The results showed equally large frequency effects on verb and noun durations for both younger (mean age = 20 years) and older (mean age = 68 years) adults. Backward TP also affected word duration for younger and older adults alike. Forward TP, however, had no significant effect on word duration in either age group.
Our results resemble earlier findings of more robust Backward TP effects compared to Forward TP effects. Furthermore, unlike the often-reported decline in predictive processing with aging, probabilistic reduction effects remained stable across adulthood. -
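As a rough illustration of how forward and backward TP are commonly estimated from corpus bigram counts (the abstract does not spell out the exact operationalisation, so this is an assumption):

    \mathrm{TP}_{\text{forward}}(w_i \mid w_{i-1}) = \frac{C(w_{i-1}\, w_i)}{C(w_{i-1})},
    \qquad
    \mathrm{TP}_{\text{backward}}(w_i \mid w_{i+1}) = \frac{C(w_i\, w_{i+1})}{C(w_{i+1})}

where C(·) denotes the corpus frequency of the bigram or of the single word. Forward TP thus conditions a word on its preceding context, backward TP on its following context.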
Rommers, J., Meyer, A. S., & Praamstra, P. (2015). Tracking double-object naming using the N2pc. Poster presented at the Seventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2015), Chicago, IL.
-
Shao, Z., & Meyer, A. S. (2015). Word and phrase frequency effects in noun phrase production. Talk presented at the Meeting at the Experimental Psychology Society 2015. London, UK. 2015-01-08.
-
Sjerps, M. J., & Meyer, A. S. (2015). The initiation of speech planning in turn-taking. Poster presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015), Malta.
-
Tromp, J., Peeters, D., Hagoort, P., & Meyer, A. S. (2015). Combining EEG and virtual reality: The N400 in a virtual environment. Talk presented at the 4th edition of the Donders Discussions (DD, 2015). Nijmegen, Netherlands. 2015-11-05 - 2015-11-06.
Abstract
A recurring criticism in the field of psycholinguistics is the lack of ecological validity of experimental designs. For example, many experiments on sentence comprehension are conducted in enclosed booths, where sentences are presented word by word on a computer screen. In addition, participants are very often instructed to make judgments that relate directly to the experimental manipulation. Thus, the contexts in which these processes are studied are quite restricted, which calls into question the generalizability of the results to more naturalistic environments. A possible solution to this problem is the use of virtual reality (VR) in psycholinguistic experiments. By immersing participants in a virtual environment, ecological validity can be increased while experimental control is maintained.
In the current experiment, we combine electroencephalography (EEG) and VR to study semantic processing in a more naturalistic setting. During the experiment, participants move through a visually rich virtual restaurant. Tables and avatars are placed in the restaurant, and participants are instructed to stop at each table and look at the object (e.g. a plate with a steak) in front of the avatar. The avatar then produces an utterance to accompany the object (e.g. “I think this steak is very nice”), in which the noun either matches (e.g. steak) or mismatches (e.g. mandarin) the item on the table. Based on previous research, we predict a modulation of the N400, which should be larger in the mismatch than in the match condition. Implications of the use of virtual reality for experimental research will be discussed. -
Tromp, J., Hagoort, P., & Meyer, A. S. (2015). Indirect request comprehension requires additional processing effort: A pupillometry study. Poster presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015), Paphos, Cyprus.
-
Tromp, J., Meyer, A. S., & Hagoort, P. (2015). Pupillometry reveals increased processing demands for indirect request comprehension. Poster presented at the 14th International Pragmatics Conference, Antwerp, Belgium.
Abstract
Fluctuations in pupil size have been shown to reflect variations in processing demands during language comprehension. Increases in pupil diameter have been observed as a consequence of syntactic anomalies (Schluroff 1982), increased syntactic complexity (Just & Carpenter 1993) and lexical ambiguity (Ben-Nun 1986). An issue that has not received attention is whether pupil size also varies due to pragmatic manipulations. In a pupillometry experiment, we investigated whether pupil diameter is sensitive to increased processing demands as a result of comprehending an indirect request versus a statement. During natural conversation, communication is often indirect. For example, in an appropriate context, "It's cold in here" is a request to shut the window, rather than a statement about room temperature (Holtgraves 1994). We tested 49 Dutch participants (mean age = 20.8). They were presented with 120 picture-sentence combinations that could either be interpreted as an indirect request (a picture of a window with the sentence "it's hot here") or as a statement (a picture of a window with the sentence "it's nice here"). The indirect requests were non-conventional, i.e. they did not contain directive propositional content and were not directly related to the underlying felicity conditions (Holtgraves 2002). In order to verify that the indirect requests were recognized, participants were asked to decide after each combination whether or not they had heard a request. Based on the hypothesis that understanding this type of indirect utterance requires additional inferences on the part of the listener (e.g., Holtgraves 2002; Searle 1975; Van Ackeren et al. 2012), we predicted a larger pupil diameter for indirect requests than for statements. The data were analyzed using linear mixed-effects models in R, which allow for simultaneous inclusion of participants and items as random factors (Baayen, Davidson, & Bates 2008). The results revealed a larger mean pupil size and a larger peak pupil size for indirect requests as compared to statements. In line with previous studies on pupil size and language comprehension (e.g., Just & Carpenter 1993), this difference was observed within a 1.5 second window after critical word onset. We suggest that the increase in pupil size reflects additional on-line processing demands for the comprehension of non-conventional indirect requests as compared to statements. This supports the idea that comprehending this type of indirect request requires capacity-demanding inferencing on the part of the listener. In addition, this study demonstrates the usefulness of pupillometry as a tool for experimental research in pragmatics. -
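As an illustration of the two dependent measures described in the preceding abstract (mean and peak pupil size within 1.5 s after critical word onset), here is a minimal sketch assuming baseline-corrected pupil traces sampled at 60 Hz; the sampling rate, the data, and all variable names are hypothetical.

    import numpy as np

    # Hypothetical pupil traces: trials x samples, baseline-corrected,
    # time-locked to critical word onset, sampled at 60 Hz (placeholder data).
    sfreq = 60
    times = np.arange(0, 3.0, 1 / sfreq)
    pupil = np.random.randn(120, times.size) * 0.05

    # Analysis window: 0-1.5 s after critical word onset, as in the abstract.
    window = times <= 1.5
    mean_pupil = pupil[:, window].mean(axis=1)   # mean pupil size per trial
    peak_pupil = pupil[:, window].max(axis=1)    # peak pupil size per trial

    print(mean_pupil.mean(), peak_pupil.mean())

Per-trial measures of this kind would then enter a mixed-effects analysis with participants and items as random factors, as the abstract describes.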
Tromp, J., Meyer, A. S., & Hagoort, P. (2015). Pupillometry reveals increased processing demands for indirect request comprehension. Poster presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015), Valetta, Malta.
-
Acheson, D. J., Veenstra, A., Meyer, A. S., & Hagoort, P. (2014). EEG pattern classification of semantic and syntactic Influences on subject-verb agreement in production. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
Subject-verb agreement is one of the most common grammatical encoding operations in language production. In many languages, morphological inflection on verbs codes for the number of the head noun of a subject phrase (e.g., The key to the cabinets is rusty). Despite the relative ease with which subject-verb agreement is accomplished, people sometimes make agreement errors (e.g., The key to the cabinets are rusty). Such errors offer a window into the early stages of production planning. Agreement errors are influenced by both syntactic and semantic factors, and are more likely to occur when a sentence contains either conceptual or syntactic number mismatches. Little is known about the timecourse of these influences, however, and some controversy exists as to whether they are independent. The current study was designed to address these two issues using EEG. Semantic and syntactic factors influencing number mismatch were factorially manipulated in a forced-choice sentence completion paradigm. To avoid EEG artifact associated with speaking, participants (N=20) were presented with a noun phrase, and pressed a button to indicate which version of the verb ‘to be’ (is/are) should continue the sentence. Semantic number was manipulated using preambles that were semantically integrated or unintegrated. Semantic integration refers to the semantic relationship between nouns in a noun phrase, with integrated items promoting conceptual singularity. The syntactic manipulation was the number (singular/plural) of the local noun preceding the decision. This led to preambles such as “The pizza with the yummy topping(s)...” (integrated) vs. “The pizza with the tasty beverage(s)...” (unintegrated). Behavioral results showed effects of both Local Noun Number and Semantic Integration, with more errors and longer reaction times occurring in the mismatching conditions (i.e., plural local nouns; unintegrated subject phrases). Classic ERP analyses locked to the local noun (0-700 ms) and to the time preceding the response (-600 to 0 ms) showed no systematic differences between conditions. Despite this result, we assessed whether differences might emerge using multivariate pattern analysis (MVPA). Using the same epochs as above, support-vector machines with a radial basis function were trained at the single-trial level to classify the difference between Local Noun Number and Semantic Integration conditions across time and channels. Results revealed that both conditions could be reliably classified at the single-subject level, and that classification accuracy was strongest in the epoch preceding the response. Classification accuracy was at chance when a classifier trained to dissociate Local Noun Number was used to predict Semantic Integration (and vice versa), providing some evidence of the independence of the two effects. Significant inter-subject variability was present in the channels and time points that were critical for classification, but earlier time points were more often important for classifying Local Noun Number than Semantic Integration. One consequence of this variability is that classification performed across subjects was at chance, which may explain the failure to find standard ERP effects. This study thus provides an important first test of semantic and syntactic influences on subject-verb agreement with EEG, and demonstrates that where classic ERP analyses fail, MVPA can reliably distinguish differences at the neurophysiological level. -
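A minimal sketch of the kind of single-trial, single-subject classification described in the preceding abstract, using an RBF-kernel support-vector machine; the features, cross-validation scheme, hyperparameters, and data shapes below are assumptions for illustration, not the study's actual pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    # Hypothetical single-subject epochs: trials x channels x samples,
    # with binary labels (e.g., singular vs. plural local noun).
    rng = np.random.default_rng(0)
    X_epochs = rng.standard_normal((200, 64, 350))
    y = rng.integers(0, 2, size=200)

    # Flatten channels x time into one feature vector per trial.
    X = X_epochs.reshape(len(X_epochs), -1)

    # RBF-kernel SVM on standardized features, evaluated by stratified CV.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    scores = cross_val_score(clf, X, y,
                             cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
    print(f"Mean decoding accuracy: {scores.mean():.2f}")

The cross-decoding check mentioned in the abstract would amount to fitting the same pipeline on labels for one manipulation (e.g., Local Noun Number) and scoring it on labels for the other (Semantic Integration).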
Hintz, F., Meyer, A. S., & Huettig, F. (2014). Mechanisms underlying predictive language processing. Talk presented at the 56. Tagung experimentell arbeitender Psychologen [TeaP, Conference on Experimental Psychology]. Giessen, Germany. 2014-03-31 - 2014-04-02.
-
Hintz, F., Meyer, A. S., & Huettig, F. (2014). Prediction using production or production engaging prediction?. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh (UK).
Abstract
Prominent theories of predictive language processing assume that language production processes are used to anticipate upcoming linguistic input during comprehension (Dell & Chang, 2014; Pickering & Garrod, 2013). Here, we explore the converse case: Does a task set including production in addition to comprehension encourage prediction, compared to a task only including comprehension? To test this hypothesis, we conducted a cross-modal naming experiment (Experiment 1) including an object naming task and a self-paced reading experiment (Experiment 2) that did not include overt production. We used the same predictable (N = 40) and non-predictable (N = 40) sentences in both experiments. The sentences consisted of a fixed agent, a transitive verb and a predictable or non-predictable target word (The man drinks a beer vs. The man buys a beer). Most of the empirical work on prediction used sentences in which the target words were highly predictable (often with a mean cloze probability > .8), and thus it is hardly surprising that participants engaged in predictive language processing very easily. In the current sentences, the mean cloze probability in the predictable sentences was .39 (ranging from .06 to .8; zero in the non-predictable sentences). If comprehenders are more likely to engage in predictive processing when the task set involves production, we should observe more pronounced effects of prediction in Experiment 1 as compared to Experiment 2. If production does not enhance prediction, we should observe similar effects of prediction in both experiments. In Experiment 1, participants (N = 54) listened to recordings of the sentences which ended right before the spoken target word. Coinciding with the end of the playback, a picture depicting the target word was shown, which participants were asked to name as fast as possible. Analyses of their naming latencies revealed a statistically significant naming advantage of 106 ms on predictable over non-predictable trials. Moreover, we found that the objects’ naming advantage was predicted by the target words’ cloze probability in the sentences (r = .411, p = .016). In Experiment 2, the same sentences were used in a self-paced reading experiment. To allow for testing of potential spill-over effects, we added a neutral prepositional phrase (buys a beer from the bar keeper/drinks a beer from the shop) to each sentence. Participants (N = 54) read the sentences word-by-word, advancing by pushing the space bar. On 30% of the trials, comprehension questions were used to maintain participants' focus on comprehending the sentences. Analyses of participants’ target and post-target reading times revealed numerical advantages of 6 ms and 20 ms, respectively, in the predictable as compared to the non-predictable condition. However, in both cases the difference was not statistically reliable (t = .757, t = 1.43), and the significant positive correlation between an item’s naming advantage and its cloze probability seen in Experiment 1 was absent (r = .037, p = .822). Importantly, the analysis of participants' responses to the comprehension questions showed that they understood the sentences (mean accuracy = 93%). To conclude, although both experiments used the same sentences, we observed effects of prediction only when the task included production. 
In Experiment 2, no evidence for anticipation was found, although participants clearly understood the sentences and the method has previously been shown to be sensitive enough to measure prediction effects (Van Berkum et al., 2005). Our results fit with a recent study by Gollan et al. (2011), who found only a small processing advantage of predictive over non-predictive sentences in reading (using highly predictable sentences with a cloze probability > .87) but a strong prediction effect when participants read the same sentences and carried out an additional object naming task (see also Griffin & Bock, 1998). Taken together, the studies suggest that the comprehenders' task set exerts a powerful influence on the likelihood and magnitude of predictive language processing. When the task set involves language production, as is often the case in natural conversation, comprehenders might engage in prediction to a stronger degree than in pure comprehension tasks. Being able to predict words another person is about to say might optimize the comprehension process and enable smooth turn-taking. -
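As a minimal sketch of the item-level correlation reported in the preceding abstract (per-item naming advantage against cloze probability), assuming per-item mean naming latencies for the predictable and non-predictable versions; the data below are placeholders generated under a toy assumption that the advantage scales with cloze.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-item data for the 40 predictable sentences.
    rng = np.random.default_rng(1)
    cloze = rng.uniform(0.06, 0.80, size=40)             # cloze probability of the target
    rt_nonpredictable = rng.normal(900, 60, size=40)      # mean naming latency (ms)
    rt_predictable = rt_nonpredictable - 150 * cloze      # toy assumption: advantage scales with cloze

    naming_advantage = rt_nonpredictable - rt_predictable
    r, p = pearsonr(naming_advantage, cloze)
    print(f"r = {r:.3f}, p = {p:.3f}")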
Hintz, F., Meyer, A. S., & Huettig, F. (2014). The influence of verb-specific featural restrictions, word associations, and production-based mechanisms on language-mediated anticipatory eye movements. Talk presented at the 27th annual CUNY conference on human sentence processing. Ohio State University, Columbus/Ohio (US). 2014-03-13 - 2014-03-15.
-
Jongman, S. R., Roelofs, A., & Meyer, A. S. (2014). Sustained attention in language production: An individual differences approach. Talk presented at the Experimental Psychology Society (EPS). Kent, England. 2014-04-15 - 2014-04-17.
-
Katzberg, D., Belke, E., Wrede, B., Ernst, J., Berwe, T., & Meyer, A. S. (2014). AUDIOMAX: A software using an automatic speech recognition system for fast and accurate temporal analyses of word onsets in spoken utterances. Poster presented at the International Workshop on Language Production 2014, Geneva.
-
Moers, C., Meyer, A. S., & Janse, E. (2014). Effects of local predictability on eye fixation behavior in silent and oral reading for younger and older adults. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
-
Moers, C., Janse, E., & Meyer, A. S. (2014). Effects of local predictability on word durations and fixation rates in younger and older adults. Talk presented at Psycholinguistics in Flanders 2014 (PiF 2014). Ostend, Belgium. 2014-05-08 - 2014-05-09.
-
Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2014). Listeners recognize others’ speech better than their own. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
-
Veenstra, A., Acheson, D. J., & Meyer, A. S. (2014). Parallel planning and attraction in the production of subject-verb agreement. Poster presented at the International Workshop on Language Production 2014, Geneva.
-
Gerakaki, S., Sjerps, M. J., & Meyer, A. S. (2013). Planning speech affects memory of heard words. Poster presented at the 12th Psycholinguistics in Flanders Conference, Leuven, Belgium.
-
Hintz, F., & Meyer, A. S. (2013). Prediction and production of simple mathematical equations. Poster presented at the 18th Conference of the European Society for Cognitive Psychology (ESCOP 2013), Budapest, Hungary.
Abstract
An important issue in current psycholinguistics is the relationship between the production and comprehension systems. It has been argued that these systems are tightly linked, and that, in particular, listeners use the speech production system to predict upcoming content. We tested this view using a novel version of the visual world paradigm. Participants heard mathematical equations and looked at a clock face showing the numbers 1 to 12. On alternating trials they either heard a complete equation (3+8=11) or they heard the first part (3+8) and had to produce the solution (11, target hereafter) themselves. Participants were encouraged to look at the relevant numbers throughout the trial. On listening trials, the participants typically looked at the target before the onset of the target's name, and on speaking trials they typically looked at the target before naming it. However, the timing of the looks to the targets was slightly different, with participants looking earlier at the target when they had to speak themselves than when they listened. This suggests that predicting during listening and planning to speak are indeed very similar but not identical. Further methodological and theoretical consequences of the study will be discussed.