-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Dual-tasking in language: Concurrent production and comprehension interfere at the phonological level. Poster presented at Psycholinguistics in Flanders (PiF 2017), Leuven, Belgium.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Dual-tasking in language: Concurrent production and comprehension interfere at the phonological level. Poster presented at the Experimental Psychology Society Belfast Meeting, Belfast, UK.
Abstract
Conversation often involves simultaneous production and comprehension, yet little research has investigated whether these two processes interfere with one another. We tested participants’ ability to dual-task with production and comprehension tasks. Task one (production task) was picture naming. Task two (comprehension task) was either syllable identification (linguistic condition) or tone identification (non-linguistic condition). The two identification tasks were matched for difficulty. Three SOAs (50ms, 300ms, and 1800ms) resulted in different amounts of overlap between the production and comprehension tasks. We hypothesized that, as production and comprehension use similar resources, there would be greater interference with concurrent linguistic than with non-linguistic tasks.
At the 50ms SOA, picture naming latencies were slower in the linguistic compared to the non-linguistic condition, suggesting that the resources required for production and comprehension overlap more in the linguistic condition. As the syllables were non-words without lexical representations, this interference likely occurs primarily at the phonological level. Across all SOAs, identification RTs were longer in the linguistic condition, showing that such phonological interference percolates through to the comprehension task, regardless of SOA. In sum, these results demonstrate that concurrent access to the phonological level in production and comprehension results in measurable interference in both speaking and comprehending.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Fairs, A., Bögels, S., & Meyer, A. S. (2017). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Hoedemaker, R. S., & Meyer, A. S. (2017). Coordination and preparation of utterances in a joint-naming task. Talk presented at the Experimental Psychology Society Belfast Meeting. Belfast, UK. 2017-04-10 - 2017-04-12.
-
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2017). “That’s a spatelhouder!”: How source memory is influenced by speakers’ social categories in a word-learning paradigm. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2017). Speakers' social identity affects source memory for novel words. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Jongman, S. R., Meyer, A. S., & Piai, V. (2017). Brain signature of planning for production: An EEG study. Talk presented at the Workshop 'Revising formal semantic and pragmatic theories from a neurocognitive perspective'. Bochum, Germany. 2017-06-19 - 2017-06-20.
-
Jongman, S. R., & Meyer, A. S. (2017). Simultaneous listening and planning for production: Full or partial comprehension? Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). When slow speech sounds fast: How the speech rate of one talker influences perception of another talker. Talk presented at the IPS workshop: Abstraction, Diversity, and Speech Dynamics. Herrsching am Ammersee, Germany. 2017-05-03 - 2017-05-05.
Abstract
Listeners are continuously exposed to a broad range of speech rates. Earlier work has shown that listeners perceive phonetic category boundaries relative to contextual speech rate. This process of rate-dependent speech perception has been suggested to occur across talker changes, with the speech rate of talker A influencing perception of talker B. This study tested whether a ‘global’ speech rate calculated over multiple talkers and over a longer period of time affected perception of the temporal Dutch vowel contrast /ɑ/-/a:/. First, Experiment 1 demonstrated that listeners more often reported hearing long /a:/ in fast contexts than in ‘neutral rate’ contexts, replicating earlier findings. Then, in Experiment 2, one participant group was exposed to ‘neutral’ speech from talker A intermixed with slow speech from talker B. Another group listened to the same ‘neutral’ speech from talker A, but to fast speech from talker B. Between-group comparison in the ‘neutral’ condition revealed that Group 1 reported more long /a:/ than Group 2, indicating that A’s ‘neutral’ speech sounded faster when B was slower. Finally, Experiment 3 tested whether talking at slow or fast rates oneself elicits the same ‘global’ rate effects. However, no evidence was found that self-produced speech modulated perception of talker A. This study corroborates the idea that ‘global’ rate-dependent effects occur across talkers, but are insensitive to one’s own speech rate. Results are interpreted in light of the general auditory mechanisms thought to underlie rate normalization, with implications for our understanding of dialogue.
Additional information
http://www.phonetik.uni-muenchen.de/institut/veranstaltungen/abstraction-divers…
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech rate affects perception depends on who is talking. Poster presented at the Donders Poster Sessions, Nijmegen, The Netherlands.
Abstract
Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to ‘neutral’ rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the ‘neutral’ trials revealed that the low-rate group reported a higher proportion of /a:/ in A’s ‘neutral’ speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one’s own speech rate also contributes to effects of long-term tracking of rate. Here, talker B’s speech was replaced by playback of participants’ own fast or slow speech. No evidence was found that one’s own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue.
-
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech rate affects perception depends on who is talking. Poster presented at Interspeech 2017, Stockholm, Sweden.
Abstract
Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to 'neutral' rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the 'neutral' trials revealed that the low-rate group reported a higher proportion of /a:/ in A's 'neutral' speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one's own speech rate also contributes to effects of long-term tracking of rate. Here, talker B's speech was replaced by playback of participants' own fast or slow speech. No evidence was found that one's own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue.
-
Meyer, A. S., Decuyper, C., & Coopmans, C. W. (2017). Distribution of attention in question-answer sequences: Evidence for limited parallel processing. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2017-01-03 - 2017-01-06.
-
Meyer, A. S. (2017). Towards understanding conversation: A psycholinguist's perspective. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). Compositional structure can emerge without generational transmission. Talk presented at the Inaugural Cultural Evolution Society Conference (CESC 2017). Jena, Germany. 2017-09-13 - 2017-09-15.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). Compositional structure can emerge without generational transmission. Talk presented at the 30th Annual CUNY Conference on Human Sentence Processing. Cambridge, MA, USA. 2017-03-30 - 2017-04-01.
-
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2017). The role of community size in the emergence of linguistic structure. Talk presented at XLanS: Triggers of language change in the Language Sciences. Lyon, France. 2017-10-11 - 2017-10-13.
-
Rodd, J., Bosker, H. R., Ernestus, M., Ten Bosch, L., & Meyer, A. S. (2017). How we regulate speech rate: Phonetic evidence for a 'gain strategy' in speech planning. Poster presented at the Abstraction, Diversity and Speech Dynamics Workshop, Herrsching, Germany.
-
Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2017). Simulating speaking rate control: A spreading activation model of syllable timing. Poster presented at the Workshop Conversational speech and lexical representations, Nijmegen, The Netherlands.
Abstract
Speech can be produced at different rates. The ability to produce faster or slower speech may be thought to result from executive control processes enlisted to modulate lexical selection and phonological encoding stages of speech planning.
This study used simulations of the model of serial order in language by Dell, Burger and Svec (1997, DBS) to characterise the strategies adopted by speakers when naming pictures at fast, medium and slow prescribed rates. Our new implementation of DBS was able to produce activation patterns that correlated strongly with observed syllable-level timing of disyllabic words from this task.
For each participant, different speaking rates were associated with different regions of the DBS parameter space. The precise placement of the speaking rates in the parameter space differed markedly between participants. Participants applied broadly the same parameter manipulation to accelerate their speech. This was, however, not the case for deceleration. Hierarchical clustering revealed two distinct patterns of parameter adjustment employed to decelerate speech, suggesting that deceleration is not necessarily achieved by the inverse process of acceleration. In addition, potential refinements to the DBS model are discussed.
-
Shao, Z., & Meyer, A. S. (2017). How word and phrase frequencies affect noun phrase production. Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
-
Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2017). Combining Virtual Reality and EEG to study semantic and pragmatic processing in a naturalistic environment. Talk presented at the workshop 'Revising formal Semantic and Pragmatic theories from a Neurocognitive Perspective' (NeuroPragSem, 2017). Bochum, Germany. 2017-06-19 - 2017-06-20.
-
Van Paridon, J., Roelofs, A., & Meyer, A. S. (2017). Coordinating simultaneous comprehension and production: Behavioral and modelling findings from shadowing and simultaneous interpreting. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2017), Lancaster, UK.
-
Weber, K., Meyer, A. S., & Hagoort, P. (2017). Learning lexical-syntactic biases: An fMRI study on how we connect words and structures. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
-
Zormpa, E., Hoedemaker, R. S., Brehm, L., & Meyer, A. S. (2017). The production and generation effect in picture naming: How lexical access and articulation influence memory. Poster presented at the Donders Poster Sessions, Nijmegen, The Netherlands.
-
Hintz, F., Meyer, A. S., & Huettig, F. (2012). Looking at nothing facilitates memory retrieval. Poster presented at Donders Discussions 2012, Nijmegen, The Netherlands.
Abstract
When processing visual objects, we integrate visual, linguistic and spatial information to form an episodic trace. Re-activating one aspect of the episodic trace of an object re-activates the entire bundle, making all integrated information available. Using the blank screen paradigm [1], researchers observed that upon processing spoken linguistic input, participants tended to make eye movements on a blank screen, fixating locations that had previously been occupied by objects mentioned in, or related to, the linguistic utterance. Ferreira and colleagues [2] suggested that 'looking at nothing' facilitates memory retrieval. However, this claim lacks convincing empirical support. In Experiment 1, Dutch participants looked at four-object displays. Three objects were related to a spoken target word. Given the target word 'beker' (beaker), the display featured a phonological competitor (a bear), a shape competitor (a bobbin), a semantic competitor (a fork), and an unrelated distractor (an umbrella). Participants were asked to name the objects as fast as possible. Subsequently, the objects disappeared. Participants fixated the center of the screen and listened to the target word. They had to carry out a semantic judgment task (indicating the position where the object semantically related to the target had appeared) or a visual shape similarity judgment (indicating the position of the object similar in shape to the target). In both conditions, we observed that participants re-fixated the empty target location before responding. The set-up of Experiment 2 was identical except that we asked participants to keep fixating the center of the screen while listening to the spoken word and responding. Performance accuracy was significantly lower in Experiment 2 than in Experiment 1. The results indicate that memory retrieval for objects is impaired when participants are not allowed to look at relevant, though empty, locations. [1] Altmann, G. (2004). 
Language-mediated eye movements in the absence of a visual world: The 'blank screen paradigm'. Cognition, 93(2), B79-B87. [2] Ferreira, F., Apel, J., & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends Cogn Sci, 12(11), 405-410.
-
Konopka, A. E., Van de Velde, M., & Meyer, A. S. (2012). Mapping “easy” and “hard” messages onto language: Conceptual and structural variables jointly affect the timecourse of sentence formulation. Poster presented at the 18th Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.
Abstract
Sentence formulation requires mapping pre-verbal messages onto linguistic structures. This message-to-language mapping is often evaluated in eye-tracking tasks where speakers describe pictured events (The dog chased the mailman). Speakers can begin sentence formulation by quickly selecting the first-fixated character as the sentential starting point (lexical incrementality), or by generating a rudimentary sentence plan based on their construal of the event gist before selecting a starting point (hierarchical incrementality; Kuchinsky & Bock, 2010). Lexical incrementality predicts fast divergence of fixations to the two characters within 200ms of picture onset, while hierarchical incrementality predicts slower divergence.
-
Lesage, E., Morgan, B., Olson, A., Meyer, A. S., & Miall, R. (2012). Disruption of right cerebellum with rTMS blocks predictive language processing. Poster presented at the 42nd annual meeting of the Society for Neuroscience [Neuroscience 2012] Poster# 379.07/UU5, New Orleans, LA.
Abstract
Much evidence demonstrates cerebellar involvement in language [1], but a theoretical framework about its precise role is lacking. In cerebellar motor control, an influential model ascribes the cerebellum a predictive role [2]. It has been argued that cerebellar nonmotor regions perform computations similar to those of motor regions, and that both are involved in online prediction [2]. We tested this hypothesis by administering repetitive transcranial magnetic stimulation (rTMS) to the right cerebellum, a region implicated in language [3], during a predictive language task.
Methods
Visual World task [4]: Participants' eye movements were recorded while they listened to sentences and looked at a computer display of an agent and 4 objects, one of which (the target) was mentioned in the sentence. In the Prediction condition the target could be predicted on the basis of the verb; on Control trials it could not. We hypothesised that rTMS to the right cerebellum should make target fixation slower in the Prediction condition, but not in the Control condition.
TMS protocol: TMS was delivered between two task blocks. In the cerebellar rTMS group (n = 22) the stimulation site was 1cm down and 3cm right of the inion. Participants received 10min of 1Hz rTMS. In addition, we tested two control groups. In the vertex rTMS group (n = 21), rTMS was applied at the same intensity, duration and frequency as in the cerebellar rTMS group, but over the vertex. In the no stimulation group (n = 22) the coil was placed over the cerebellar stimulation site but no pulses were delivered.
Results
As hypothesised, participants in the cerebellar rTMS group took longer to fixate the target after TMS in the Prediction condition but not in the Control condition (Block-by-Condition interaction: F(1,21) = 8.848, p = 0.007). This interaction was not found in either the vertex rTMS group (F(1,20) = 0.064, p = 0.802) or the no stimulation group (F(1,21) = 2.461, p = 0.132).
Conclusions
Here, we show that rTMS to the right cerebellum selectively affects linguistic prediction. These results provide additional evidence that the cerebellum plays a role in language and support theoretical accounts that the cerebellum contributes to nonmotor functions, as it does to motor functions, by online prediction.
References
1. Strick et al (2009). Cerebellum and nonmotor function. Annu Rev Neurosci, 32, 413-434. 2. Miall et al (1993). Is the cerebellum a Smith predictor? J Mot Behav, 25, 203-216. 3. Marien et al (2001). The lateralised linguistic cerebellum: A review and a new hypothesis. Brain and Language, 79, 580-600. 4. Altmann & Kamide (1999). Incremental interpretation at verbs. Cognition, 73, 247-264.
-
Meyer, A. S. (2012). What's in it for me? Applying adult speech production models to young learners. Talk presented at a workshop at the University of Leiden. Leiden, The Netherlands. 2012-12.
-
Moers, C., Meyer, A. S., & Janse, E. (2012). Effects of transitional probabilities on word durations in read speech of younger & older speakers. Talk presented at the Workshop Fluent Speech: Combining Cognitive and Educational Approaches, Utrecht Institute of Linguistics. Utrecht, The Netherlands. 2012-11-12 - 2012-11-13.
-
Reifegerste, J., & Meyer, A. S. (2012). The influence of age on the mental representation of polymorphemic words in Dutch. Talk presented at the Conference on Morphological Complexity. London, UK. 2012-01-13 - 2012-01-15.
-
Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). Object shape representations in the contents of predictions for upcoming words. Talk presented at Psycholinguistics in Flanders [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
-
Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). The content of predictions: Involvement of object shape representations in the anticipation of upcoming words. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
-
Rommers, J., Meyer, A. S., & Huettig, F. (2012). Predicting upcoming meaning involves specific contents and domain-general mechanisms. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.
Abstract
In sentence comprehension, readers and listeners often anticipate upcoming information (e.g., Altmann & Kamide, 1999). We investigated two aspects of this process, namely 1) what is pre-activated when anticipating an upcoming word (the contents of predictions), and 2) which cognitive mechanisms are involved. The contents of predictions at the level of meaning could be restricted to functional semantic attributes (e.g., edibility; Altmann & Kamide, 1999). However, when words are processed other types of information can also be activated, such as object shape representations. It is unknown whether this type of information is already activated when upcoming words are predicted. Forty-five adult participants listened to predictable words in sentence contexts (e.g., "In 1969 Neil Armstrong was the first man to set foot on the moon.") while looking at visual displays of four objects. Their eye movements were recorded. There were three conditions: target present (e.g., a moon and three distractor objects that were unrelated to the predictable word in terms of semantics, shape, and phonology), shape competitor (e.g., a tomato and three unrelated distractors), and distractors only (e.g., rice and three other unrelated objects). Across lists, the same pictures and sentences were used in the different conditions. We found that participants already showed a significant bias for the target object (moon) over unrelated distractors several seconds before the target was mentioned, demonstrating that they were predicting. Importantly, there was also a smaller but significant shape competitor (tomato) preference starting at about a second before critical word onset, consistent with predictions involving the referent’s shape. The mechanisms of predictions could be specific to language tasks, or language could use processing principles that are also used in other domains of cognition. 
We investigated whether performance in non-linguistic prediction is related to prediction in language processing, taking an individual differences approach. In addition to the language processing task, the participants performed a simple cueing task (after Posner, Nissen, & Ogden, 1978). They pressed one of two buttons (left/right) to indicate the location of an X symbol on the screen. On half of the trials, the X was preceded by a neutral cue (+). On the other half, an arrow cue pointing left (<) or right (>) indicated the upcoming X's location with 80% validity (i.e., the arrow cue was correct 80% of the time). The SOA between cue and target was 500 ms. Prediction was quantified as the mean response latency difference between the neutral and valid condition. This measure correlated positively with individual participants' anticipatory target and shape competitor preference (r = .27; r = .45), and was a significant predictor of anticipatory looks in linear mixed-effects regression models of the data. Participants who showed more facilitation from the arrow cues predicted to a higher degree in the linguistic task. This suggests that prediction in language processing may use mechanisms that are also used in other domains of cognition.
References
Altmann, G. T. M., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247-264. Posner, M. I., Nissen, M. J., & Ogden, W. C. (1978). Attended and unattended processing modes: The role of set for spatial location. In: H.L. Pick, & I.J. Saltzman (Eds.), Modes of perceiving and processing information. Hillsdale, N.J.: Lawrence Erlbaum Associates.
-
Sjerps, M. J., & Meyer, A. S. (2012). Variation in cognitive demands across turn-taking. Poster presented at the 7th International Workshop on Language Production (IWOLP 2012), New York, United States.
-
Van de Velde, M., Konopka, A. E., & Meyer, A. S. (2012). Relative clause processing: Linking clause frequency and reading experience. Poster presented at the 11th Psycholinguistics in Flanders Conference [PiF 2012], Nijmegen, The Netherlands.
-
Veenstra, A., Acheson, D. J., Bock, K., & Meyer, A. S. (2012). Conceptual and grammatical factors in the production of subject-verb agreement. Poster presented at the 7th International Workshop on Language Production (IWOLP 2012), New York, United States.
-
Veenstra, A., Acheson, D. J., & Meyer, A. S. (2012). Conceptual and grammatical factors in the production of subject-verb agreement. Talk presented at The 11th edition of the Psycholinguistics in Flanders conference (PiF). Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
-
Veenstra, A., Acheson, D. J., & Meyer, A. S. (2012). Life after the spoken preamble completion paradigm. Talk presented at the 33rd TABU Dag. Groningen, The Netherlands. 2012-06-18 - 2012-06-19.