Bai, F., Meyer, A. S., & Martin, A. E. (2022). Neural dynamics differentially encode phrases and sentences during spoken language comprehension. PLoS Biology, 20(7): e3001713. doi:10.1371/journal.pbio.3001713.
Abstract
Human language stands out in the natural world as a biological signal that uses a structured system to combine the meanings of small linguistic units (e.g., words) into larger constituents (e.g., phrases and sentences). However, the physical dynamics of speech (or sign) do not stand in a one-to-one relationship with the meanings listeners perceive. Instead, listeners infer meaning based on their knowledge of the language. The neural readouts of the perceptual and cognitive processes underlying these inferences are still poorly understood. In the present study, we used scalp electroencephalography (EEG) to compare the neural response to phrases (e.g., the red vase) and sentences (e.g., the vase is red), which were close in semantic meaning and had been synthesized to be physically indistinguishable. Differences in structure were well captured in the reorganization of neural phase responses in delta (approximately <2 Hz) and theta bands (approximately 2 to 7 Hz), and in power and power connectivity changes in the alpha band (approximately 7.5 to 13.5 Hz). Consistent with predictions from a computational model, sentences showed more power, more power connectivity, and more phase synchronization than phrases did. Theta–gamma phase–amplitude coupling occurred, but did not differ between the syntactic structures. Spectral–temporal response function (STRF) modeling revealed different encoding states for phrases and sentences, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain encodes and separates linguistic structures in the dynamics of neural responses. They imply that phase synchronization and strength of connectivity are readouts for the constituent structure of language.
The results provide a novel basis for future neurophysiological research on linguistic structure representation in the brain, and, together with our simulations, support time-based binding as a mechanism of structure encoding in neural dynamics. -
Bujok, R., Meyer, A. S., & Bosker, H. R. (2022). Visible lexical stress cues on the face do not influence audiovisual speech perception. In S. Frota, M. Cruz, & M. Vigário (Eds.), Proceedings of Speech Prosody 2022 (pp. 259-263). doi:10.21437/SpeechProsody.2022-53.
Abstract
Producing lexical stress leads to visible changes on the face, such as longer duration and greater size of the opening of the mouth. Research suggests that these visual cues alone can inform participants about which syllable carries stress (i.e., lip-reading silent videos). This study aims to determine the influence of visual articulatory cues on lexical stress perception in more naturalistic audiovisual settings. Participants were presented with seven disyllabic, Dutch minimal stress pairs (e.g., VOORnaam [first name] & voorNAAM [respectable]) in audio-only (phonetic lexical stress continua without video), video-only (lip-reading silent videos), and audiovisual trials (e.g., phonetic lexical stress continua with video of talker saying VOORnaam or voorNAAM). Categorization data from video-only trials revealed that participants could distinguish the minimal pairs above chance from seeing the silent videos alone. However, responses in the audiovisual condition did not differ from the audio-only condition. We thus conclude that visual lexical stress information on the face, while clearly perceivable, does not play a major role in audiovisual speech perception. This study demonstrates that clear unimodal effects do not always generalize to more naturalistic multimodal communication, advocating that speech prosody is best considered in multimodal settings. -
Corps, R. E., Knudsen, B., & Meyer, A. S. (2022). Overrated gaps: Inter-speaker gaps provide limited information about the timing of turns in conversation. Cognition, 223: 105037. doi:10.1016/j.cognition.2022.105037.
Abstract
Corpus analyses have shown that turn-taking in conversation is much faster than laboratory studies of speech planning would predict. To explain fast turn-taking, Levinson and Torreira (2015) proposed that speakers are highly proactive: They begin to plan a response to their interlocutor's turn as soon as they have understood its gist, and launch this planned response when the turn-end is imminent. Thus, fast turn-taking is possible because speakers use the time while their partner is talking to plan their own utterance. In the present study, we asked how much time upcoming speakers actually have to plan their utterances. Following earlier psycholinguistic work, we used transcripts of spoken conversations in Dutch, German, and English. These transcripts consisted of segments, which are continuous stretches of speech by one speaker. In the psycholinguistic and phonetic literature, such segments have often been used as proxies for turns. We found that in all three corpora, large proportions of the segments consisted of only one or two words, which on our estimate does not give the next speaker enough time to fully plan a response. Further analyses showed that speakers indeed often did not respond to the immediately preceding segment of their partner, but continued an earlier segment of their own. More generally, our findings suggest that speech segments derived from transcribed corpora do not necessarily correspond to turns, and the gaps between speech segments therefore only provide limited information about the planning and timing of turns. -
Creemers, A., & Meyer, A. S. (2022). The processing of ambiguous pronominal reference is sensitive to depth of processing. Glossa Psycholinguistics, 1(1): 3. doi:10.5070/G601166.
Abstract
Previous studies on the processing of ambiguous pronominal reference have led to contradictory results: some suggested that ambiguity may hinder processing (Stewart, Holler, & Kidd, 2007), while others showed an ambiguity advantage (Grant, Sloggett, & Dillon, 2020) similar to what has been reported for structural ambiguities. This study provides a conceptual replication of Stewart et al. (2007, Experiment 1), to examine whether the discrepancy in earlier results is caused by the processing depth that participants engage in (cf. Swets, Desmet, Clifton, & Ferreira, 2008). We present the results from a word-by-word self-paced reading experiment with Dutch sentences that contained a personal pronoun in an embedded clause that was either ambiguous or disambiguated through gender features. Depth of processing of the embedded clause was manipulated through offline comprehension questions. The results showed that the difference in reading times for ambiguous versus unambiguous sentences depends on the processing depth: a significant ambiguity penalty was found under deep processing but not under shallow processing. No significant ambiguity advantage was found, regardless of processing depth. This replicates the results in Stewart et al. (2007) using a different methodology and a larger sample size for appropriate statistical power. These findings provide further evidence that ambiguous pronominal reference resolution is a flexible process, such that the way in which ambiguous sentences are processed depends on the depth of processing of the relevant information. Theoretical and methodological implications of these findings are discussed.
Additional information
experimental stimuli, data, and analysis code -
Hintz, F., Voeten, C. C., McQueen, J. M., & Meyer, A. S. (2022). Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 2491-2496). Toronto, Canada: Cognitive Science Society.
Abstract
Humans differ greatly in their ability to use language. Contemporary psycholinguistic theories assume that individual differences in language skills arise from variability in linguistic experience and in general cognitive skills. While much previous research has tested the involvement of select verbal and non-verbal variables in select domains of linguistic processing, comprehensive characterizations of the relationships among the skills underlying language use are rare. We contribute to such a research program by re-analyzing a publicly available set of data from 112 young adults tested on 35 behavioral tests. The tests assessed nine key constructs reflecting linguistic processing skills, linguistic experience and general cognitive skills. Correlation and hierarchical clustering analyses of the test scores showed that most of the tests assumed to measure the same construct correlated moderately to strongly and largely clustered together. Furthermore, the results suggest important roles of processing speed in comprehension, and of linguistic experience in production. -
Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.
Abstract
Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic
Additional information
http://www.bcbl.eu/databases/multipic -
Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.
Abstract
In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.
Additional information
1-s2.0-S0001691817305589-mmc1.pdf -
Konopka, A., Meyer, A. S., & Forest, T. A. (2018). Planning to speak in L1 and L2. Cognitive Psychology, 102, 72-104. doi:10.1016/j.cogpsych.2017.12.003.
Abstract
The leading theories of sentence planning – Hierarchical Incrementality and Linear Incrementality – differ in their assumptions about the coordination of processes that map preverbal information onto language. Previous studies showed that, in native (L1) speakers, this coordination can vary with the ease of executing the message-level and sentence-level processes necessary to plan and produce an utterance. We report the first series of experiments to systematically examine how linguistic experience influences sentence planning in native (L1) speakers (i.e., speakers with life-long experience using the target language) and non-native (L2) speakers (i.e., speakers with less experience using the target language). In all experiments, speakers spontaneously generated one-sentence descriptions of simple events in Dutch (L1) and English (L2). Analyses of eye-movements across early and late time windows (pre- and post-400 ms) compared the extent of early message-level encoding and the onset of linguistic encoding. In Experiment 1, speakers were more likely to engage in extensive message-level encoding and to delay sentence-level encoding when using their L2. Experiments 2–4 selectively facilitated encoding of the preverbal message, encoding of the agent character (i.e., the first content word in active sentences), and encoding of the sentence verb (i.e., the second content word in active sentences) respectively. Experiment 2 showed that there is no delay in the onset of L2 linguistic encoding when speakers are familiar with the events. Experiments 3 and 4 showed that the delay in the onset of L2 linguistic encoding is not due to speakers delaying encoding of the agent, but due to a preference to encode information needed to select a suitable verb early in the formulation process. 
Overall, speakers prefer to temporally separate message-level from sentence-level encoding and to prioritize encoding of relational information when planning L2 sentences, consistent with Hierarchical Incrementality. -
Kösem, A., Bosker, H. R., Takashima, A., Meyer, A. S., Jensen, O., & Hagoort, P. (2018). Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875. doi:10.1016/j.cub.2018.07.023.
Abstract
Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word’s vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception. -
Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Listening to yourself is special: Evidence from global speech rate tracking. PLoS One, 13(9): e0203571. doi:10.1371/journal.pone.0203571.
Abstract
Listeners are known to use adjacent contextual speech rate in processing temporally ambiguous speech sounds. For instance, an ambiguous vowel between short /A/ and long /a:/ in Dutch sounds relatively long (i.e., as /a:/) embedded in a fast precursor sentence, but short in a slow sentence. Besides the local speech rate, listeners also track talker-specific global speech rates. However, it is yet unclear whether other talkers' global rates are encoded with reference to a listener's self-produced rate. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly. The groups were compared on their perception of ambiguous /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech and again evaluated target vowels in neutral rate speech. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 but with a new participant sample that was unfamiliar with the participants from Experiment 2. This experiment revealed fewer /a:/ responses in neutral speech in the group also listening to a fast rate, suggesting that neutral speech sounds slow in the presence of a fast talker and vice versa. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception in dialogue settings. -
Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.
Abstract
As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.
Additional information
Data_Sheet_1.pdf -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 402-404). Toruń, Poland: NCU Press. doi:10.12775/3991-1.096. -
Schillingmann, L., Ernst, J., Keite, V., Wrede, B., Meyer, A. S., & Belke, E. (2018). AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes. Behavior Research Methods, 50(2), 466-489. doi:10.3758/s13428-017-1002-7.
Abstract
In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that establishes preliminarily the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool’s performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool is still highly functional, but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research. -
Shao, Z., & Meyer, A. S. (2018). Word priming and interference paradigms. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 111-129). Hoboken: Wiley. -
Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of Virtual Reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862-869. doi:10.3758/s13428-017-0911-9.
Abstract
When we comprehend language, we often do this in rich settings in which we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and non-linguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and Virtual Reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant, while wearing EEG equipment. In the restaurant participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g. a plate with salmon). The restaurant guest would then produce a sentence (e.g. “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) with the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings. -
Belke, E., & Meyer, A. S. (2002). Tracking the time course of multidimensional stimulus discrimination: Analyses of viewing patterns and processing times during "same"-"different" decisions. European Journal of Cognitive Psychology, 14(2), 237-266. doi:10.1080/09541440143000050.
Abstract
We investigated the time course of conjunctive "same"-"different" judgements for visually presented object pairs by means of combined reaction time and on-line eye movement measurements. The analyses of viewing patterns, viewing times, and reaction times showed that participants engaged in a parallel self-terminating search for differences. In addition, the results obtained for objects differing in only one dimension suggest that processing times may depend on the relative codability of the stimulus dimensions. The results are reviewed in a broader framework in view of higher-order processes. We propose that overspecifications of colour, often found in object descriptions, may have an "early" visual rather than a "late" linguistic origin. In a parallel assessment of the detection materials, participants overspecified the objects' colour substantially more often than their size. We argue that referential overspecifications of colour are largely attributable to mechanisms of visual discrimination. -
Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (2002). A theory of lexical access in speech production. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 278-377). London: Routledge. -
Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.
Abstract
The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production. -
Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (2002). Exploring the time course of lexical access in language production: Picture word interference studies. In G. Altmann (Ed.), Psycholinguistics: Critical Concepts in Psychology [vol. 5] (pp. 168-191). London: Routledge.