Alagöz, G., Eising, E., Mekki, Y., Bignardi, G., Fontanillas, P., 23andMe Research Team, Nivard, M. G., Luciano, M., Cox, N. J., Fisher, S. E., & Gordon, R. L. (2025). The shared genetic architecture and evolution of human language and musical rhythm. Nature Human Behaviour, 9, 376-390. doi:10.1038/s41562-024-02051-y.
Abstract
Rhythm and language-related traits are phenotypically correlated, but their genetic overlap is largely unknown. Here, we leveraged two large-scale genome-wide association studies of musical rhythm (N=606,825) and dyslexia (N=1,138,870) to shed light on their shared genetics. Our results reveal an intricate shared genetic and neurobiological architecture, and lay groundwork for resolving longstanding debates about the potential co-evolution of human language and musical traits. -
Bethke, S., Meyer, A. S., & Hintz, F. (2025). The German Auditory and Image (GAudI) vocabulary test: A new German receptive vocabulary test and its relationships to other tests measuring linguistic experience. PLOS ONE, 20: e0318115. doi:10.1371/journal.pone.0318115.
Abstract
Humans acquire word knowledge through producing and comprehending spoken and written language. Word learning continues into adulthood and knowledge accumulates across the lifespan. Therefore, receptive vocabulary size is often conceived of as a proxy for linguistic experience and plays a central role in assessing individuals’ language proficiency. There is currently no valid open access test available for assessing receptive vocabulary size in German-speaking adults. We addressed this gap and developed the German Auditory and Image Vocabulary Test (GAudI). In the GAudI, participants are presented with spoken test words and have to indicate their meanings by selecting the corresponding picture from a set of four alternatives. Here we describe the development of the test and provide evidence for its validity. Specifically, we report a study in which 168 German-speaking participants completed the GAudI and five other tests tapping into linguistic experience: one test measuring print exposure, two tests measuring productive vocabulary, one test assessing knowledge of book language grammar, and a test of receptive vocabulary that was normed in adolescents. The psychometric properties of the GAudI and its relationships to the other tests demonstrate that it is a suitable tool for measuring receptive vocabulary size. We offer an open-access digital test environment that can be used for research purposes, accessible via https://ems13.mpi.nl/bq4_customizable_de/researchers_welcome.php. -
Bujok, R., Meyer, A. S., & Bosker, H. R. (2025). Audiovisual perception of lexical stress: Beat gestures and articulatory cues. Language and Speech, 68(1), 181-203. doi:10.1177/00238309241258162.
Abstract
Human communication is inherently multimodal. Auditory speech, but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts. -
Coopmans, C. W., De Hoop, H., Tezcan, F., Hagoort, P., & Martin, A. E. (2025). Language-specific neural dynamics extend syntax into the time domain. PLOS Biology, 23: e3002968. doi:10.1371/journal.pbio.3002968.
Abstract
Studies of perception have long shown that the brain adds information to its sensory analysis of the physical environment. A touchstone example for humans is language use: to comprehend a physical signal like speech, the brain must add linguistic knowledge, including syntax. Yet, syntactic rules and representations are widely assumed to be atemporal (i.e., abstract and not bound by time), so they must be translated into time-varying signals for speech comprehension and production. Here, we test 3 different models of the temporal spell-out of syntactic structure against brain activity of people listening to Dutch stories: an integratory bottom-up parser, a predictive top-down parser, and a mildly predictive left-corner parser. These models build exactly the same structure but differ in when syntactic information is added by the brain—this difference is captured in the (temporal distribution of the) complexity metric “incremental node count.” Using temporal response function models with both acoustic and information-theoretic control predictors, node counts were regressed against source-reconstructed delta-band activity acquired with magnetoencephalography. Neural dynamics in left frontal and temporal regions most strongly reflect node counts derived by the top-down method, which postulates syntax early in time, suggesting that predictive structure building is an important component of Dutch sentence comprehension. The absence of strong effects of the left-corner model further suggests that its mildly predictive strategy does not represent Dutch language comprehension well, in contrast to what has been found for English. Understanding when the brain projects its knowledge of syntax onto speech, and whether this is done in language-specific ways, will inform and constrain the development of mechanistic models of syntactic structure building in the brain. -
Goral, M., Antolovic, K., Hejazi, Z., & Schulz, F. M. (2025). Using a translanguaging framework to examine language production in a trilingual person with aphasia. Clinical Linguistics & Phonetics, 39(1), 1-20. doi:10.1080/02699206.2024.2328240.
Abstract
When language abilities in aphasia are assessed in clinical and research settings, the standard practice is to examine each language of a multilingual person separately. But many multilingual individuals, with and without aphasia, mix their languages regularly when they communicate with other speakers who share their languages. We applied a novel approach to scoring language production of a multilingual person with aphasia. Our aim was to discover whether the assessment outcome would differ meaningfully when we count accurate responses in only the target language of the assessment session versus when we apply a translanguaging framework, that is, count all accurate responses, regardless of the language in which they were produced. The participant is a Farsi-German-English speaking woman with chronic moderate aphasia. We examined the participant’s performance on two picture-naming tasks, an answering wh-question task, and an elicited narrative task. The results demonstrated that scores in English, the participant’s third-learned and least-impaired language, did not differ between the two scoring methods. Performance in German, the participant’s moderately impaired second language, benefited from translanguaging-based scoring across the board. In Farsi, her weakest language post-CVA, the participant’s scores were higher under the translanguaging-based scoring approach in some but not all of the tasks. Our findings suggest that whether translanguaging-based scoring makes a difference in the results obtained depends on relative language abilities and on pragmatic constraints, with additional influence of the linguistic distances between the languages in question. -
Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2025). Child heritage speakers’ reading skills in the majority language and exposure to the heritage language support morphosyntactic prediction in speech. Bilingualism: Language and Cognition. Advance online publication. doi:10.1017/S1366728925000331.
Abstract
We examined the morphosyntactic prediction ability of child heritage speakers and the role of reading skills and language experience in predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in Turkish with monolingual (N=49, Mage=83 months) and heritage children, who were early bilinguals of Turkish and Dutch (N=30, Mage=90 months). We found quantitative differences in the magnitude of the prediction ability of monolingual and heritage children; however, their overall prediction ability was on par. The heritage speakers’ prediction ability was facilitated by their reading skills in Dutch, but not in Turkish, as well as by their heritage language exposure, but not by engagement in literacy activities. These findings emphasize the facilitatory role of reading skills and spoken language experience in predictive processing. This study is the first to show that in a developing bilingual mind, effects of reading-on-prediction can take place across modalities and across languages.
Additional information
data and analysis scripts -
Karaca, F. (2025). On knowing what lies ahead: The interplay of prediction, experience, and proficiency. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
link to Radboud Repository -
Matetovici, M., Spruit, A., Colonnesi, C., Garnier‐Villarreal, M., & Noom, M. (2025). Parent and child gender effects in the relationship between attachment and both internalizing and externalizing problems of children between 2 and 5 years old: A dyadic perspective. Infant Mental Health Journal: Infancy and Early Childhood. Advance online publication. doi:10.1002/imhj.70002.
Abstract
Acknowledging that the parent–child attachment is a dyadic relationship, we investigated differences between pairs of parents and preschool children based on gender configurations in the association between attachment and problem behavior. We looked at mother–daughter, mother–son, father–daughter, and father–son dyads, but also compared mothers and fathers, daughters and sons, and same versus different gender pairs. We employed multigroup structural equation modeling to explore moderation effects of gender in a sample of 446 independent pairs of parents and preschool children (2–5 years old) from the Netherlands. A stronger association between both secure and avoidant attachment and internalizing problems was found for father–son dyads compared to father–daughter dyads. A stronger association between both secure and avoidant attachment and externalizing problems was found for mother–son dyads compared to mother–daughter and father–daughter dyads. Sons showed a stronger negative association between secure attachment and externalizing problems, a stronger positive association between avoidant attachment and externalizing problems, and a stronger negative association between secure attachment and internalizing problems compared to daughters. These results provide evidence for gender moderation and demonstrate that a dyadic approach can reveal patterns of associations that would not be recognized if parent and child gender effects were assessed separately.
Additional information
analysis code -
Mazzini*, S., Seijdel*, N., & Drijvers*, L. (2025). Autistic individuals benefit from gestures during degraded speech comprehension. Autism, 29(2), 544-548. doi:10.1177/13623613241286570.
Abstract
*All authors contributed equally to this work
Meaningful gestures enhance degraded speech comprehension in neurotypical adults, but it is unknown whether this is the case for neurodivergent populations, such as autistic individuals. Previous research demonstrated atypical multisensory and speech-gesture integration in autistic individuals, suggesting that integrating speech and gestures may be more challenging and less beneficial for speech comprehension in adverse listening conditions in comparison to neurotypicals. Conversely, autistic individuals could also benefit from additional cues to comprehend speech in noise, as they encounter difficulties in filtering relevant information from noise. We here investigated whether gestural enhancement of degraded speech comprehension differs for neurotypical (n = 40, mean age = 24.1) compared to autistic (n = 40, mean age = 26.8) adults. Participants watched videos of an actress uttering a Dutch action verb in clear or degraded speech, with or without an accompanying gesture, and completed a free-recall task. Gestural enhancement was observed for both autistic and neurotypical individuals, and did not differ between groups. In contrast to previous literature, our results demonstrate that autistic individuals do benefit from gestures during degraded speech comprehension, similar to neurotypicals. These findings provide relevant insights to improve communication practices with autistic individuals and to develop new interventions for speech comprehension. -
Ye, C., McQueen, J. M., & Bosker, H. R. (2025). Effect of auditory cues to lexical stress on the visual perception of gestural timing. Attention, Perception & Psychophysics. Advance online publication. doi:10.3758/s13414-025-03072-z.
Abstract
Speech is often accompanied by gestures. Since beat gestures—simple nonreferential up-and-down hand movements—frequently co-occur with prosodic prominence, they can indicate stress in a word and hence influence spoken-word recognition. However, little is known about the reverse influence of auditory speech on visual perception. The current study investigated whether lexical stress has an effect on the perceived timing of hand beats. We used videos in which a disyllabic word, embedded in a carrier sentence (Experiment 1) or in isolation (Experiment 2), was coupled with an up-and-down hand beat, while varying their degrees of asynchrony. Results from Experiment 1, a novel beat timing estimation task, revealed that gestures were estimated to occur closer in time to the pitch peak in a stressed syllable than their actual timing, hence reducing the perceived temporal distance between gestures and stress by around 60%. Using a forced-choice task, Experiment 2 further demonstrated that listeners tended to perceive a gesture, falling midway between two syllables, on the syllable receiving stronger cues to stress than the other, and this auditory effect was greater when gestural timing was most ambiguous. Our findings suggest that f0 and intensity are the driving force behind the temporal attraction effect of stress on perceived gestural timing. This study provides new evidence for auditory influences on visual perception, supporting bidirectionality in audiovisual interaction between speech-related signals that occur in everyday face-to-face communication. -
Mooijman, S., Schoonen, R., Goral, M., Roelofs, A., & Ruiter, M. B. (2025). Why do bilingual speakers with aphasia alternate between languages? A study into their experiences and mixing patterns. Aphasiology. Advance online publication. doi:10.1080/02687038.2025.2452928.
Abstract
Background
The factors that contribute to language alternation by bilingual speakers with aphasia have been debated. Some studies suggest that atypical language mixing results from impairments in language control, while others posit that mixing is a way to enhance communicative effectiveness. To address this question, most prior research examined the appropriateness of language mixing in connected speech tasks.
Aims
The goal of this study was to provide new insight into the question of whether language mixing in aphasia reflects a strategy to enhance verbal effectiveness or involuntary behaviour resulting from impaired language control.
Methods & procedures
Semi-structured web-based interviews with bilingual speakers with aphasia (N = 19) with varying language backgrounds were conducted. The interviews were transcribed and coded for: (1) self-reports regarding language control and compensation, (2) instances of language mixing, and (3) in two cases, instances of repair initiation.
Outcomes & results
The results showed that several participants reported language control difficulties but that the knowledge of additional languages could also be recruited to compensate for lexical retrieval problems. Most participants showed no or very few instances of mixing and the observed mixes appeared to adhere to the pragmatic context and known functions of switching. Three participants exhibited more marked switching behaviour and reported corresponding difficulties with language control. Instances of atypical mixing did not coincide with clear problems initiating conversational repair.
Conclusions
Our study highlights the variability in language mixing patterns of bilingual speakers with aphasia. Furthermore, most of the individuals in the study appeared to be able to effectively control their languages, and to alternate between their languages for compensatory purposes. Control deficits resulting in atypical language mixing were observed in a small number of participants. -
Papoutsi, C., Tourtouri, E. N., Piai, V., Lampe, L. F., & Meyer, A. S. (2025). Fast and slow errors: What naming latencies of errors reveal about the interplay of attentional control and word planning in speeded picture naming. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. doi:10.1037/xlm0001472.
Abstract
Speakers sometimes produce lexical errors, such as saying “salt” instead of “pepper.” This study aimed to better understand the origin of lexical errors by assessing whether they arise from a hasty selection and premature decision to speak (premature selection hypothesis) or from momentary attentional disengagement from the task (attentional lapse hypothesis). We analyzed data from a speeded picture naming task (Lampe et al., 2023) and investigated whether lexical errors are produced as fast as target (i.e., correct) responses, thus arising from premature selection, or whether they are produced more slowly than target responses, thus arising from lapses of attention. Using ex-Gaussian analyses, we found that lexical errors were slower than targets in the tail, but not in the normal part, of the response time distribution, with the tail effect primarily resulting from errors that were not coordinates, that is, members of the target’s semantic category. Moreover, we compared the coordinate errors and target responses in terms of their word-intrinsic properties and found that coordinate errors were overall more frequent, shorter, and acquired earlier than targets. Given the present findings, we conclude that coordinate errors occur due to a premature selection but in the context of intact attentional control, following the same lexical constraints as targets, while other errors, given the variability in their nature, may vary in their origin, with one potential source being lapses of attention. -
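The ex-Gaussian analysis named above models each response time as a Gaussian component (the "normal part," parameters mu and sigma) plus an independent exponential component (the slow tail, parameter tau), so a tail-only slowing shows up as a larger tau at a similar mu. A minimal illustrative sketch of that decomposition (the parameter values here are invented for illustration, not taken from the study):

```python
import random

def exgauss_sample(mu, sigma, tau, n, seed=1):
    """Draw n ex-Gaussian response times: a Gaussian (mu, sigma)
    plus an exponential with mean tau; tau governs the slow tail."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1 / tau) for _ in range(n)]

# "Slower in the tail, but not in the normal part" corresponds to a
# larger tau with mu and sigma held constant (hypothetical values):
targets = exgauss_sample(mu=600, sigma=60, tau=80, n=50_000)
errors = exgauss_sample(mu=600, sigma=60, tau=160, n=50_000)
```

Because the mean of an ex-Gaussian is mu + tau, the two simulated sets differ mainly in how heavy their right tails are, which is the pattern such an analysis distinguishes from a uniform shift of the whole distribution.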
Rohrer, P. L., Bujok, R., Van Maastricht, L., & Bosker, H. R. (2025). From “I dance” to “she danced” with a flick of the hands: Audiovisual stress perception in Spanish. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-025-02683-9.
Abstract
When talking, speakers naturally produce hand movements (co-speech gestures) that contribute to communication. Evidence in Dutch suggests that the timing of simple up-and-down, non-referential “beat” gestures influences spoken word recognition: the same auditory stimulus was perceived as CONtent (noun, capitalized letters indicate stressed syllables) when a beat gesture occurred on the first syllable, but as conTENT (adjective) when the gesture occurred on the second syllable. However, these findings were based on a small number of minimal pairs in Dutch, limiting the generalizability of the findings. We therefore tested this effect in Spanish, where lexical stress is highly relevant in the verb conjugation system, distinguishing bailo, “I dance” with word-initial stress from bailó, “she danced” with word-final stress. Testing a larger sample (N = 100), we also assessed whether individual differences in working memory capacity modulated how much individuals relied on the gestures in spoken word recognition. The results showed that, similar to Dutch, Spanish participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture, with the effect being strongest when the acoustic stress cues were most ambiguous. No evidence was found for by-participant effect sizes to be influenced by individual differences in phonological or visuospatial working memory. These findings reveal gestural-speech coordination impacts lexical stress perception in a language where listeners are regularly confronted with such lexical stress contrasts, highlighting the impact of gestures’ timing on prominence perception and spoken word recognition. -
Roos, N. M. (2025). Naming a picture in context: Paving the way to investigate language recovery after stroke. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
link to Radboud Repository -
Sander, J., Zhang, Y., & Rowland, C. F. (2025). Language acquisition occurs in multimodal social interaction: A commentary on Karadöller, Sümer and Özyürek [invited commentary]. First Language. Advance online publication. doi:10.1177/01427237251326984.
Abstract
We argue that language learning occurs in triadic interactions, where caregivers and children engage not only with each other but also with objects, actions and non-verbal cues that shape language acquisition. We illustrate this using two studies on real-time interactions in spoken and signed language. The first examines shared book reading, showing how caregivers use speech, gestures and gaze coordination to establish joint attention, facilitating word-object associations. The second study explores joint attention in spoken and signed interactions, demonstrating that signing dyads rely on a wider range of multimodal behaviours – such as touch, vibrations and peripheral gaze – compared to speaking dyads. Our data highlight how different language modalities shape attentional strategies. We advocate for research that fully incorporates the dynamic interplay between language, attention and environment. -
Severijnen, G. G. A. (2025). A blessing in disguise: How prosodic variability challenges but also aids successful speech perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Ter Bekke, M. (2025). On how gestures facilitate prediction and fast responding during conversation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Ter Bekke, M., Drijvers, L., & Holler, J. (2025). Co-speech hand gestures are used to predict upcoming meaning. Psychological Science, 36(4), 237-248. doi:10.1177/09567976251331041.
Abstract
In face-to-face conversation, people use speech and gesture to convey meaning. Seeing gestures alongside speech facilitates comprehenders’ language processing, but crucially, the mechanisms underlying this facilitation remain unclear. We investigated whether comprehenders use the semantic information in gestures, typically preceding related speech, to predict upcoming meaning. Dutch adults listened to questions asked by a virtual avatar. Questions were accompanied by an iconic gesture (e.g., typing) or meaningless control movement (e.g., arm scratch) followed by a short pause and target word (e.g., “type”). A Cloze experiment showed that gestures improved explicit predictions of upcoming target words. Moreover, an EEG experiment showed that gestures reduced alpha and beta power during the pause, indicating anticipation, and reduced N400 amplitudes, demonstrating facilitated semantic processing. Thus, comprehenders use iconic gestures to predict upcoming meaning. Theories of linguistic prediction should incorporate communicative bodily signals as predictive cues to capture how language is processed in face-to-face interaction.
Additional information
supplementary material -
Van Geert, E., Ding, R., & Wagemans, J. (2025). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts, 43(1), 250-275. doi:10.1177/02762374241245917.
Abstract
Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
Additional information
VanGeert_Ding_Wagemans_2024suppl_cross cultural comparison of....pdf -
Bank, R., Crasborn, O., & Van Hout, R. (2011). Variation in mouth actions with manual signs in Sign Language of the Netherlands (NGT). Sign Language & Linguistics, 14(2), 248-270. doi:10.1075/sll.14.2.02ban.
Abstract
Mouthings and mouth gestures are omnipresent in Sign Language of the Netherlands (NGT). Mouthings in NGT commonly have their origin in spoken Dutch. We conducted a corpus study to explore how frequent mouthings in fact are in NGT, whether there is variation within and between signs in mouthings, and how frequently temporal reduction occurs in mouthings. Answers to these questions can help us classify mouthings as being specified in the sign lexicon or as being instances of code-blending. We investigated a sample of 20 frequently occurring signs. We found that each sign in the sample co-occurs frequently with a mouthing, usually that of a specific Dutch lexical item. On the other hand, signs show variation in the way they co-occur with mouthings and mouth gestures. By using a relatively large amount of natural data, we succeeded in gaining more insight into the way mouth actions are utilized in sign languages. -
Bergmann, C., Boves, L., & Ten Bosch, L. (2011). Measuring word learning performance in computational models and infants. In Proceedings of the IEEE Conference on Development and Learning, and Epigenetic Robotics. Frankfurt am Main, Germany, 24-27 Aug. 2011.
Abstract
In the present paper we investigate the effect of categorising raw behavioural data or computational model responses. In addition, the effect of averaging over stimuli from potentially different populations is assessed. To this end, we replicate studies on word learning and generalisation abilities using the ACORNS models. Our results show that discrete categories may obscure interesting phenomena in the continuous responses. For example, the finding that learning in the model saturates very early at a uniform high recognition accuracy only holds for categorical representations. Additionally, a large difference in the accuracy for individual words is obscured by averaging over all stimuli. Because different words behaved differently for different speakers, we could not identify a phonetic basis for the differences. Implications and new predictions for infant behaviour are discussed. -
Bergmann, C., Boves, L., & Ten Bosch, L. (2011). Thresholding word activations for response scoring - Modelling psycholinguistic data. In Proceedings of the 12th Annual Conference of the International Speech Communication Association [Interspeech 2011] (pp. 769-772). ISCA.
Abstract
In the present paper we investigate the effect of categorising raw behavioural data or computational model responses. In addition, the effect of averaging over stimuli from potentially different populations is assessed. To this end, we replicate studies on word learning and generalisation abilities using the ACORNS models. Our results show that discrete categories may obscure interesting phenomena in the continuous responses. For example, the finding that learning in the model saturates very early at a uniform high recognition accuracy only holds for categorical representations. Additionally, a large difference in the accuracy for individual words is obscured by averaging over all stimuli. Because different words behaved differently for different speakers, we could not identify a phonetic basis for the differences. Implications and new predictions for infant behaviour are discussed.
Additional information
https://www.isca-speech.org/archive/interspeech_2011/i11_0769.html -
Dijkstra, N., & Fikkert, P. (2011). Universal constraints on the discrimination of Place of Articulation? Asymmetries in the discrimination of 'paan' and 'taan' by 6-month-old Dutch infants. In N. Danis, K. Mesh, & H. Sung (Eds.), Proceedings of the 35th Annual Boston University Conference on Language Development. Volume 1 (pp. 170-182). Somerville, MA: Cascadilla Press. -
Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2011). The thickness of musical pitch: Psychophysical evidence for the Whorfian hypothesis. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 537-542). Austin, TX: Cognitive Science Society. -
Dufau, S., Duñabeitia, J. A., Moret-Tatay, C., McGonigal, A., Peeters, D., Alario, F.-X., Balota, D. A., Brysbaert, M., Carreiras, M., Ferrand, L., Ktori, M., Perea, M., Rastle, K., Sasburg, O., Yap, M. J., Ziegler, J. C., & Grainger, J. (2011). Smart phone, smart science: How the use of smartphones can revolutionize research in cognitive science. PLoS One, 6(9), e24974. doi:10.1371/journal.pone.0024974.
Abstract
Investigating human cognitive faculties such as language, attention, and memory most often relies on testing small and homogeneous groups of volunteers coming to research facilities where they are asked to participate in behavioral experiments. We show that this limitation and sampling bias can be overcome by using smartphone technology to collect data in cognitive science experiments from thousands of subjects from all over the world. This mass coordinated use of smartphones creates a novel and powerful scientific “instrument” that yields the data necessary to test universal theories of cognition. This increase in power represents a potential revolution in cognitive science. -
Hammond, J. (2011). JVC GY-HM100U HD video camera and FFmpeg libraries [Technology review]. Language Documentation and Conservation, 5, 69-80.
-
Holman, E. W., Brown, C. H., Wichmann, S., Müller, A., Velupillai, V., Hammarström, H., Sauppe, S., Jung, H., Bakker, D., Brown, P., Belyaev, O., Urban, M., Mailhammer, R., List, J.-M., & Egorov, D. (2011). Automated dating of the world’s language families based on lexical similarity. Current Anthropology, 52(6), 841-875. doi:10.1086/662127.
Abstract
This paper describes a computerized alternative to glottochronology for estimating elapsed time since parent languages diverged into daughter languages. The method, developed by the Automated Similarity Judgment Program (ASJP) consortium, is different from glottochronology in four major respects: (1) it is automated and thus is more objective, (2) it applies a uniform analytical approach to a single database of worldwide languages, (3) it is based on lexical similarity as determined from Levenshtein (edit) distances rather than on cognate percentages, and (4) it provides a formula for date calculation that mathematically recognizes the lexical heterogeneity of individual languages, including parent languages just before their breakup into daughter languages. Automated judgments of lexical similarity for groups of related languages are calibrated with historical, epigraphic, and archaeological divergence dates for 52 language groups. The discrepancies between estimated and calibration dates are found to be on average 29% as large as the estimated dates themselves, a figure that does not differ significantly among language families. As a resource for further research that may require dates of known level of accuracy, we offer a list of ASJP time depths for nearly all the world’s recognized language families and for many subfamilies.
Additional information
http://www.jstor.org/stable/suppl/10.1086/662127/suppl_file/Supplement_A.pdf -
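The Levenshtein (edit) distance at the core of the ASJP method (Holman et al., above) counts the minimum number of single-character insertions, deletions, and substitutions needed to turn one word form into another. The following is an illustrative sketch, not the consortium's code; the length normalization shown is one common way to make distances comparable across word pairs and is an assumption here, not a description of the ASJP calibration formula.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance: minimum number of
    # insertions, deletions, and substitutions turning a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def normalized_distance(a: str, b: str) -> float:
    # Dividing by the longer word's length bounds the distance to [0, 1],
    # so short and long word pairs can be compared on one scale.
    return levenshtein(a, b) / max(len(a), len(b))
```

For example, `levenshtein("hand", "hant")` is 1 (one substitution), while `normalized_distance("hand", "hant")` is 0.25.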
Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137, 151-171. doi:10.1016/j.actpsy.2010.11.003.
Abstract
We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution. -
Kolipakam, V., & Shanker, K. (2011). Comparing human-wildlife conflict across different landscapes: A framework for examining social, political and economic issues and a preliminary comparison between sites. Trondheim/Bangalore: Norwegian Institute of Nature Research (NINA) & Centre for Ecological Sciences (CES), Indian Institute of Science.
-
Mulder, K., & Hulstijn, J. H. (2011). Linguistic skills of adult native speakers, as a function of age and level of education. Applied Linguistics, 32, 475-494. doi:10.1093/applin/amr016.
Abstract
This study assessed, in a sample of 98 adult native speakers of Dutch, how their lexical skills and their speaking proficiency varied as a function of their age and level of education and profession (EP). Participants, categorized in terms of their age (18–35, 36–50, and 51–76 years old) and the level of their EP (low versus high), were tested on their lexical knowledge, lexical fluency, and lexical memory, and they performed four speaking tasks, differing in genre and formality. Speaking performance was rated in terms of communicative adequacy and in terms of number of words, number of T-units, words per T-unit, content words per T-unit, hesitations per T-unit, and grammatical errors per T-unit. Increasing age affected lexical knowledge positively but lexical fluency and memory negatively. High EP positively affected lexical knowledge and memory but EP did not affect lexical fluency. Communicative adequacy of the responses in the speaking tasks was positively affected by high EP but was not affected by age. It is concluded that, given the large variability in native speakers’ language knowledge and skills, studies investigating the question of whether second-language learners can reach native levels of proficiency should take native-speaker variability into account.
Additional information
Mulder_2011_Supplementary Data.doc -
Piai, V., Roelofs, A., & Schriefers, H. (2011). Semantic interference in immediate and delayed naming and reading: Attention and task decisions. Journal of Memory and Language, 64, 404-423. doi:10.1016/j.jml.2011.01.004.
Abstract
Disagreement exists about whether lexical selection in word production is a competitive process. Competition predicts semantic interference from distractor words in immediate but not in delayed picture naming. In contrast, Janssen, Schirm, Mahon, and Caramazza (2008) obtained semantic interference in delayed picture naming when participants had to decide between picture naming and oral reading depending on the distractor word’s colour. We report three experiments that examined the role of such task decisions. In a single-task situation requiring picture naming only (Experiment 1), we obtained semantic interference in immediate but not in delayed naming. In a task-decision situation (Experiments 2 and 3), no semantic effects were obtained in immediate and delayed picture naming and word reading using either the materials of Experiment 1 or the materials of Janssen et al. (2008). We present an attentional account in which task decisions may hide or reveal semantic interference from lexical competition depending on the amount of parallelism between task-decision and picture–word processing. -
Poellmann, K., McQueen, J. M., & Mitterer, H. (2011). The time course of perceptual learning. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1618-1621). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.
Abstract
Two groups of participants were trained to perceive an ambiguous sound [s/f] as either /s/ or /f/ based on lexical bias: One group heard the ambiguous fricative in /s/-final words, the other in /f/-final words. This kind of exposure leads to a recalibration of the /s/-/f/ contrast [e.g., 4]. In order to investigate when and how this recalibration emerges, test trials were interspersed among training and filler trials. The learning effect needed at least 10 clear training items to arise. Its emergence seemed to occur in a rather step-wise fashion. Learning did not improve much after it first appeared. It is likely, however, that the early test trials attracted participants' attention and therefore may have interfered with the learning process. -
Rai, N. K., Rai, M., Paudyal, N. P., Schikowski, R., Bickel, B., Stoll, S., Gaenszle, M., Banjade, G., Rai, I. P., Bhatta, T. N., Sauppe, S., Rai, R. M., Rai, J. K., Rai, L. K., Rai, D. B., Rai, G., Rai, D., Rai, D. K., Rai, A., Rai, C. K., Rai, S. M., Rai, R. K., Pettigrew, J., & Dirksmeyer, T. (2011). छिन्ताङ शब्दकोश तथा व्याकरण [Chintang Dictionary and Grammar]. Kathmandu, Nepal: Chintang Language Research Program.
Abstract
Trilingual dictionary -
Roelofs, A., & Piai, V. (2011). Attention demands of spoken word planning: A review. Frontiers in Psychology, 2, 307. doi:10.3389/fpsyg.2011.00307.
-
Roelofs, A., Piai, V., & Garrido Rodriguez, G. (2011). Attentional inhibition in bilingual naming performance: Evidence from delta-plot analyses. Frontiers in Psychology, 2, 184. doi:10.3389/fpsyg.2011.00184.
Abstract
It has been argued that inhibition is a mechanism of attentional control in bilingual language performance. Evidence suggests that effects of inhibition are largest in the tail of a response time (RT) distribution in non-linguistic and monolingual performance domains. We examined this for bilingual performance by conducting delta-plot analyses of naming RTs. Dutch-English bilingual speakers named pictures using English while trying to ignore superimposed neutral Xs or Dutch distractor words that were semantically related, unrelated, or translations. The mean RTs revealed semantic, translation, and lexicality effects. The delta plots leveled off with increasing RT, more so when the mean distractor effect was smaller as compared with larger. This suggests that the influence of inhibition is largest toward the distribution tail, corresponding to what is observed in other performance domains. Moreover, the delta plots suggested that more inhibition was applied by high- than low-proficiency individuals in the unrelated than the other distractor conditions. These results support the view that inhibition is a domain-general mechanism that may be optionally engaged depending on the prevailing circumstances. -
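The delta-plot analysis used by Roelofs, Piai, and Garrido Rodriguez (above) bins each response-time distribution into quantiles and plots the condition effect (distractor minus control) against the mean RT per bin; a leveling-off of the effect toward the slow tail is taken as a signature of inhibition. A minimal sketch of that computation, on hypothetical data rather than the study's own analysis or code:

```python
import statistics

def delta_plot_points(rt_control, rt_distractor, n_bins=5):
    """Quantile-bin two RT distributions and return one
    (mean RT, effect) pair per bin. The effect is the
    distractor-minus-control difference; plotting effect
    against mean RT gives the delta plot."""
    a = sorted(rt_control)
    b = sorted(rt_distractor)
    points = []
    for k in range(n_bins):
        # Take the k-th quantile slice of each condition.
        bin_a = a[k * len(a) // n_bins:(k + 1) * len(a) // n_bins]
        bin_b = b[k * len(b) // n_bins:(k + 1) * len(b) // n_bins]
        effect = statistics.mean(bin_b) - statistics.mean(bin_a)
        mean_rt = statistics.mean(bin_a + bin_b)
        points.append((mean_rt, effect))
    return points
```

With a constant 30 ms distractor cost, every bin's effect is 30 and the delta plot is flat; an effect that shrinks in the slowest bins would produce the leveling-off pattern the abstract describes.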
Roelofs, A., Piai, V., & Schriefers, H. (2011). Selective attention and distractor frequency in naming performance: Comment on Dhooge and Hartsuiker (2010). Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 1032-1038. doi:10.1037/a0023328.
Abstract
E. Dhooge and R. J. Hartsuiker (2010) reported experiments showing that picture naming takes longer with low- than high-frequency distractor words, replicating M. Miozzo and A. Caramazza (2003). In addition, they showed that this distractor-frequency effect disappears when distractors are masked or preexposed. These findings were taken to refute models like WEAVER++ (A. Roelofs, 2003) in which words are selected by competition. However, Dhooge and Hartsuiker do not take into account that according to this model, picture-word interference taps not only into word production but also into attentional processes. Here, the authors indicate that WEAVER++ contains an attentional mechanism that accounts for the distractor-frequency effect (A. Roelofs, 2005). Moreover, the authors demonstrate that the model accounts for the influence of masking and preexposure, and does so in a simpler way than the response exclusion through self-monitoring account advanced by Dhooge and Hartsuiker. -
Ruiter, M. B., Kolk, H. H. J., Rietveld, T. C. M., Dijkstra, N., & Lotgering, E. (2011). Towards a quantitative measure of verbal effectiveness and efficiency in the Amsterdam-Nijmegen Everyday Language Test (ANELT). Aphasiology, 25, 961-975. doi:10.1080/02687038.2011.569892.
Abstract
Background: A well-known test for measuring verbal adequacy (i.e., verbal effectiveness) in mildly impaired aphasic speakers is the Amsterdam-Nijmegen Everyday Language Test (ANELT; Blomert, Koster, & Kean, 1995). Aphasia therapy practitioners score verbal adequacy qualitatively when they administer the ANELT to their aphasic clients in clinical practice. Aims: The current study investigated whether the construct validity of the ANELT could be further improved by substituting the qualitative score by a quantitative one, which takes the number of essential information units into account. The new quantitative measure could have the following advantages: the ability to derive a quantitative score of verbal efficiency, as well as improved sensitivity to detect changes in functional communication over time. Methods & Procedures: The current study systematically compared a new quantitative measure of verbal effectiveness with the current ANELT Comprehensibility scale, which is based on qualitative judgements. A total of 30 speakers of Dutch participated: 20 non-aphasic speakers and 10 aphasic patients with predominantly expressive disturbances. Outcomes & Results: Although our findings need to be replicated in a larger group of aphasic speakers, the main results suggest that the new quantitative measure of verbal effectiveness is more sensitive to detect change in verbal effectiveness over time. What is more, it can be used to derive a measure of verbal efficiency. Conclusions: The fact that both verbal effectiveness and verbal efficiency can be reliably as well as validly measured in the ANELT is of relevance to clinicians. It allows them to obtain a more complete picture of aphasic speakers' functional communication skills. -
Smith, A. C., & Monaghan, P. (2011). What are the functional units in reading? Evidence for statistical variation influencing word processing. In Connectionist Models of Neurocognition and Emergent Behavior: From Theory to Applications (pp. 159-172). Singapore: World Scientific.
Abstract
Computational models of reading have differed in terms of whether they propose a single route forming the mapping between orthography and phonology or whether there is a lexical/sublexical route distinction. A critical test of the architecture of the reading system is how it deals with multi-letter graphemes. Rastle and Coltheart (1998) found that the presence of digraphs in nonwords but not in words led to an increase in naming times, suggesting that nonwords were processed via a distinct sequential route to words. In contrast, Pagliuca, Monaghan, and McIntosh (2008) implemented a single route model of reading and showed that under conditions of visual noise the presence of digraphs in words did have an effect on naming accuracy. In this study, we investigated whether such digraph effects could be found in both words and nonwords under conditions of visual noise. If so, it would suggest that effects on words and nonwords are comparable. A single route connectionist model of reading showed greater accuracy for both words and nonwords containing digraphs. Experimental results showed participants were more accurate in recognising words if they contained digraphs. However, contrary to model predictions, they were less accurate in recognising nonwords containing digraphs compared to controls. We discuss the challenges that both theoretical perspectives face in interpreting these findings in light of a psycholinguistic grain size theory of reading. -
Van Hout, A., Veenstra, A., & Berends, S. (2011). All pronouns are not acquired equally in Dutch: Elicitation of object and quantitative pronouns. In M. Pirvulescu, M. C. Cuervo, A. T. Pérez-Leroux, J. Steele, & N. Strik (Eds.), Selected proceedings of the 4th Conference on Generative Approaches to Language Acquisition North America (GALANA 2010) (pp. 106-121). Somerville, MA: Cascadilla Proceedings Project.
Abstract
This research reports the results of eliciting pronouns in two syntactic environments: Object pronouns and quantitative er (Q-er). Thus another type of language is added to the literature on subject and object clitic acquisition in the Romance languages (Jakubowicz et al., 1998; Hamann et al., 1996). Quantitative er is a unique pronoun in the Germanic languages; it has the same distribution as partitive clitics in Romance. Q-er is an N'-anaphor and occurs obligatorily with headless noun phrases with a numeral or weak quantifier. Q-er is licensed only when the context offers an antecedent; it binds an empty position in the NP. Data from typically-developing children aged 5;0-6;0 show that object and Q-er pronouns are not acquired equally; it is proposed that this is due to their different syntax. The use of Q-er involves more sophisticated syntactic knowledge: Q-er occurs at the left edge of the VP and binds an empty position in the NP, whereas object pronouns are simply stand-ins for full NPs and occur in the same position. These Dutch data reveal that pronouns are not used as exclusively as object clitics are in the Romance languages (Varlakosta, in prep.). -
Vandeberg, L., Guadalupe, T., & Zwaan, R. A. (2011). How verbs can activate things: Cross-language activation across word classes. Acta Psychologica, 138, 68-73. doi:10.1016/j.actpsy.2011.05.007.
Abstract
The present study explored whether language-nonselective access in bilinguals occurs across word classes in a sentence context. Dutch–English bilinguals were auditorily presented with English (L2) sentences while looking at a visual world. The sentences contained interlingual homophones from distinct lexical categories (e.g., the English verb spoke, which overlaps phonologically with the Dutch noun for ghost, spook). Eye movement recordings showed that depictions of referents of the Dutch (L1) nouns attracted more visual attention than unrelated distractor pictures in sentences containing homophones. This finding shows that native language objects are activated during second language verb processing despite the structural information provided by the sentence context.
Research highlights: We show that native language words are activated during second language sentence processing. We tested this in a visual world setting on homophones with a different word class across languages. Fixations show that processing second language verbs activated native language nouns. -
Versteegh, M., Ten Bosch, L., & Boves, L. (2011). Modelling novelty preference in word learning. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 761-764).
Abstract
This paper investigates the effects of novel words on a cognitively plausible computational model of word learning. The model is first familiarized with a set of words, achieving high recognition scores and subsequently offered novel words for training. We show that the model is able to recognize the novel words as different from the previously seen words, based on a measure of novelty that we introduce. We then propose a procedure analogous to novelty preference in infants. Results from simulations of word learning show that adding this procedure to our model speeds up training and helps the model attain higher recognition rates. -
Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2011). Adapting to foreign-accented speech: The role of delay in testing. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2443.
Abstract
Understanding speech usually seems easy, but it can become noticeably harder when the speaker has a foreign accent. This is because foreign accents add considerable variation to speech. Research on foreign-accented speech shows that participants are able to adapt quickly to this type of variation. Less is known, however, about longer-term maintenance of adaptation. The current study focused on long-term adaptation by exposing native listeners to foreign-accented speech on Day 1, and testing them on comprehension of the accent one day later. Comprehension was thus not tested immediately, but only after a 24-hour period. On Day 1, native Dutch listeners listened to the speech of a Hebrew learner of Dutch while performing a phoneme monitoring task that did not depend on the talker’s accent. In particular, shortening of the long vowel /i/ into /ɪ/ (e.g., lief [li:f], ‘sweet’, pronounced as [lɪf]) was examined. These mispronunciations did not create lexical ambiguities in Dutch. On Day 2, listeners participated in a cross-modal priming task to test their comprehension of the accent. The results will be contrasted with results from an experiment without delayed testing and related to accounts of how listeners maintain adaptation to foreign-accented speech. -
Witteman, M. J., Weber, A., & McQueen, J. M. (2011). On the relationship between perceived accentedness, acoustic similarity, and processing difficulty in foreign-accented speech. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2229-2232).
Abstract
Foreign-accented speech is often perceived as more difficult to understand than native speech. What causes this potential difficulty, however, remains unknown. In the present study, we compared acoustic similarity and accent ratings of American-accented Dutch with a cross-modal priming task designed to measure online speech processing. We focused on two Dutch diphthongs: ui and ij. Though both diphthongs deviated from standard Dutch to varying degrees and perceptually varied in accent strength, native Dutch listeners recognized words containing the diphthongs easily. Thus, not all foreign-accented speech hinders comprehension, and acoustic similarity and perceived accentedness are not always predictive of processing difficulties.