  • Alagöz, G., Eising, E., Mekki, Y., Bignardi, G., Fontanillas, P., 23andMe Research Team, Nivard, M. G., Luciano, M., Cox, N. J., Fisher, S. E., & Gordon, R. L. (2025). The shared genetic architecture and evolution of human language and musical rhythm. Nature Human Behaviour, 9, 376-390. doi:10.1038/s41562-024-02051-y.

    Abstract

    Rhythm and language-related traits are phenotypically correlated, but their genetic overlap is largely unknown. Here, we leveraged two large-scale genome-wide association studies of rhythm (N=606,825) and dyslexia (N=1,138,870) to shed light on their shared genetics. Our results reveal an intricate shared genetic and neurobiological architecture, and lay groundwork for resolving longstanding debates about the potential co-evolution of human language and musical traits.
  • Bethke, S., Meyer, A. S., & Hintz, F. (2025). The German Auditory and Image (GAudI) vocabulary test: A new German receptive vocabulary test and its relationships to other tests measuring linguistic experience. PLOS ONE, 20: e0318115. doi:10.1371/journal.pone.0318115.

    Abstract

    Humans acquire word knowledge through producing and comprehending spoken and written language. Word learning continues into adulthood and knowledge accumulates across the lifespan. Therefore, receptive vocabulary size is often conceived of as a proxy for linguistic experience and plays a central role in assessing individuals’ language proficiency. There is currently no valid open access test available for assessing receptive vocabulary size in German-speaking adults. We addressed this gap and developed the German Auditory and Image Vocabulary Test (GAudI). In the GAudI, participants are presented with spoken test words and have to indicate their meanings by selecting the corresponding picture from a set of four alternatives. Here we describe the development of the test and provide evidence for its validity. Specifically, we report a study in which 168 German-speaking participants completed the GAudI and five other tests tapping into linguistic experience: one test measuring print exposure, two tests measuring productive vocabulary, one test assessing knowledge of book language grammar, and a test of receptive vocabulary that was normed in adolescents. The psychometric properties of the GAudI and its relationships to the other tests demonstrate that it is a suitable tool for measuring receptive vocabulary size. We offer an open-access digital test environment that can be used for research purposes, accessible via https://ems13.mpi.nl/bq4_customizable_de/researchers_welcome.php.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2025). Audiovisual perception of lexical stress: Beat gestures and articulatory cues. Language and Speech, 68(1), 181-203. doi:10.1177/00238309241258162.

    Abstract

    Human communication is inherently multimodal. Auditory speech, but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.
  • Coopmans, C. W., De Hoop, H., Tezcan, F., Hagoort, P., & Martin, A. E. (2025). Language-specific neural dynamics extend syntax into the time domain. PLOS Biology, 23: e3002968. doi:10.1371/journal.pbio.3002968.

    Abstract

    Studies of perception have long shown that the brain adds information to its sensory analysis of the physical environment. A touchstone example for humans is language use: to comprehend a physical signal like speech, the brain must add linguistic knowledge, including syntax. Yet, syntactic rules and representations are widely assumed to be atemporal (i.e., abstract and not bound by time), so they must be translated into time-varying signals for speech comprehension and production. Here, we test 3 different models of the temporal spell-out of syntactic structure against brain activity of people listening to Dutch stories: an integratory bottom-up parser, a predictive top-down parser, and a mildly predictive left-corner parser. These models build exactly the same structure but differ in when syntactic information is added by the brain—this difference is captured in the (temporal distribution of the) complexity metric “incremental node count.” Using temporal response function models with both acoustic and information-theoretic control predictors, node counts were regressed against source-reconstructed delta-band activity acquired with magnetoencephalography. Neural dynamics in left frontal and temporal regions most strongly reflect node counts derived by the top-down method, which postulates syntax early in time, suggesting that predictive structure building is an important component of Dutch sentence comprehension. The absence of strong effects of the left-corner model further suggests that its mildly predictive strategy does not represent Dutch language comprehension well, in contrast to what has been found for English. Understanding when the brain projects its knowledge of syntax onto speech, and whether this is done in language-specific ways, will inform and constrain the development of mechanistic models of syntactic structure building in the brain.
  • Goral, M., Antolovic, K., Hejazi, Z., & Schulz, F. M. (2025). Using a translanguaging framework to examine language production in a trilingual person with aphasia. Clinical Linguistics & Phonetics, 39(1), 1-20. doi:10.1080/02699206.2024.2328240.

    Abstract

    When language abilities in aphasia are assessed in clinical and research settings, the standard practice is to examine each language of a multilingual person separately. But many multilingual individuals, with and without aphasia, mix their languages regularly when they communicate with other speakers who share their languages. We applied a novel approach to scoring language production of a multilingual person with aphasia. Our aim was to discover whether the assessment outcome would differ meaningfully when we count accurate responses in only the target language of the assessment session versus when we apply a translanguaging framework, that is, count all accurate responses, regardless of the language in which they were produced. The participant is a Farsi-German-English speaking woman with chronic moderate aphasia. We examined the participant’s performance on two picture-naming tasks, an answering wh-question task, and an elicited narrative task. The results demonstrated that scores in English, the participant’s third-learned and least-impaired language, did not differ between the two scoring methods. Performance in German, the participant’s moderately impaired second language, benefited from translanguaging-based scoring across the board. In Farsi, her weakest language post-CVA, the participant’s scores were higher under the translanguaging-based scoring approach in some but not all of the tasks. Our findings suggest that whether translanguaging-based scoring makes a difference in the results obtained depends on relative language abilities and on pragmatic constraints, with additional influence of the linguistic distances between the languages in question.
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2025). Child heritage speakers’ reading skills in the majority language and exposure to the heritage language support morphosyntactic prediction in speech. Bilingualism: Language and Cognition. Advance online publication. doi:10.1017/S1366728925000331.

    Abstract

    We examined the morphosyntactic prediction ability of child heritage speakers and the role of reading skills and language experience in predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in Turkish with monolingual (N=49, Mage=83 months) and heritage children, who were early bilinguals of Turkish and Dutch (N=30, Mage=90 months). We found quantitative differences in the magnitude of the prediction ability of monolingual and heritage children; however, their overall prediction ability was on par. The heritage speakers’ prediction ability was facilitated by their reading skills in Dutch, but not in Turkish, as well as by their heritage language exposure, but not by engagement in literacy activities. These findings emphasize the facilitatory role of reading skills and spoken language experience in predictive processing. This study is the first to show that in a developing bilingual mind, effects of reading-on-prediction can take place across modalities and across languages.

    Additional information

    data and analysis scripts
  • Karaca, F. (2025). On knowing what lies ahead: The interplay of prediction, experience, and proficiency. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Matetovici, M., Spruit, A., Colonnesi, C., Garnier‐Villarreal, M., & Noom, M. (2025). Parent and child gender effects in the relationship between attachment and both internalizing and externalizing problems of children between 2 and 5 years old: A dyadic perspective. Infant Mental Health Journal: Infancy and Early Childhood. Advance online publication. doi:10.1002/imhj.70002.

    Abstract

    Acknowledging that the parent–child attachment is a dyadic relationship, we investigated differences between pairs of parents and preschool children based on gender configurations in the association between attachment and problem behavior. We looked at mother–daughter, mother–son, father–daughter, and father–son dyads, but also compared mothers and fathers, daughters and sons, and same versus different gender pairs. We employed multigroup structural equation modeling to explore moderation effects of gender in a sample of 446 independent pairs of parents and preschool children (2–5 years old) from the Netherlands. A stronger association between both secure and avoidant attachment and internalizing problems was found for father–son dyads compared to father–daughter dyads. A stronger association between both secure and avoidant attachment and externalizing problems was found for mother–son dyads compared to mother–daughter and father–daughter dyads. Sons showed a stronger negative association between secure attachment and externalizing problems, a stronger positive association between avoidant attachment and externalizing problems, and a stronger negative association between secure attachment and internalizing problems compared to daughters. These results provide evidence for gender moderation and demonstrate that a dyadic approach can reveal patterns of associations that would not be recognized if parent and child gender effects were assessed separately.

    Additional information

    analysis code
  • Mazzini*, S., Seijdel*, N., & Drijvers*, L. (2025). Autistic individuals benefit from gestures during degraded speech comprehension. Autism, 29(2), 544-548. doi:10.1177/13623613241286570.

    Abstract

    *All authors contributed equally to this work
    Meaningful gestures enhance degraded speech comprehension in neurotypical adults, but it is unknown whether this is the case for neurodivergent populations, such as autistic individuals. Previous research demonstrated atypical multisensory and speech-gesture integration in autistic individuals, suggesting that integrating speech and gestures may be more challenging and less beneficial for speech comprehension in adverse listening conditions in comparison to neurotypicals. Conversely, autistic individuals could also benefit from additional cues to comprehend speech in noise, as they encounter difficulties in filtering relevant information from noise. Here, we investigated whether gestural enhancement of degraded speech comprehension differs for neurotypical (n = 40, mean age = 24.1) compared to autistic (n = 40, mean age = 26.8) adults. Participants watched videos of an actress uttering a Dutch action verb in clear or degraded speech, with or without an accompanying gesture, and completed a free-recall task. Gestural enhancement was observed for both autistic and neurotypical individuals, and did not differ between groups. In contrast to previous literature, our results demonstrate that autistic individuals do benefit from gestures during degraded speech comprehension, similar to neurotypicals. These findings provide relevant insights to improve communication practices with autistic individuals and to develop new interventions for speech comprehension.
  • Ye, C., McQueen, J. M., & Bosker, H. R. (2025). Effect of auditory cues to lexical stress on the visual perception of gestural timing. Attention, Perception & Psychophysics. Advance online publication. doi:10.3758/s13414-025-03072-z.

    Abstract

    Speech is often accompanied by gestures. Since beat gestures—simple nonreferential up-and-down hand movements—frequently co-occur with prosodic prominence, they can indicate stress in a word and hence influence spoken-word recognition. However, little is known about the reverse influence of auditory speech on visual perception. The current study investigated whether lexical stress has an effect on the perceived timing of hand beats. We used videos in which a disyllabic word, embedded in a carrier sentence (Experiment 1) or in isolation (Experiment 2), was coupled with an up-and-down hand beat, while varying their degrees of asynchrony. Results from Experiment 1, a novel beat timing estimation task, revealed that gestures were estimated to occur closer in time to the pitch peak in a stressed syllable than their actual timing, hence reducing the perceived temporal distance between gestures and stress by around 60%. Using a forced-choice task, Experiment 2 further demonstrated that listeners tended to perceive a gesture, falling midway between two syllables, on the syllable receiving stronger cues to stress than the other, and this auditory effect was greater when gestural timing was most ambiguous. Our findings suggest that f0 and intensity are the driving force behind the temporal attraction effect of stress on perceived gestural timing. This study provides new evidence for auditory influences on visual perception, supporting bidirectionality in audiovisual interaction between speech-related signals that occur in everyday face-to-face communication.
  • Mooijman, S., Schoonen, R., Goral, M., Roelofs, A., & Ruiter, M. B. (2025). Why do bilingual speakers with aphasia alternate between languages? A study into their experiences and mixing patterns. Aphasiology. Advance online publication. doi:10.1080/02687038.2025.2452928.

    Abstract

    Background

    The factors that contribute to language alternation by bilingual speakers with aphasia have been debated. Some studies suggest that atypical language mixing results from impairments in language control, while others posit that mixing is a way to enhance communicative effectiveness. To address this question, most prior research examined the appropriateness of language mixing in connected speech tasks.
    Aims

    The goal of this study was to provide new insight into the question of whether language mixing in aphasia reflects a strategy to enhance verbal effectiveness or involuntary behaviour resulting from impaired language control.
    Methods & procedures

    Semi-structured web-based interviews with bilingual speakers with aphasia (N = 19) with varying language backgrounds were conducted. The interviews were transcribed and coded for: (1) Self-reports regarding language control and compensation, (2) instances of language mixing, and (3) in two cases, instances of repair initiation.
    Outcomes & results

    The results showed that several participants reported language control difficulties but that the knowledge of additional languages could also be recruited to compensate for lexical retrieval problems. Most participants showed no or very few instances of mixing and the observed mixes appeared to adhere to the pragmatic context and known functions of switching. Three participants exhibited more marked switching behaviour and reported corresponding difficulties with language control. Instances of atypical mixing did not coincide with clear problems initiating conversational repair.
    Conclusions

    Our study highlights the variability in language mixing patterns of bilingual speakers with aphasia. Furthermore, most of the individuals in the study appeared to be able to effectively control their languages, and to alternate between their languages for compensatory purposes. Control deficits resulting in atypical language mixing were observed in a small number of participants.
  • Papoutsi, C., Tourtouri, E. N., Piai, V., Lampe, L. F., & Meyer, A. S. (2025). Fast and slow errors: What naming latencies of errors reveal about the interplay of attentional control and word planning in speeded picture naming. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. doi:10.1037/xlm0001472.

    Abstract

    Speakers sometimes produce lexical errors, such as saying “salt” instead of “pepper.” This study aimed to better understand the origin of lexical errors by assessing whether they arise from a hasty selection and premature decision to speak (premature selection hypothesis) or from momentary attentional disengagement from the task (attentional lapse hypothesis). We analyzed data from a speeded picture naming task (Lampe et al., 2023) and investigated whether lexical errors are produced as fast as target (i.e., correct) responses, thus arising from premature selection, or whether they are produced more slowly than target responses, thus arising from lapses of attention. Using ex-Gaussian analyses, we found that lexical errors were slower than targets in the tail, but not in the normal part of the response time distribution, with the tail effect primarily resulting from errors that were not coordinates, that is, members of the target’s semantic category. Moreover, we compared the coordinate errors and target responses in terms of their word-intrinsic properties and found that they were overall more frequent, shorter, and acquired earlier than targets. Given the present findings, we conclude that coordinate errors occur due to a premature selection but in the context of intact attentional control, following the same lexical constraints as targets, while other errors, given the variability in their nature, may vary in their origin, with one potential source being lapses of attention.
  • Rohrer, P. L., Bujok, R., Van Maastricht, L., & Bosker, H. R. (2025). From “I dance” to “she danced” with a flick of the hands: Audiovisual stress perception in Spanish. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-025-02683-9.

    Abstract

    When talking, speakers naturally produce hand movements (co-speech gestures) that contribute to communication. Evidence in Dutch suggests that the timing of simple up-and-down, non-referential “beat” gestures influences spoken word recognition: the same auditory stimulus was perceived as CONtent (noun, capitalized letters indicate stressed syllables) when a beat gesture occurred on the first syllable, but as conTENT (adjective) when the gesture occurred on the second syllable. However, these findings were based on a small number of minimal pairs in Dutch, limiting their generalizability. We therefore tested this effect in Spanish, where lexical stress is highly relevant in the verb conjugation system, distinguishing bailo, “I dance” with word-initial stress, from bailó, “she danced” with word-final stress. Testing a larger sample (N = 100), we also assessed whether individual differences in working memory capacity modulated how much individuals relied on the gestures in spoken word recognition. The results showed that, similar to Dutch, Spanish participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture, with the effect being strongest when the acoustic stress cues were most ambiguous. No evidence was found for by-participant effect sizes to be influenced by individual differences in phonological or visuospatial working memory. These findings reveal that gestural-speech coordination impacts lexical stress perception in a language where listeners are regularly confronted with such lexical stress contrasts, highlighting the impact of gestures’ timing on prominence perception and spoken word recognition.
  • Roos, N. M. (2025). Naming a picture in context: Paving the way to investigate language recovery after stroke. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sander, J., Zhang, Y., & Rowland, C. F. (2025). Language acquisition occurs in multimodal social interaction: A commentary on Karadöller, Sümer and Özyürek [invited commentary]. First Language. Advance online publication. doi:10.1177/01427237251326984.

    Abstract

    We argue that language learning occurs in triadic interactions, where caregivers and children engage not only with each other but also with objects, actions and non-verbal cues that shape language acquisition. We illustrate this using two studies on real-time interactions in spoken and signed language. The first examines shared book reading, showing how caregivers use speech, gestures and gaze coordination to establish joint attention, facilitating word-object associations. The second study explores joint attention in spoken and signed interactions, demonstrating that signing dyads rely on a wider range of multimodal behaviours – such as touch, vibrations and peripheral gaze – compared to speaking dyads. Our data highlight how different language modalities shape attentional strategies. We advocate for research that fully incorporates the dynamic interplay between language, attention and environment.
  • Severijnen, G. G. A. (2025). A blessing in disguise: How prosodic variability challenges but also aids successful speech perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ter Bekke, M. (2025). On how gestures facilitate prediction and fast responding during conversation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2025). Co-speech hand gestures are used to predict upcoming meaning. Psychological Science, 36(4), 237-248. doi:10.1177/09567976251331041.

    Abstract

    In face-to-face conversation, people use speech and gesture to convey meaning. Seeing gestures alongside speech facilitates comprehenders’ language processing, but crucially, the mechanisms underlying this facilitation remain unclear. We investigated whether comprehenders use the semantic information in gestures, typically preceding related speech, to predict upcoming meaning. Dutch adults listened to questions asked by a virtual avatar. Questions were accompanied by an iconic gesture (e.g., typing) or meaningless control movement (e.g., arm scratch) followed by a short pause and target word (e.g., “type”). A Cloze experiment showed that gestures improved explicit predictions of upcoming target words. Moreover, an EEG experiment showed that gestures reduced alpha and beta power during the pause, indicating anticipation, and reduced N400 amplitudes, demonstrating facilitated semantic processing. Thus, comprehenders use iconic gestures to predict upcoming meaning. Theories of linguistic prediction should incorporate communicative bodily signals as predictive cues to capture how language is processed in face-to-face interaction.

    Additional information

    supplementary material
  • Van Geert, E., Ding, R., & Wagemans, J. (2025). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts, 43(1), 250-275. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects of the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what counts as a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
  • Carrion Castillo, A., Franke, B., & Fisher, S. E. (2013). Molecular genetics of dyslexia: An overview. Dyslexia, 19(4), 214-240. doi:10.1002/dys.1464.

    Abstract

    Dyslexia is a highly heritable learning disorder with a complex underlying genetic architecture. Over the past decade, researchers have pinpointed a number of candidate genes that may contribute to dyslexia susceptibility. Here, we provide an overview of the state of the art, describing how studies have moved from mapping potential risk loci, through identification of associated gene variants, to characterization of gene function in cellular and animal model systems. Work thus far has highlighted some intriguing mechanistic pathways, such as neuronal migration, axon guidance, and ciliary biology, but it is clear that we still have much to learn about the molecular networks that are involved. We end the review by highlighting the past, present, and future contributions of the Dutch Dyslexia Programme to studies of genetic factors. In particular, we emphasize the importance of relating genetic information to intermediate neurobiological measures, as well as the value of incorporating longitudinal and developmental data into molecular designs.
  • Dolscheid, S. (2013). High pitches and thick voices: The role of language in space-pitch associations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Dolscheid, S., Graver, C., & Casasanto, D. (2013). Spatial congruity effects reveal metaphors, not markedness. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2213-2218). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0405/index.html.

    Abstract

    Spatial congruity effects have often been interpreted as evidence for metaphorical thinking, but an alternative markedness-based account challenges this view. In two experiments, we directly compared metaphor and markedness explanations for spatial congruity effects, using musical pitch as a testbed. English speakers who talk about pitch in terms of spatial height were tested in speeded space-pitch compatibility tasks. To determine whether space-pitch congruency effects could be elicited by any marked spatial continuum, participants were asked to classify high- and low-frequency pitches as 'high' and 'low' or as 'front' and 'back' (both pairs of terms constitute cases of marked continuums). We found congruency effects in high/low conditions but not in front/back conditions, indicating that markedness is not sufficient to account for congruity effects (Experiment 1). A second experiment showed that congruency effects were specific to spatial words that cued a vertical schema (tall/short), and that congruity effects were not an artifact of polysemy (e.g., 'high' referring both to space and pitch). Together, these results suggest that congruency effects reveal metaphorical uses of spatial schemas, not markedness effects.
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2013). The thickness of musical pitch: Psychophysical evidence for linguistic relativity. Psychological Science, 24, 613-621. doi:10.1177/0956797612457374.

    Abstract

    Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (na-zok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers’ performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.

  • Enfield, N. J., Dingemanse, M., Baranova, J., Blythe, J., Brown, P., Dirksmeyer, T., Drew, P., Floyd, S., Gipper, S., Gisladottir, R. S., Hoymann, G., Kendrick, K. H., Levinson, S. C., Magyari, L., Manrique, E., Rossi, G., San Roque, L., & Torreira, F. (2013). Huh? What? – A first survey in 21 languages. In M. Hayashi, G. Raymond, & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 343-380). New York: Cambridge University Press.

    Abstract

    A comparison of conversation in twenty-one languages from around the world reveals commonalities and differences in the way that people do open-class other-initiation of repair (Schegloff, Jefferson, and Sacks, 1977; Drew, 1997). We find that speakers of all of the spoken languages in the sample make use of a primary interjection strategy (in English it is Huh?), where the phonetic form of the interjection is strikingly similar across the languages: a monosyllable featuring an open non-back vowel [a, æ, ə, ʌ], often nasalized, usually with rising intonation and sometimes an [h-] onset. We also find that most of the languages have another strategy for open-class other-initiation of repair, namely the use of a question word (usually “what”). Here we find significantly more variation across the languages. The phonetic form of the question word involved is completely different from language to language: e.g., English [wɑt] versus Cha'palaa [ti] versus Duna [aki]. Furthermore, the grammatical structure in which the repair-initiating question word can or must be expressed varies within and across languages. In this chapter we present data on these two strategies – primary interjections like Huh? and question words like What? – with discussion of possible reasons for the similarities and differences across the languages. We explore some implications for the notion of repair as a system, in the context of research on the typology of language use.

    The general outline of this chapter is as follows. We first discuss repair as a system across languages and then introduce the focus of the chapter: open-class other-initiation of repair. A discussion of the main findings follows, where we identify two alternative strategies in the data: an interjection strategy (Huh?) and a question word strategy (What?). Formal features and possible motivations are discussed for the interjection strategy and the question word strategy in order. A final section discusses bodily behavior including posture, eyebrow movements and eye gaze, both in spoken languages and in a sign language.
  • Gialluisi, A., Incollu, S., Pippucci, T., Lepori, M. B., Zappu, A., Loudianos, G., & Romeo, G. (2013). The homozygosity index (HI) approach reveals high allele frequency for Wilson disease in the Sardinian population. European Journal of Human Genetics, 21, 1308-1311. doi:10.1038/ejhg.2013.43.

    Abstract

    Wilson disease (WD) is an autosomal recessive disorder resulting in pathological progressive copper accumulation in liver and other tissues. The worldwide prevalence (P) is about 30/million, while in Sardinia it is in the order of 1/10 000. However, all of these estimates are likely to suffer from an underdiagnosis bias. Indeed, a recent molecular neonatal screening in Sardinia reported a WD prevalence of 1:2707. In this study, we used a new approach that makes it possible to estimate the allelic frequency (q) of an autosomal recessive disorder if one knows the proportion between homozygous and compound heterozygous patients (the homozygosity index or HI) and the inbreeding coefficient (F) in a sample of affected individuals. We applied the method to a set of 178 Sardinian individuals (3 of whom born to consanguineous parents), each with a clinical and molecular diagnosis of WD. Taking into account the geographical provenance of the parents of every patient within Sardinia (to make F computation more precise), we obtained a q=0.0191 (F=7.8 × 10⁻⁴, HI=0.476) and a corresponding prevalence P=1:2732. This result confirms that the prevalence of WD is largely underestimated in Sardinia. On the other hand, the general reliability and applicability of the HI approach to other autosomal recessive disorders is confirmed, especially if one is interested in the genetic epidemiology of populations with high frequency of consanguineous marriages.
  • Gussenhoven, C., & Zhou, W. (2013). Revisiting pitch slope and height effects on perceived duration. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 1365-1369).

    Abstract

    The shape of pitch contours has been shown to have an effect on the perceived duration of vowels. For instance, vowels with high level pitch and vowels with falling contours sound longer than vowels with low level pitch. Depending on whether the comparison is between level pitches or between level and dynamic contours, these findings have been interpreted in two ways. For inter-level comparisons, where the duration results are the reverse of production results, a hypercorrection strategy in production has been proposed [1]. By contrast, for comparisons between level pitches and dynamic contours, the longer production data for dynamic contours have been held responsible. We report an experiment with Dutch and Chinese listeners which aimed to show that production data and perception data are each other's opposites for high, low, falling and rising contours. We explain the results, which are consistent with earlier findings, in terms of the compensatory listening strategy of [2], arguing that the perception effects are due to a perceptual compensation of articulatory strategies and constraints, rather than that differences in production compensate for psycho-acoustic perception effects.
  • Hanique, I., Aalders, E., & Ernestus, M. (2013). How robust are exemplar effects in word comprehension? The Mental Lexicon, 8, 269-294. doi:10.1075/ml.8.3.01han.

    Abstract

    This paper studies the robustness of exemplar effects in word comprehension by means of four long-term priming experiments with lexical decision tasks in Dutch. A prime and target represented the same word type and were presented with the same or different degree of reduction. In Experiment 1, participants heard only a small number of trials, a large proportion of repeated words, and stimuli produced by only one speaker. They recognized targets more quickly if these represented the same degree of reduction as their primes, which forms additional evidence for the exemplar effects reported in the literature. Similar effects were found for two speakers who differ in their pronunciations. In Experiment 2, with a smaller proportion of repeated words and more trials between prime and target, participants recognized targets preceded by primes with the same or a different degree of reduction equally quickly. Also, in Experiments 3 and 4, in which listeners were not exposed to one but two types of pronunciation variation (reduction degree and speaker voice), no exemplar effects arose. We conclude that the role of exemplars in speech comprehension during natural conversations, which typically involve several speakers and few repeated content words, may be smaller than previously assumed.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals, such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants' task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker's message.
  • Johnson, E. K., Lahey, M., Ernestus, M., & Cutler, A. (2013). A multimodal corpus of speech to infant and adult listeners. Journal of the Acoustical Society of America, 134, EL534-EL540. doi:10.1121/1.4828977.

    Abstract

    An audio and video corpus of speech addressed to 28 11-month-olds is described. The corpus allows comparisons between adult speech directed towards infants, familiar adults and unfamiliar adult addressees, as well as of caregivers’ word teaching strategies across word classes. Summary data show that infant-directed speech differed more from speech to unfamiliar than familiar adults; that word teaching strategies for nominals versus verbs and adjectives differed; that mothers mostly addressed infants with multi-word utterances; and that infants’ vocabulary size was unrelated to speech rate, but correlated positively with predominance of continuous caregiver speech (not of isolated words) in the input.
  • Kupisch, T., Akpinar, D., & Stoehr, A. (2013). Gender assignment and gender agreement in adult bilinguals and second language learners of French. Linguistic Approaches to Bilingualism, 3, 150-179. doi:10.1075/lab.3.2.02kup.
  • Mulder, K. (2013). Family and neighbourhood relations in the mental lexicon: A cross-language perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Every day we read and hear thousands of words, seemingly without any effort. Yet all the while a complex mental process unfolds in our brain, in which many words other than the presented word also become active. This happens in particular when those other words resemble the actually presented words in spelling, pronunciation, or meaning. This similarity-driven activation even extends to other languages: similar words become active there as well. Where are the limits of this activation process? When processing the English word 'steam', do you also activate the Dutch word 'stram' (a so-called 'neighbour')? And for 'clock', do you activate both 'clockwork' and the Dutch 'klokhuis' (two morphological family members from different languages)? Kimberley Mulder investigated how such relations influence the reading process of Dutch-English bilinguals. In several experimental studies she found that bilinguals activate morphological family members and orthographic neighbours not only from the language they are currently reading in, but also from the other language they know. Reading a word is thus by no means limited to what you actually see; it activates a whole network of words in your brain.

  • Mulder, K., Schreuder, R., & Dijkstra, T. (2013). Morphological family size effects in L1 and L2 processing: An electrophysiological study. Language and Cognitive Processes, 27, 1004-1035. doi:10.1080/01690965.2012.733013.

    Abstract

    The present study examined Morphological Family Size effects in first and second language processing. Items with a high or low Dutch (L1) Family Size were contrasted in four experiments involving Dutch–English bilinguals. In two experiments, reaction times (RTs) were collected in English (L2) and Dutch (L1) lexical decision tasks; in two other experiments, an L1 and L2 go/no-go lexical decision task were performed while Event-Related Potentials (ERPs) were recorded. Two questions were addressed. First, is the ERP signal sensitive to the morphological productivity of words? Second, does nontarget language activation in L2 processing spread beyond the item itself, to the morphological family of the activated nontarget word? The two behavioural experiments both showed a facilitatory effect of Dutch Family Size, indicating that the morphological family in the L1 is activated regardless of language context. In the two ERP experiments, Family Size effects were found to modulate the N400 component. Less negative waveforms were observed for words with a high L1 Family Size compared to words with a low L1 Family Size in the N400 time window, in both the L1 and L2 task. In addition, these Family Size effects persisted in later time windows. The data are discussed in light of the Morphological Family Resonance Model (MFRM) of morphological processing and the BIA+ model.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer's communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
  • Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.

    Abstract

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.

    Abstract

    A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400-650 ms in left superior frontal gyrus was larger on semantic than unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition, respectively, and their temporal relation, supporting the theory that words are selected by competition.
  • Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.

    Abstract

    Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
  • Piai, V., & Roelofs, A. (2013). Working memory capacity and dual-task interference in picture naming. Acta Psychologica, 142, 332-342. doi:10.1016/j.actpsy.2013.01.006.
  • Poellmann, K. (2013). The many ways listeners adapt to reductions in casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Roelofs, A., & Piai, V. (2013). Associative facilitation in the Stroop task: Comment on Mahon et al. Cortex, 49, 1767-1769. doi:10.1016/j.cortex.2013.03.001.

    Abstract

    First paragraph: A fundamental issue in psycholinguistics concerns how speakers retrieve intended words from long-term memory. According to a selection by competition account (e.g., Levelt et al., 1999), conceptually driven word retrieval involves the activation of a set of candidate words and a competitive selection of the intended word from this set.
  • Roelofs, A., Piai, V., & Schriefers, H. (2013). Context effects and selective attention in picture naming and word reading: Competition versus response exclusion. Language and Cognitive Processes, 28, 655-671. doi:10.1080/01690965.2011.615663.

    Abstract

    For several decades, context effects in picture naming and word reading have been extensively investigated. However, researchers have found no agreement on the explanation of the effects. Whereas it has long been assumed that several types of effect reflect competition in word selection, recently it has been argued that these effects reflect the exclusion of articulatory responses from an output buffer. Here, we first critically evaluate the findings on context effects in picture naming that have been taken as evidence against the competition account, and we argue that the findings are, in fact, compatible with the competition account. Moreover, some of the findings appear to challenge rather than support the response exclusion account. Next, we compare the response exclusion and competition accounts with respect to their ability to explain data on word reading. It appears that response exclusion does not account well for context effects on word reading times, whereas computer simulations reveal that a competition model like WEAVER++ accounts for the findings.

  • Roelofs, A., Dijkstra, T., & Gerakaki, S. (2013). Modeling of word translation: Activation flow from concepts to lexical items. Bilingualism: Language and Cognition, 16, 343-353. doi:10.1017/S1366728912000612.

    Abstract

    Whereas most theoretical and computational models assume a continuous flow of activation from concepts to lexical items in spoken word production, one prominent model assumes that the mapping of concepts onto words happens in a discrete fashion (Bloem & La Heij, 2003). Semantic facilitation of context pictures on word translation has been taken to support the discrete-flow model. Here, we report results of computer simulations with the continuous-flow WEAVER++ model (Roelofs, 1992, 2006) demonstrating that the empirical observation taken to be in favor of discrete models is, in fact, only consistent with those models and equally compatible with more continuous models of word production by monolingual and bilingual speakers. Continuous models are specifically and independently supported by other empirical evidence on the effect of context pictures on native word production.
  • Roelofs, A., Piai, V., & Schriefers, H. (2013). Selection by competition in word production: Rejoinder to Janssen (2012). Language and Cognitive Processes, 28, 679-683. doi:10.1080/01690965.2013.770890.

    Abstract

    Roelofs, Piai, and Schriefers argue that several findings on the effect of distractor words and pictures in producing words support a selection-by-competition account and challenge a non-competitive response-exclusion account. Janssen argues that the findings do not challenge response exclusion, and he conjectures that both competitive and non-competitive mechanisms underlie word selection. Here, we maintain that the findings do challenge the response-exclusion account and support the assumption of a single competitive mechanism underlying word selection.

  • Rommers, J., Meyer, A. S., & Huettig, F. (2013). Object shape and orientation do not routinely influence performance during language processing. Psychological Science, 24, 2218-2225. doi:10.1177/0956797613490746.

    Abstract

    The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.

  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). The contents of predictions in sentence comprehension: Activation of the shape of objects before they are referred to. Neuropsychologia, 51(3), 437-447. doi:10.1016/j.neuropsychologia.2012.12.002.

    Abstract

    When comprehending concrete words, listeners and readers can activate specific visual information such as the shape of the words’ referents. In two experiments we examined whether such information can be activated in an anticipatory fashion. In Experiment 1, listeners’ eye movements were tracked while they were listening to sentences that were predictive of a specific critical word (e.g., “moon” in “In 1969 Neil Armstrong was the first man to set foot on the moon”). 500 ms before the acoustic onset of the critical word, participants were shown four-object displays featuring three unrelated distractor objects and a critical object, which was either the target object (e.g., moon), an object with a similar shape (e.g., tomato), or an unrelated control object (e.g., rice). In a time window before shape information from the spoken target word could be retrieved, participants already tended to fixate both the target and the shape competitors more often than they fixated the control objects, indicating that they had anticipatorily activated the shape of the upcoming word's referent. This was confirmed in Experiment 2, which was an ERP experiment without picture displays. Participants listened to the same lead-in sentences as in Experiment 1. The sentence-final words corresponded to the predictable target, the shape competitor, or the unrelated control object (yielding, for instance, “In 1969 Neil Armstrong was the first man to set foot on the moon/tomato/rice”). N400 amplitude in response to the final words was significantly attenuated in the shape-related compared to the unrelated condition. Taken together, these results suggest that listeners can activate perceptual attributes of objects before they are referred to in an utterance.
  • Rommers, J. (2013). Seeing what's next: Processing and anticipating language referring to objects. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sauppe, S., Norcliffe, E., Konopka, A. E., Van Valin Jr., R. D., & Levinson, S. C. (2013). Dependencies first: Eye tracking evidence from sentence production in Tagalog. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1265-1270). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the time course of sentence formulation in Tagalog, a verb-initial language in which the verb obligatorily agrees with one of its arguments. Eye-tracked participants described pictures of transitive events. Fixations to the two characters in the events were compared across sentences differing in agreement marking and post-verbal word order. Fixation patterns show evidence for two temporally dissociated phases in Tagalog sentence production. The first, driven by verb agreement, involves early linking of concepts to syntactic functions; the second, driven by word order, involves incremental lexical encoding of these concepts. These results suggest that even the earliest stages of sentence formulation may be guided by a language's grammatical structure.
  • Schepens, J., Dijkstra, T., Grootjen, F., & Van Heuven, W. J. (2013). Cross-language distributions of high frequency and phonetically similar cognates. PLOS ONE, 8(5): e63006. doi:10.1371/journal.pone.0063006.

    Abstract

    The coinciding form and meaning similarity of cognates, e.g. ‘flamme’ (French), ‘Flamme’ (German), ‘vlam’ (Dutch), meaning ‘flame’ in English, facilitates learning of additional languages. The cross-language frequency and similarity distributions of cognates vary according to evolutionary change and language contact. We compare frequency and orthographic (O), phonetic (P), and semantic similarity of cognates, automatically identified in semi-complete lexicons of six widely spoken languages. Comparisons of P and O similarity reveal inconsistent mappings in language pairs with deep orthographies. The frequency distributions show that cognate frequency is reduced in less closely related language pairs as compared to more closely related languages (e.g., French-English vs. German-English). These frequency and similarity patterns may support a better understanding of cognate processing in natural and experimental settings. The automatically identified cognates are available in the supplementary materials, including the frequency and similarity measurements.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2013). The effect of linguistic distance across Indo-European mother tongues on learning Dutch as a second language. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 199-230). Berlin: Mouton de Gruyter.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). An amodal shared resource model of language-mediated visual attention. Frontiers in Psychology, 4: 528. doi:10.3389/fpsyg.2013.00528.

    Abstract

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effects of formal literacy training on language mediated visual attention. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3420-3425). Austin, TX: Cognitive Science Society.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze is partly determined by level of formal literacy training. Huettig, Singh and Mishra (2011) showed that high-literate individuals' eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display. In contrast, low-literate individuals' eye gaze was not related to phonological overlap, but was instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behavior is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on more coarse-grained structure. This hypothesis was tested using a neural network model that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behavior similar to those observed between high and low literates emerge when models are trained on speech signals of contrasting granularity.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial – mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, the previous studies have taken English as the only model for spoken language development of spatial relations.
    Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box).
    The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed.
  • Van Putten, S. (2013). [Review of the book The expression of information structure. A documentation of its diversity across Africa, ed. by Ines Fiedler and Anne Schwarz]. Journal of African Languages and Linguistics, 34, 183-186. doi:10.1515/jall-2013-0005.

    Abstract

    This volume contains 13 papers dealing with various aspects of information structure in a wide variety of African languages. They form the proceedings of a workshop organized by the Collaborative Research Center on Information Structure (University of Potsdam and Humboldt University, Berlin). In the introduction, the editors define the main contribution of this volume in terms of “the spectrum of information-structural notions and phenomena discussed, the investigation of information structure in several relatively unfamiliar languages and the genealogical width of the African languages studied” (vii–viii, emphasis added). In this sense it complements the previous volume on information structure in African languages published by the Collaborative Research Center and the University of Amsterdam (Aboh, Hartmann & Zimmermann, 2007), which was more theory-oriented.
  • Van der Zande, P. (2013). Hearing and seeing speech: Perceptual adjustments in auditory-visual speech processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van der Zande, P., Jesse, A., & Cutler, A. (2013). Lexically guided retuning of visual phonetic categories. Journal of the Acoustical Society of America, 134, 562-571. doi:10.1121/1.4807814.

    Abstract

    Listeners retune the boundaries between phonetic categories to adjust to individual speakers' productions. Lexical information, for example, indicates what an unusual sound is supposed to be, and boundary retuning then enables the speaker's sound to be included in the appropriate auditory phonetic category. In this study, it was investigated whether lexical knowledge that is known to guide the retuning of auditory phonetic categories can also retune visual phonetic categories. In Experiment 1, exposure to a visual idiosyncrasy in ambiguous audiovisually presented target words in a lexical decision task indeed resulted in retuning of the visual category boundary based on the disambiguating lexical context. In Experiment 2 it was tested whether lexical information retunes visual categories directly, or indirectly through the generalization from retuned auditory phonetic categories. Here, participants were exposed to auditory-only versions of the same ambiguous target words as in Experiment 1. Auditory phonetic categories were retuned by lexical knowledge, but no shifts were observed for the visual phonetic categories. Lexical knowledge can therefore guide retuning of visual phonetic categories, but lexically guided retuning of auditory phonetic categories is not generalized to visual categories. Rather, listeners adjust auditory and visual phonetic categories to talker idiosyncrasies separately.
  • Van Putten, S. (2013). The meaning of the Avatime additive particle tsye. In M. Balbach, L. Benz, S. Genzel, M. Grubic, A. Renans, S. Schalowski, M. Stegenwallner, & A. Zeldes (Eds.), Information structure: Empirical perspectives on theory (pp. 55-74). Potsdam: Universitätsverlag Potsdam. Retrieved from http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:de:kobv:517-opus-64804.
  • Verkerk, A., & Frostad, B. H. (2013). The encoding of manner predications and resultatives in Oceanic: A typological and historical overview. Oceanic Linguistics, 52, 1-35. doi:10.1353/ol.2013.0010.

    Abstract

    This paper is concerned with the encoding of resultatives and manner predications in Oceanic languages. Our point of departure is a typological overview of the encoding strategies and their geographical distribution, and we investigate their historical traits by the use of phylogenetic comparative methods. A full theory of the historical pathways is not always accessible for all the attested encoding strategies, given the data available for this study. However, tentative theories about the development and origin of the attested strategies are given. One of the most frequent strategy types used to encode both manner predications and resultatives has been given special emphasis. This is a construction in which a reflex form of the Proto-Oceanic causative *pa-/*paka- modifies the second verb in serial verb constructions.

    Additional information

    52.1.verkerk_supp01.pdf
  • Verkerk, A. (2013). Scramble, scurry and dash: The correlation between motion event encoding and manner verb lexicon size in Indo-European. Language Dynamics and Change, 3, 169-217. doi:10.1163/22105832-13030202.

    Abstract

    In recent decades, much has been discovered about the different ways in which people can talk about motion (Talmy, 1985, 1991; Slobin, 1996, 1997, 2004). Slobin (1997) has suggested that satellite-framed languages typically have a larger and more diverse lexicon of manner of motion verbs (such as run, fly, and scramble) when compared to verb-framed languages. Slobin (2004) has claimed that larger manner of motion verb lexicons originate over time because codability factors increase the accessibility of manner in satellite-framed languages. In this paper I investigate the dependency between the use of the satellite-framed encoding construction and the size of the manner verb lexicon. The data used come from 20 Indo-European languages. The methodology applied is a range of phylogenetic comparative methods adopted from biology, which allow for an investigation of this dependency while taking into account the shared history between these 20 languages. The results provide evidence that Slobin’s hypothesis was correct, and indeed there seems to be a relationship between the use of the satellite-framed construction and the size of the manner verb lexicon.
  • Witteman, M. J. (2013). Lexical processing of foreign-accented speech: Rapid and flexible adaptation. PhD Thesis, Radboud University Nijmegen, Nijmegen.