  • Rojas-Berscia, L. M., & Ghavami Dicker, S. (2015). Teonimia en el Alto Amazonas, el caso de Kanpunama [Theonymy in the Upper Amazon: The case of Kanpunama]. Escritura y Pensamiento, 18(36), 117-146.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.

    Abstract

    During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing.
  • Rossi, G. (2015). Other-initiated repair in Italian. Open Linguistics, 1(1), 256-282. doi:10.1515/opli-2015-0002.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video recorded conversation in the Italian language (Romance). The article reports findings specific to the Italian language from the comparative project that is the topic of this special issue. While giving an overview of all the major practices for other-initiation of repair found in this language, special attention is given to (i) the functional distinctions between different open strategies (interjection, question words, formulaic), and (ii) the role of intonation in discriminating alternative restricted strategies, with a focus on different contour types used to produce repetitions.
  • Rossi, G. (2015). Responding to pre-requests: The organization of hai x ‘do you have x’ sequences in Italian. Journal of Pragmatics, 82, 5-22. doi:10.1016/j.pragma.2015.03.008.

    Abstract

    Among the strategies used by people to request others to do things, there is a particular family defined as pre-requests. The typical function of a pre-request is to check whether some precondition obtains for a request to be successfully made. A form like the Italian interrogative hai x ‘do you have x’, for example, is used to ask if an object is available — a requirement for the object to be transferred or manipulated. But what does it mean exactly to make a pre-request? What difference does it make compared to issuing a request proper? In this article, I address these questions by examining the use of hai x ‘do you have x’ interrogatives in a corpus of informal Italian interaction. Drawing on methods from conversation analysis and linguistics, I show that the status of hai x as a pre-request is reflected in particular properties in the domains of preference and sequence organisation, specifically in the design of blocking responses to the pre-request, and in the use of go-ahead responses, which lead to the expansion of the request sequence. This study contributes to current research on requesting as well as on sequence organisation by demonstrating the response affordances of pre-requests and by furthering our understanding of the processes of sequence expansion.
  • Rossi, G. (2015). The request system in Italian interaction. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    People across the world make requests every day. We constantly rely on others to get by in the small and big practicalities of everyday life, be it getting the salt, moving a sofa, or cooking a meal. It has long been noticed that when we ask others for help we use a wide range of forms drawing on various resources afforded by our language and body. To get another to pass the salt, for example, we may say ‘Pass the salt’, or ask ‘Can you pass me the salt?’, or simply point to the salt. What do different forms of requesting give us? The short answer is that they allow us to manage different social relations. But what kind of relations? While prior research has mostly emphasised the role of long-term asymmetries like people’s social distance and relative power, this thesis puts at centre stage social relations and dimensions emerging in the moment-by-moment flow of everyday interaction. These include how easy or hard the action requested is to anticipate for the requestee, whether the action requested contributes to a joint project or serves an individual one, whether the requestee may be unwilling to do it, and how obvious or equivocal it is that a certain person or another should be involved in the action. The study focuses on requests made in everyday informal interactions among speakers of Italian. It involves over 500 instances of requests sampled from a diverse corpus of video recordings, and draws on methods from conversation analysis, linguistics and multimodal analysis. A qualitative analysis of the data is supported by quantitative measures of the distribution of linguistic and interactional features, and by the use of inferential statistics to test the generalizability of some of the patterns observed. The thesis aims to contribute to our understanding of both language and social interaction by showing that forms of requesting constitute a system, organised by a set of recurrent social-interactional concerns.

    Additional information

    full text via Radboud Repository
  • San Roque, L., Kendrick, K. H., Norcliffe, E., Brown, P., Defina, R., Dingemanse, M., Dirksmeyer, T., Enfield, N. J., Floyd, S., Hammond, J., Rossi, G., Tufvesson, S., Van Putten, S., & Majid, A. (2015). Vision verbs dominate in conversation across cultures, but the ranking of non-visual verbs varies. Cognitive Linguistics, 26, 31-60. doi:10.1515/cog-2014-0089.

    Abstract

    To what extent does perceptual language reflect universals of experience and cognition, and to what extent is it shaped by particular cultural preoccupations? This paper investigates the universality~relativity of perceptual language by examining the use of basic perception terms in spontaneous conversation across 13 diverse languages and cultures. We analyze the frequency of perception words to test two universalist hypotheses: that sight is always a dominant sense, and that the relative ranking of the senses will be the same across different cultures. We find that references to sight outstrip references to the other senses, suggesting a pan-human preoccupation with visual phenomena. However, the relative frequency of the other senses was found to vary cross-linguistically. Cultural relativity was conspicuous as exemplified by the high ranking of smell in Semai, an Aslian language. Together these results suggest a place for both universal constraints and cultural shaping of the language of perception.
  • Schepens, J. (2015). Bridging linguistic gaps: The effects of linguistic distance on adult learnability of Dutch as an additional language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 211-216). Nantes: University of Nantes.

    Abstract

    Speakers can adapt their speech and co-speech gestures for addressees. Here, we investigate whether this ability is modulated by age. Younger and older adults participated in a comic narration task in which one participant (the speaker) narrated six short comic stories to another participant (the addressee). One half of each story was known to both participants, the other half only to the speaker. Younger but not older speakers used more words and gestures when narrating novel story content as opposed to known content. We discuss cognitive and pragmatic explanations of these findings and relate them to theories of gesture production.
  • Schubotz, L., Oostdijk, N., & Ernestus, M. (2015). Y’know vs. you know: What phonetic reduction can tell us about pragmatic function. In S. Lestrade, P. De Swart, & L. Hogeweg (Eds.), Addenda: Artikelen voor Ad Foolen [Addenda: Articles for Ad Foolen] (pp. 361-380). Nijmegen: Radboud University.
  • Schuerman, W. L., Nagarajan, S., & Houde, J. (2015). Changes in consonant perception driven by adaptation of vowel production to altered auditory feedback. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Adaptation to altered auditory feedback has been shown to induce subsequent shifts in perception. However, it is uncertain whether these perceptual changes may generalize to other speech sounds. In this experiment, we tested whether exposing the production of a vowel to altered auditory feedback affects perceptual categorization of a consonant distinction. In two sessions, participants produced CVC words containing the vowel /i/, while intermittently categorizing stimuli drawn from a continuum between "see" and "she." In the first session feedback was unaltered, while in the second session the formants of the vowel were shifted 20% towards /u/. Adaptation to the altered vowel was found to reduce the proportion of perceived /S/ stimuli. We suggest that this reflects an alteration to the sensorimotor mapping that is shared between vowels and consonants.
  • Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2015). Do we perceive others better than ourselves? A perceptual benefit for noise-vocoded speech produced by an average speaker. PLoS One, 10(7): e0129731. doi:10.1371/journal.pone.0129731.

    Abstract

    In different tasks involving action perception, performance has been found to be facilitated when the presented stimuli were produced by the participants themselves rather than by another participant. These results suggest that the same mental representations are accessed during both production and perception. However, with regard to spoken word perception, evidence also suggests that listeners’ representations for speech reflect the input from their surrounding linguistic community rather than their own idiosyncratic productions. Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners to frame their interpretations of incoming speech signals with regard to speaker identity. In order to determine whether word recognition evinces similar self-advantages as found in action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech representations accessed during perception of noise-vocoded speech are more reflective of the input of the speech community, and hence that speech perception is not necessarily based on representations of one’s own speech.
  • Smith, A. C. (2015). Modelling multimodal language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sumer, B. (2015). Acquisition of spatial language by signing and speaking children: A comparison of Turkish Sign Language (TID) and Turkish. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Torreira, F., & Valtersson, E. (2015). Phonetic and visual cues to questionhood in French conversation. Phonetica, 72, 20-42. doi:10.1159/000381723.

    Abstract

    We investigate the extent to which French polar questions and continuation statements, two types of utterances with similar morphosyntactic and intonational forms but different pragmatic functions, can be distinguished in conversational data based on phonetic and visual bodily information. We show that the two utterance types can be distinguished well over chance level by automatic classification models including several phonetic and visual cues. We also show that a considerable amount of relevant phonetic and visual information is present before the last portion of the utterances, potentially assisting early speech act recognition by addressees. These findings indicate that bottom-up phonetic and visual cues may play an important role during the production and recognition of speech acts alongside top-down contextual information.
  • Tsuji, S., Mazuka, R., Cristia, A., & Fikkert, P. (2015). Even at 4 months, a labial is a good enough coronal, but not vice versa. Cognition, 134, 252-256. doi:10.1016/j.cognition.2014.10.009.

    Abstract

    Numerous studies have revealed an asymmetry tied to the perception of coronal place of articulation: participants accept a labial mispronunciation of a coronal target, but not vice versa. Whether or not this asymmetry is based on language-general properties or arises from language-specific experience has been a matter of debate. The current study suggests a bias of the first type by documenting an early, cross-linguistic asymmetry related to coronal place of articulation. Japanese and Dutch 4- and 6-month-old infants showed evidence of discrimination if they were habituated to a labial and then tested on a coronal sequence, but not vice versa. This finding has important implications for both phonological theories and infant speech perception research.

    Additional information

    Tsuji_etal_suppl_2014.xlsx
  • Unsworth, S., Persson, L., Prins, T., & De Bot, K. (2015). An investigation of factors affecting early foreign language learning in the Netherlands. Applied Linguistics, 36(5), 527-548. doi:10.1093/applin/amt052.
  • Van de Velde, M., Kempen, G., & Harbusch, K. (2015). Dative alternation and planning scope in spoken language: A corpus study on effects of verb bias in VO and OV clauses of Dutch. Lingua, 165, 92-108. doi:10.1016/j.lingua.2015.07.006.

    Abstract

    The syntactic structure of main and subordinate clauses is determined to a considerable extent by verb biases. For example, some English and Dutch ditransitive verbs have a preference for the prepositional object dative, whereas others are typically used with the double object dative. In this study, we compare the effect of these biases on structure selection in (S)VO and (S)OV dative clauses in the Corpus of Spoken Dutch (CGN). This comparison allowed us to make inferences about the size of the advance planning scope during spontaneous speaking: If the verb is an obligatory component of clause-level advance planning scope, as is claimed by the hypothesis of hierarchical incrementality, then biases should exert their influence on structure choices, regardless of early (VO) or late (OV) position of the verb in the clause. Conversely, if planning proceeds in a piecemeal fashion, strictly guided by lexical availability, as claimed by linear incrementality, then the verb and its associated biases can only influence structure choices in VO sentences. We tested these predictions by analyzing structure choices in the CGN, using mixed logit models. Our results support a combination of linear and hierarchical incrementality, showing a significant influence of verb bias on structure choices in VO, and a weaker (but still significant) effect in OV clauses.
  • Van de Velde, M. (2015). Incrementality and flexibility in sentence production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Rhijn, J. R., & Vernes, S. C. (2015). Retinoic acid signaling: A new piece in the spoken language puzzle. Frontiers in Psychology, 6: 1816. doi:10.3389/fpsyg.2015.01816.

    Abstract

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech-motor output. Understanding the neuro-genetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders.
    FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuro-molecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular and behavioral levels that suggest an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output.
    We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language ready brain.
  • Verhees, M. W. F. T., Chwilla, D. J., Tromp, J., & Vissers, C. T. W. M. (2015). Contributions of emotional state and attention to the processing of syntactic agreement errors: evidence from P600. Frontiers in Psychology, 6: 388. doi:10.3389/fpsyg.2015.00388.

    Abstract

    The classic account of language is that language processing occurs in isolation from other cognitive systems, like perception, motor action, and emotion. The central theme of this paper is the relationship between a participant’s emotional state and language comprehension. Does emotional context affect how we process neutral words? Recent studies showed that processing of word meaning – traditionally conceived as an automatic process – is affected by emotional state. The influence of emotional state on syntactic processing is less clear. One study reported a mood-related P600 modulation, while another study did not observe an effect of mood on syntactic processing. The goals of this study were: First, to clarify whether and if so how mood affects syntactic processing. Second, to shed light on the underlying mechanisms by separating possible effects of mood from those of attention on syntactic processing. Event-related potentials (ERPs) were recorded while participants read syntactically correct or incorrect sentences. Mood (happy vs. sad) was manipulated by presenting film clips. Attention was manipulated by directing attention to syntactic features vs. physical features. The mood induction was effective. Interactions between mood, attention and syntactic correctness were obtained, showing that mood and attention modulated P600. The mood manipulation led to a reduction in P600 for sad as compared to happy mood when attention was directed at syntactic features. The attention manipulation led to a reduction in P600 when attention was directed at physical features compared to syntactic features for happy mood. From this we draw two conclusions: First, emotional state does affect syntactic processing. We propose mood-related differences in the reliance on heuristics as the underlying mechanism. Second, attention can contribute to emotion-related ERP effects in syntactic language processing. Therefore, future studies on the relation between language and emotion will have to control for effects of attention.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2015). Syntactic predictability in the recognition of carefully and casually produced speech. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(6), 1684-1702. doi:10.1037/a0039326.
  • Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2015). Automaticity and stability of adaptation to foreign-accented speech. Language and Speech, 58(2), 168-189. doi:10.1177/0023830914528102.

    Abstract

    In three cross-modal priming experiments we asked whether adaptation to a foreign-accented speaker is automatic, and whether adaptation can be seen after a long delay between initial exposure and test. Dutch listeners were exposed to a Hebrew-accented Dutch speaker with two types of Dutch words: those that contained [ɪ] (globally accented words), and those in which the Dutch [i] was shortened to [ɪ] (specific accent marker words). Experiment 1, which served as a baseline, showed that native Dutch participants showed facilitatory priming for globally accented, but not specific accent, words. In experiment 2, participants performed a 3.5-minute phoneme monitoring task, and were tested on their comprehension of the accented speaker 24 hours later using the same cross-modal priming task as in experiment 1. During the phoneme monitoring task, listeners were asked to detect a consonant that was not strongly accented. In experiment 3, the delay between exposure and test was extended to 1 week. Listeners in experiments 2 and 3 showed facilitatory priming for both globally accented and specific accent marker words. Together, these results show that adaptation to a foreign-accented speaker can be rapid and automatic, and can be observed after a prolonged delay in testing.
  • Zhou, W. (2015). Assessing birth language memory in young adoptees. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Alferink, I., & Gullberg, M. (2014). French-Dutch bilinguals do not maintain obligatory semantic distinctions: Evidence from placement verbs. Bilingualism: Language and Cognition, 17, 22-37. doi:10.1017/S136672891300028X.

    Abstract

    It is often said that bilinguals are not the sum of two monolinguals but that bilingual systems represent a third pattern. This study explores the exact nature of this pattern. We ask whether there is evidence of a merged system when one language makes an obligatory distinction that the other one does not, namely in the case of placement verbs in French and Dutch, and whether such a merged system is realised as a more general or a more specific system. The results show that in elicited descriptions Belgian French-Dutch bilinguals drop one of the categories in one of the languages, resulting in a more general semantic system in comparison with the non-contact variety. They do not uphold the obligatory distinction in the verb nor elsewhere despite its communicative relevance. This raises important questions regarding how widespread these differences are and what drives these patterns.
  • Bergmann, C., Ten Bosch, L., & Boves, L. (2014). A computational model of the headturn preference procedure: Design, challenges, and insights. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes (pp. 125-136). World Scientific. doi:10.1142/9789814458849_0010.

    Abstract

    The Headturn Preference Procedure (HPP) is a frequently used method (e.g., Jusczyk & Aslin; and subsequent studies) to investigate linguistic abilities in infants. In this paradigm infants are usually first familiarised with words and then tested for a listening preference for passages containing those words in comparison to unrelated passages. Listening preference is defined as the time an infant spends attending to those passages with his or her head turned towards a flashing light and the speech stimuli. The knowledge and abilities inferred from the results of HPP studies have been used to reason about and formally model early linguistic skills and language acquisition. However, the actual cause of infants' behaviour in HPP experiments has been subject to numerous assumptions as there are no means to directly tap into cognitive processes. To make these assumptions explicit, and more crucially, to understand how infants' behaviour emerges if only general learning mechanisms are assumed, we introduce a computational model of the HPP. Simulations with the computational HPP model show that the difference in infant behaviour between familiarised and unfamiliar words in passages can be explained by a general learning mechanism and that many assumptions underlying the HPP are not necessarily warranted. We discuss the implications for conventional interpretations of the outcomes of HPP experiments.
  • Bergmann, C. (2014). Computational models of early language acquisition and the role of different voices. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Böckler, A., Hömke, P., & Sebanz, N. (2014). Invisible Man: Exclusion from shared attention affects gaze behavior and self-reports. Social Psychological and Personality Science, 5(2), 140-148. doi:10.1177/1948550613488951.

    Abstract

    Social exclusion results in lowered satisfaction of basic needs and shapes behavior in subsequent social situations. We investigated participants’ immediate behavioral response during exclusion from an interaction that consisted of establishing eye contact. A newly developed eye-tracker-based “looking game” was employed; participants exchanged looks with two virtual partners in an exchange where the player who had just been looked at chose whom to look at next. While some participants received as many looks as the virtual players (included), others were ignored after two initial looks (excluded). Excluded participants reported lower basic need satisfaction, lower evaluation of the interaction, and devaluated their interaction partners more than included participants, demonstrating that people are sensitive to epistemic ostracism. In line with Williams’ need-threat model, eye-tracking results revealed that excluded participants did not withdraw from the unfavorable interaction, but increased the number of looks to the player who could potentially reintegrate them.
  • Buckler, H. (2014). The acquisition of morphophonological alternations across languages. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Cai, D., Fonteijn, H. M., Guadalupe, T., Zwiers, M., Wittfeld, K., Teumer, A., Hoogman, M., Arias Vásquez, A., Yang, Y., Buitelaar, J., Fernández, G., Brunner, H. G., Van Bokhoven, H., Franke, B., Hegenscheid, K., Homuth, G., Fisher, S. E., Grabe, H. J., Francks, C., & Hagoort, P. (2014). A genome wide search for quantitative trait loci affecting the cortical surface area and thickness of Heschl's gyrus. Genes, Brain and Behavior, 13, 675-685. doi:10.1111/gbb.12157.

    Abstract

    Heschl's gyrus (HG) is a core region of the auditory cortex whose morphology is highly variable across individuals. This variability has been linked to sound perception ability in both speech and music domains. Previous studies show that variations in morphological features of HG, such as cortical surface area and thickness, are heritable. To identify genetic variants that affect HG morphology, we conducted a genome-wide association scan (GWAS) meta-analysis in 3054 healthy individuals using HG surface area and thickness as quantitative traits. None of the single nucleotide polymorphisms (SNPs) showed association P values that would survive correction for multiple testing over the genome. The most significant association was found between right HG area and SNP rs72932726 close to gene DCBLD2 (3q12.1; P = 2.77 × 10⁻⁷). This SNP was also associated with other regions involved in speech processing. The SNP rs333332 within gene KALRN (3q21.2; P = 2.27 × 10⁻⁶) and rs143000161 near gene COBLL1 (2q24.3; P = 2.40 × 10⁻⁶) were associated with the area and thickness of left HG, respectively. Both genes are involved in the development of the nervous system. The SNP rs7062395 close to the X-linked deafness gene POU3F4 was associated with right HG thickness (Xq21.1; P = 2.38 × 10⁻⁶). This is the first molecular genetic analysis of variability in HG morphology.
  • Choi, J. (2014). Rediscovering a forgotten language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Deriziotis, P., O'Roak, B. J., Graham, S. A., Estruch, S. B., Dimitropoulou, D., Bernier, R. A., Gerdts, J., Shendure, J., Eichler, E. E., & Fisher, S. E. (2014). De novo TBR1 mutations in sporadic autism disrupt protein functions. Nature Communications, 5: 4954. doi:10.1038/ncomms5954.

    Abstract

    Next-generation sequencing recently revealed that recurrent disruptive mutations in a few genes may account for 1% of sporadic autism cases. Coupling these novel genetic data to empirical assays of protein function can illuminate crucial molecular networks. Here we demonstrate the power of the approach, performing the first functional analyses of TBR1 variants identified in sporadic autism. De novo truncating and missense mutations disrupt multiple aspects of TBR1 function, including subcellular localization, interactions with co-regulators and transcriptional repression. Missense mutations inherited from unaffected parents did not disturb function in our assays. We show that TBR1 homodimerizes, that it interacts with FOXP2, a transcription factor implicated in speech/language disorders, and that this interaction is disrupted by pathogenic mutations affecting either protein. These findings support the hypothesis that de novo mutations in sporadic autism have severe functional consequences. Moreover, they uncover neurogenetic mechanisms that bridge different neurodevelopmental disorders involving language deficits.
  • Deriziotis, P., Graham, S. A., Estruch, S. B., & Fisher, S. E. (2014). Investigating protein-protein interactions in live cells using Bioluminescence Resonance Energy Transfer. Journal of Visualized Experiments, 87: e51438. doi:10.3791/51438.

    Abstract

    Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a ‘donor’ luciferase enzyme to an ‘acceptor’ fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.

    Additional information

    video
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2014). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. Studies in Language, 38, 5-43. doi:10.1075/sl.38.1.01din.

    Abstract

    In conversation, people have to deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2014). Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychological Science, 25(6), 1256-1261. doi:10.1177/0956797614528521.

    Abstract

    People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
  • Dolscheid, S., Willems, R. M., Hagoort, P., & Casasanto, D. (2014). The relation of space and musical pitch in the brain. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 421-426). Austin, Tx: Cognitive Science Society.

    Abstract

    Numerous experiments show that space and musical pitch are closely linked in people's minds. However, the exact nature of space-pitch associations and their neuronal underpinnings are not well understood. In an fMRI experiment we investigated different types of spatial representations that may underlie musical pitch. Participants judged stimuli that varied in spatial height in both the visual and tactile modalities, as well as auditory stimuli that varied in pitch height. In order to distinguish between unimodal and multimodal spatial bases of musical pitch, we examined whether pitch activations were present in modality-specific (visual or tactile) versus multimodal (visual and tactile) regions active during spatial height processing. Judgments of musical pitch were found to activate unimodal visual areas, suggesting that space-pitch associations may involve modality-specific spatial representations, supporting a key assumption of embodied theories of metaphorical mental representation.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2014). Phoneme category retuning in a non-native language. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 553-557).

    Abstract

    Previous studies have demonstrated that native listeners modify their interpretation of a speech sound when a talker produces an ambiguous sound in order to quickly tune into a speaker, but there is hardly any evidence that non-native listeners employ a similar mechanism when encountering ambiguous pronunciations. So far, one study demonstrated this lexically-guided perceptual learning effect for non-natives, using phoneme categories similar in the native language of the listeners and the non-native language of the stimulus materials. The present study investigates the question whether phoneme category retuning is possible in a non-native language for a contrast, /l/-/r/, which is phonetically differently embedded in the native (Dutch) and non-native (English) languages involved. Listening experiments indeed showed a lexically-guided perceptual learning effect. Assuming that Dutch listeners have different phoneme categories for the native Dutch and non-native English /r/, as marked differences between the languages exist for /r/, these results, for the first time, seem to suggest that listeners are not only able to retune their native phoneme categories but also their non-native phoneme categories to include ambiguous pronunciations.
  • Gialluisi, A., Newbury, D. F., Wilcutt, E. G., Olson, R. K., DeFries, J. C., Brandler, W. M., Pennington, B. F., Smith, S. D., Scerri, T. S., Simpson, N. H., The SLI Consortium, Luciano, M., Evans, D. M., Bates, T. C., Stein, J. F., Talcott, J. B., Monaco, A. P., Paracchini, S., Francks, C., & Fisher, S. E. (2014). Genome-wide screening for DNA variants associated with reading and language traits. Genes, Brain and Behavior, 13, 686-701. doi:10.1111/gbb.12158.

    Abstract

    Reading and language abilities are heritable traits that are likely to share some genetic influences with each other. To identify pleiotropic genetic variants affecting these traits, we first performed a Genome-wide Association Scan (GWAS) meta-analysis using three richly characterised datasets comprising individuals with histories of reading or language problems, and their siblings. GWAS was performed in a total of 1862 participants using the first principal component computed from several quantitative measures of reading- and language-related abilities, both before and after adjustment for performance IQ. We identified novel suggestive associations at the SNPs rs59197085 and rs5995177 (uncorrected p ≈ 10⁻⁷ for each SNP), located respectively at the CCDC136/FLNC and RBFOX2 genes. Each of these SNPs then showed evidence for effects across multiple reading and language traits in univariate association testing against the individual traits. FLNC encodes a structural protein involved in cytoskeleton remodelling, while RBFOX2 is an important regulator of alternative splicing in neurons. The CCDC136/FLNC locus showed association with a comparable reading/language measure in an independent sample of 6434 participants from the general population, although involving distinct alleles of the associated SNP. Our datasets will form an important part of on-going international efforts to identify genes contributing to reading and language skills.
  • Gialluisi, A., Pippucci, T., & Romeo, G. (2014). Reply to ten Kate et al. European Journal of Human Genetics, 22, 157-158. doi:10.1038/ejhg.2013.153.
  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10⁻⁸). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
  • Hammond, J. (2014). Switch-reference antecedence and subordination in Whitesands (Oceanic). In R. van Gijn, J. Hammond, D. Matić, S. van Putten, & A. V. Galucio (Eds.), Information structure and reference tracking in complex sentences (pp. 263-290). Amsterdam: Benjamins.

    Abstract

    Whitesands is an Oceanic language of the southern Vanuatu subgroup. Like the related languages of southern Vanuatu, Whitesands has developed a clause-linkage system which monitors referent continuity on new clauses – typically contrasting with the previous clause. In this chapter I address how the construction interacts with topic continuity in discourse. I outline the morphosyntactic form of this anaphoric co-reference device. From a functionalist perspective, I show how the system is used in natural discourse and discuss its restrictions with respect to relative and complement clauses. I conclude with a discussion on its interactions with theoretical notions of information structure – in particular the nature of presupposed versus asserted clauses, information back- and foregrounding and how these affect the use of the switch-reference system.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2351-2356). Austin, Tx: Cognitive Science Society.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioural research due to its seemingly limitless possibilities. This new method has not been used frequently in the field of psycholinguistics, however, possibly due to the assumption that human-computer interaction does not accurately reflect human-human interaction. In the current study we compare participants’ language behaviour in a syntactic priming task with human versus avatar partners. Our study shows comparable priming effects between human and avatar partners (Human: 12.3%; Avatar: 12.6% for passive sentences) suggesting that VR is a valid platform for conducting language research and studying dialogue interactions.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Kunert, R., & Scheepers, C. (2014). Speed and accuracy of dyslexic versus typical word recognition: An eye-movement investigation. Frontiers in Psychology, 5: 1129. doi:10.3389/fpsyg.2014.01129.

    Abstract

    Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition.

    Additional information

    Kunert_Data Sheet 1.DOCX
  • Kupisch, T., Lein, T., Barton, D., Schröder, D. J., Stangen, I., & Stoehr, A. (2014). Acquisition outcomes across domains in adult simultaneous bilinguals with French as weaker and stronger language. Journal of French Language Studies, 24(3), 347-376. doi:10.1017/S0959269513000197.

    Abstract

    This study investigates the adult grammars of French simultaneous bilingual speakers (2L1s) whose other language is German. Apart from providing an example of French as heritage language in Europe, the goals of this paper are (i) to compare the acquisition of French in a minority and majority language context, (ii) to identify the relative vulnerability of individual domains, and (iii) to investigate whether 2L1s are vulnerable to language attrition when moving to their heritage country during adulthood. We include two groups of German-French 2L1s: One group grew up predominantly in France, but moved to Germany during adulthood; the other group grew up predominantly in Germany and stayed there. Performance is compared in different domains, including adjective placement, gender marking, articles, prepositions, foreign accent and voice onset time. Results indicate that differences between the two groups are minimal in morpho-syntax, but more prominent in pronunciation.
  • Lahey, M., & Ernestus, M. (2014). Pronunciation variation in infant-directed speech: Phonetic reduction of two highly frequent words. Language Learning and Development, 10, 308-327. doi:10.1080/15475441.2013.860813.

    Abstract

    In spontaneous conversations between adults, words are often pronounced with fewer segments or syllables than their citation forms. The question arises whether infant-directed speech also contains phonetic reduction. If so, infants would be presented with speech input that enables them to acquire reduced variants from an early age. This study compared speech directed at 11- and 12-month-old infants with adult-directed conversational speech and adult-directed read speech. In an acoustic study, 216 tokens of the Dutch words allemaal and helemaal from speech corpora were analyzed for duration, number of syllables, and vowel quality. In a perception study, adult participants rated these same materials for reduction and provided phonetic transcriptions. The results show that these two words are frequently reduced in infant-directed speech, and that their degree of reduction is comparable with conversational adult-directed speech. These findings suggest that lexical representations for reduced pronunciation variants can be acquired early in linguistic development.

  • Lai, V. T., Garrido Rodriguez, G., & Narasimhan, B. (2014). Thinking-for-speaking in early and late bilinguals. Bilingualism: Language and Cognition, 17, 139-152. doi:10.1017/S1366728913000151.

    Abstract

    When speakers describe motion events using different languages, they subsequently classify those events in language-specific ways (Gennari, Sloman, Malt & Fitch, 2002). Here we ask if bilingual speakers flexibly shift their event classification preferences based on the language in which they verbally encode those events. English–Spanish bilinguals and monolingual controls described motion events in either Spanish or English. Subsequently they judged the similarity of the motion events in a triad task. Bilinguals tested in Spanish and Spanish monolinguals were more likely to make similarity judgments based on the path of motion versus bilinguals tested in English and English monolinguals. The effect is modulated in bilinguals by the age of acquisition of the second language. Late bilinguals based their judgments on path more often when Spanish was used to describe the motion events versus English. Early bilinguals had a path preference independent of the language in use. These findings support “thinking-for-speaking” (Slobin, 1996) in late bilinguals.
  • Lartseva, A., Dijkstra, T., Kan, C. C., & Buitelaar, J. K. (2014). Processing of emotion words by patients with Autism Spectrum Disorders: Evidence from reaction times and EEG. Journal of Autism and Developmental Disorders, 44, 2882-2894. doi:10.1007/s10803-014-2149-z.

    Abstract

    This study investigated processing of emotion words in autism spectrum disorders (ASD) using reaction times and event-related potentials (ERP). Adults with (n = 21) and without (n = 20) ASD performed a lexical decision task on emotion and neutral words while their brain activity was recorded. Both groups showed faster responses to emotion words compared to neutral, suggesting intact early processing of emotion in ASD. In the ERPs, the control group showed a typical late positive component (LPC) at 400-600 ms for emotion words compared to neutral, while the ASD group showed no LPC. The between-group difference in LPC amplitude was significant, suggesting that emotion words were processed differently by individuals with ASD, although their behavioral performance was similar to that of typical individuals.
  • Lewis, A., Freeman-Mills, L., de la Calle-Mustienes, E., Giráldez-Pérez, R. M., Davis, H., Jaeger, E., Becker, M., Hubner, N. C., Nguyen, L. N., Zeron-Medina, J., Bond, G., Stunnenberg, H. G., Carvajal, J. J., Gomez-Skarmeta, J. L., Leedham, S., & Tomlinson, I. (2014). A polymorphic enhancer near GREM1 influences bowel cancer risk through differential CDX2 and TCF7L2 binding. Cell Reports, 8(4), 983-990. doi:10.1016/j.celrep.2014.07.020.

    Abstract

    A rare germline duplication upstream of the bone morphogenetic protein antagonist GREM1 causes a Mendelian-dominant predisposition to colorectal cancer (CRC). The underlying disease mechanism is strong, ectopic GREM1 overexpression in the intestinal epithelium. Here, we confirm that a common GREM1 polymorphism, rs16969681, is also associated with CRC susceptibility, conferring ∼20% differential risk in the general population. We hypothesized the underlying cause to be moderate differences in GREM1 expression. We showed that rs16969681 lies in a region of active chromatin with allele- and tissue-specific enhancer activity. The CRC high-risk allele was associated with stronger gene expression, and higher Grem1 mRNA levels increased the intestinal tumor burden in ApcMin mice. The intestine-specific transcription factor CDX2 and Wnt effector TCF7L2 bound near rs16969681, with significantly higher affinity for the risk allele, and CDX2 overexpression in CDX2/GREM1-negative cells caused re-expression of GREM1. rs16969681 influences CRC risk through effects on Wnt-driven GREM1 expression in colorectal tumors.
  • Magi, A., Tattini, L., Palombo, F., Benelli, M., Gialluisi, A., Giusti, B., Abbate, R., Seri, M., Gensini, G. F., Romeo, G., & Pippucci, T. (2014). H3M2: Detection of runs of homozygosity from whole-exome sequencing data. Bioinformatics, 2852-2859. doi:10.1093/bioinformatics/btu401.

    Abstract

    Motivation: Runs of homozygosity (ROH) are sizable chromosomal stretches of homozygous genotypes, ranging in length from tens of kilobases to megabases. ROHs can be relevant for population and medical genetics, playing a role in predisposition to both rare and common disorders. ROHs are commonly detected by single nucleotide polymorphism (SNP) microarrays, but attempts have been made to use whole-exome sequencing (WES) data. Currently available methods developed for the analysis of uniformly spaced SNP-array maps do not fit easily to the analysis of the sparse and non-uniform distribution of the WES target design. Results: To meet the need of an approach specifically tailored to WES data, we developed H3M2, an original algorithm based on a heterogeneous hidden Markov model that incorporates inter-marker distances to detect ROH from WES data. We evaluated the performance of H3M2 to correctly identify ROHs on synthetic chromosomes and examined its accuracy in detecting ROHs of different length (short, medium and long) from real 1000 Genomes Project data. H3M2 turned out to be more accurate than GERMLINE and PLINK, two state-of-the-art algorithms, especially in the detection of short and medium ROHs.
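
    Since the abstract describes the algorithm only at a high level, the following toy Python sketch illustrates the general idea of distance-aware, HMM-based ROH detection (a two-state Viterbi decode whose transition probabilities depend on inter-marker distance). It is not the authors' H3M2 implementation; the function name, states and parameter values are hypothetical.

      import math

      def viterbi_roh(calls, positions,
                      p_hom_in_roh=0.99,     # P(homozygous call | inside an ROH)
                      p_hom_outside=0.60,    # P(homozygous call | outside an ROH)
                      expected_roh_bp=1_500_000):
          """Toy two-state HMM (ROH / non-ROH) over biallelic genotype calls.

          calls: 0/1 flags per marker (1 = homozygous, 0 = heterozygous).
          positions: increasing marker coordinates in base pairs.
          Returns a list of (start_bp, end_bp) intervals decoded as ROH.
          Transition probabilities depend on the distance between consecutive
          markers, which is the point of a distance-aware model for sparse,
          exome-like marker maps.
          """
          states = ("ROH", "NON")
          emit = {"ROH": (1 - p_hom_in_roh, p_hom_in_roh),
                  "NON": (1 - p_hom_outside, p_hom_outside)}
          log = math.log

          # Initialise with a mildly informative prior over the two states.
          v = {"ROH": log(0.2) + log(emit["ROH"][calls[0]]),
               "NON": log(0.8) + log(emit["NON"][calls[0]])}
          back = []

          for i in range(1, len(calls)):
              dist = positions[i] - positions[i - 1]
              # The chance of switching state grows with inter-marker distance.
              p_switch = 0.5 * (1 - math.exp(-dist / expected_roh_bp))
              ptr, new_v = {}, {}
              for s in states:
                  best_prev, best_score = None, -math.inf
                  for prev in states:
                      trans = (1 - p_switch) if prev == s else p_switch
                      score = v[prev] + log(trans)
                      if score > best_score:
                          best_prev, best_score = prev, score
                  ptr[s] = best_prev
                  new_v[s] = best_score + log(emit[s][calls[i]])
              v = new_v
              back.append(ptr)

          # Trace back the most probable state path.
          state = max(v, key=v.get)
          path = [state]
          for ptr in reversed(back):
              state = ptr[state]
              path.append(state)
          path.reverse()

          # Collapse consecutive ROH states into genomic intervals.
          intervals, start, prev_pos = [], None, positions[0]
          for pos, s in zip(positions, path):
              if s == "ROH" and start is None:
                  start = pos
              elif s != "ROH" and start is not None:
                  intervals.append((start, prev_pos))
                  start = None
              prev_pos = pos
          if start is not None:
              intervals.append((start, positions[-1]))
          return intervals

    Calling viterbi_roh([1, 1, 1, 0, 1, 1], [1000, 6000, 20000, 900000, 905000, 912000]) returns the homozygous stretches the toy model labels as ROH; a real tool additionally models genotyping error and allele-frequency information, which this sketch deliberately omits.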
  • Mazuka, R., Hasegawa, M., & Tsuji, S. (2014). Development of non-native vowel discrimination: Improvement without exposure. Developmental Psychobiology, 56(2), 192-209. doi:10.1002/dev.21193.

    Abstract

    The present study tested Japanese 4.5- and 10-month-old infants' ability to discriminate three German vowel pairs, none of which are contrastive in Japanese, using a visual habituation–dishabituation paradigm. Japanese adults' discrimination of the same pairs was also tested. The results revealed that Japanese 4.5-month-old infants discriminated the German /bu:k/-/by:k/ contrast, but they showed no evidence of discriminating the /bi:k/-/be:k/ or /bu:k/-/bo:k/ contrasts. Japanese 10-month-old infants, on the other hand, discriminated the German /bi:k/-/be:k/ contrast, while they showed no evidence of discriminating the /bu:k/-/by:k/ or /bu:k/-/bo:k/ contrasts. Japanese adults, in contrast, were highly accurate in their discrimination of all of the pairs. The results indicate that discrimination of non-native contrasts is not always easy even for young infants, and that their ability to discriminate non-native contrasts can improve with age even when they receive no exposure to a language in which the given contrast is phonemic.
  • Mulder, K., Dijkstra, T., Schreuder, R., & Baayen, R. H. (2014). Effects of primary and secondary morphological family size in monolingual and bilingual word processing. Journal of Memory and Language, 72, 59-84. doi:10.1016/j.jml.2013.12.004.

    Abstract

    This study investigated primary and secondary morphological family size effects in monolingual and bilingual processing, combining experimentation with computational modeling. Family size effects were investigated in an English lexical decision task for Dutch-English bilinguals and English monolinguals using the same materials. To account for the possibility that family size effects may only show up in words that resemble words in the native language of the bilinguals, the materials included, in addition to purely English items, Dutch-English cognates (identical and non-identical in form). As expected, the monolingual data revealed facilitatory effects of English primary family size. Moreover, while the monolingual data did not show a main effect of cognate status, only form-identical cognates revealed an inhibitory effect of English secondary family size. The bilingual data showed stronger facilitation for identical cognates, but as for monolinguals, this effect was attenuated for words with a large secondary family size. In all, the Dutch-English primary and secondary family size effects in bilinguals were strikingly similar to those of monolinguals. Computational simulations suggest that the primary and secondary family size effects can be understood in terms of discriminative learning of the English lexicon.

  • Muysken, P., Hammarström, H., Birchall, J., Danielsen, S., Eriksen, L., Galucio, A. V., Van Gijn, R., Van de Kerke, S., Kolipakam, V., Krasnoukhova, O., Müller, N., & O'Connor, L. (2014). The languages of South America: Deep families, areal relationships, and language contact. In P. Muysken, & L. O'Connor (Eds.), Language contact in South America (pp. 299-323). Cambridge: Cambridge University Press.
  • Neger, T. M., Rietveld, T., & Janse, E. (2014). Relationship between perceptual learning in speech and statistical learning in younger and older adults. Frontiers in Human Neuroscience, 8: 628. doi:10.3389/fnhum.2014.00628.

    Abstract

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
  • O'Connor, L., & Kolipakam, V. (2014). Human migrations, dispersals, and contacts in South America. In L. O'Connor, & P. Muysken (Eds.), The native languages of South America: Origins, development, typology (pp. 29-55). Cambridge: Cambridge University Press.
  • Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, Tx: Cognitive Science Society.

    Abstract

    Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children’s lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate learning sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects’ perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input.
  • Peeters, D., Runnqvist, E., Bertrand, D., & Grainger, J. (2014). Asymmetrical switch costs in bilingual language production induced by reading words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 284-292. doi:10.1037/a0034060.

    Abstract

    We examined language-switching effects in French–English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language decision on the word or categorize it as an animal name or not. Picture-naming latencies in French (Language 1 [L1]) were slower when pictures were preceded by an English word than by a French word, independently of the task performed on the word. There were no language-switching effects when pictures were named in English (L2). This pattern replicates asymmetrical switch costs found with the cued picture-naming paradigm and shows that the asymmetrical pattern can be obtained (a) in the absence of artificial (nonlinguistic) language cues, (b) when the switch involves a shift from comprehension in 1 language to production in another, and (c) when the naming language is blocked (univalent response). We concluded that language switch costs in bilinguals cannot be reduced to effects driven by task control or response-selection mechanisms.
  • Peeters, D., & Dresler, M. (2014). The scientific significance of sleep-talking. Frontiers for Young Minds, 2(9). Retrieved from http://kids.frontiersin.org/articles/24/the_scientific_significance_of_sleep_talking/.

    Abstract

    Did one of your parents, siblings, or friends ever tell you that you were talking in your sleep? Nothing to be ashamed of! A recent study found that more than half of all people have had the experience of speaking out loud while being asleep [1]. This might even be underestimated, because often people do not notice that they are sleep-talking, unless somebody wakes them up or tells them the next day. Most neuroscientists, linguists, and psychologists studying language are interested in our language production and language comprehension skills during the day. In the present article, we will explore what is known about the production of overt speech during the night. We suggest that the study of sleep-talking may be just as interesting and informative as the study of wakeful speech.
  • Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, Tx: Cognitive Science Society.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2014). Distinct patterns of brain activity characterise lexical activation and competition in spoken word production. PLoS One, 9(2): e88674. doi:10.1371/journal.pone.0088674.

    Abstract

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350–650 ms (4–10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
  • Piai, V. (2014). Choosing our words: Lexical competition and the involvement of attention in spoken word production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Piai, V., Roelofs, A., & Schriefers, H. (2014). Locus of semantic interference in picture naming: Evidence from dual-task performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 147-165. doi:10.1037/a0033745.

    Abstract

    Disagreement exists regarding the functional locus of semantic interference of distractor words in picture naming. This effect is a cornerstone of modern psycholinguistic models of word production, which assume that it arises in lexical response-selection. However, recent evidence from studies of dual-task performance suggests a locus in perceptual or conceptual processing, prior to lexical response-selection. In these studies, participants manually responded to a tone and named a picture while ignoring a written distractor word. The stimulus onset asynchrony (SOA) between tone and picture–word stimulus was manipulated. Semantic interference in naming latencies was present at long tone pre-exposure SOAs, but reduced or absent at short SOAs. Under the prevailing structural or strategic response-selection bottleneck and central capacity sharing models of dual-task performance, the underadditivity of the effects of SOA and stimulus type suggests that semantic interference emerges before lexical response-selection. However, in more recent studies, additive effects of SOA and stimulus type were obtained. Here, we examined the discrepancy in results between these studies in 6 experiments in which we systematically manipulated various dimensions on which these earlier studies differed, including tasks, materials, stimulus types, and SOAs. In all our experiments, additive effects of SOA and stimulus type on naming latencies were obtained. These results strongly suggest that the semantic interference effect arises after perceptual and conceptual processing, during lexical response-selection or later. We discuss several theoretical alternatives with respect to their potential to account for the discrepancy between the present results and other studies showing underadditivity.
  • Piai, V., Roelofs, A., & Maris, E. (2014). Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint. Neuropsychologia, 53, 146-156. doi:10.1016/j.neuropsychologia.2013.11.014.

    Abstract

    Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production.
  • Pippucci, T., Magi, A., Gialluisi, A., & Romeo, G. (2014). Detection of runs of homozygosity from whole exome sequencing data: State of the art and perspectives for clinical, population and epidemiological studies. Human Heredity, 77, 63-72. doi:10.1159/000362412.

    Abstract

    Runs of homozygosity (ROH) are sizeable stretches of homozygous genotypes at consecutive polymorphic DNA marker positions, traditionally captured by means of genome-wide single nucleotide polymorphism (SNP) genotyping. With the advent of next-generation sequencing (NGS) technologies, a number of methods initially devised for the analysis of SNP array data (those based on sliding-window algorithms such as PLINK or GERMLINE and graphical tools like HomozygosityMapper) or specifically conceived for NGS data have been adopted for the detection of ROH from whole exome sequencing (WES) data. In the latter group, algorithms for both graphical representation (AgileVariantMapper, HomSI) and computational detection (H3M2) of WES-derived ROH have been proposed. Here we examine these different approaches and discuss available strategies to implement ROH detection in WES analysis. Among sliding-window algorithms, PLINK appears to be well-suited for the detection of ROH, especially of the long ones. As a method specifically tailored for WES data, H3M2 outperforms existing algorithms especially on short and medium ROH. We conclude that, notwithstanding the irregular distribution of exons, WES data can be used with some approximation for unbiased genome-wide analysis of ROH features, with promising applications to homozygosity mapping of disease genes, comparative analysis of populations and epidemiological studies based on consanguinity.
  • Poellmann, K., Bosker, H. R., McQueen, J. M., & Mitterer, H. (2014). Perceptual adaptation to segmental and syllabic reductions in continuous spoken Dutch. Journal of Phonetics, 46, 101-127. doi:10.1016/j.wocn.2014.06.004.

    Abstract

    This study investigates if and how listeners adapt to reductions in casual continuous speech. In a perceptual-learning variant of the visual-world paradigm, two groups of Dutch participants were exposed to either segmental (/b/ → [ʋ]) or syllabic (ver- → [fː]) reductions in spoken Dutch sentences. In the test phase, both groups heard both kinds of reductions, but now applied to different words. In one of two experiments, the segmental reduction exposure group was better than the syllabic reduction exposure group in recognizing new reduced /b/-words. In both experiments, the syllabic reduction group showed a greater target preference for new reduced ver-words. Learning about reductions was thus applied to previously unheard words. This lexical generalization suggests that mechanisms compensating for segmental and syllabic reductions take place at a prelexical level, and hence that lexical access involves an abstractionist mode of processing. Existing abstractionist models need to be revised, however, as they do not include representations of sequences of segments (corresponding e.g. to ver-) at the prelexical level.
  • Poellmann, K., Mitterer, H., & McQueen, J. M. (2014). Use what you can: Storage, abstraction processes and perceptual adjustments help listeners recognize reduced forms. Frontiers in Psychology, 5: 437. doi:10.3389/fpsyg.2014.00437.

    Abstract

    Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., "minderij" instead of "binderij", 'book binder') and a syllabic reduction group was exposed to full-vowel deletions (e.g., "p'raat" instead of "paraat", 'ready'), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 & 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations.
  • Presciuttini, S., Gialluisi, A., Barbuti, S., Curcio, M., Scatena, F., Carli, G., & Santarcangelo, E. L. (2014). Hypnotizability and Catechol-O-Methyltransferase (COMT) polymorphisms in Italians. Frontiers in Human Neuroscience, 7: 929. doi:10.3389/fnhum.2013.00929.

    Abstract

    Higher brain dopamine content depending on lower activity of Catechol-O-Methyltransferase (COMT) in subjects with high hypnotizability scores (highs) has been considered responsible for their attentional characteristics. However, the results of the previous genetic studies on association between hypnotizability and the COMT single nucleotide polymorphism (SNP) rs4680 (Val158Met) were inconsistent. Here, we used a selective genotyping approach to re-evaluate the association between hypnotizability and COMT in the context of a two-SNP haplotype analysis, considering not only the Val158Met polymorphism, but also the closely located rs4818 SNP. An Italian sample of 53 highs, 49 low hypnotizable subjects (lows), and 57 controls, were genotyped for a segment of 805 bp of the COMT gene, including Val158Met and the closely located rs4818 SNP. Our selective genotyping approach had 97.1% power to detect the previously reported strongest association at the significance level of 5%. We found no evidence of association at the SNP, haplotype, and diplotype levels. Thus, our results challenge the dopamine-based theory of hypnosis and indirectly support recent neuropsychological and neurophysiological findings reporting the lack of any association between hypnotizability and focused attention abilities.
  • Reifegerste, J. (2014). Morphological processing in younger and older people: Evidence for flexible dual-route access. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Rodenas-Cuadrado, P., Ho, J., & Vernes, S. C. (2014). Shining a light on CNTNAP2: Complex functions to complex disorders. European Journal of Human Genetics, 22(2), 171-178. doi:10.1038/ejhg.2013.100.

    Abstract

    The genetic basis of complex neurological disorders involving language is poorly understood, partly due to the multiple additive genetic risk factors that are thought to be responsible. Furthermore, these conditions are often syndromic in that they have a range of endophenotypes that may be associated with the disorder and that may be present in different combinations in patients. However, the emergence of individual genes implicated across multiple disorders has suggested that they might share similar underlying genetic mechanisms. The CNTNAP2 gene is an excellent example of this, as it has recently been implicated in a broad range of phenotypes including autism spectrum disorder (ASD), schizophrenia, intellectual disability, dyslexia and language impairment. This review considers the evidence implicating CNTNAP2 in these conditions, the genetic risk factors and mutations that have been identified in patient and population studies and how these relate to patient phenotypes. The role of CNTNAP2 is examined in the context of larger neurogenetic networks during development and disorder, given what is known regarding the regulation and function of this gene. Understanding the role of CNTNAP2 in diverse neurological disorders will further our understanding of how combinations of individual genetic risk factors can contribute to complex conditions.
  • Rojas-Berscia, L. M. (2014). Towards an ontological theory of language: Radical minimalism, memetic linguistics and linguistic engineering, prolegomena. Ianua: Revista Philologica Romanica, 14(2), 69-81.

    Abstract

    In contrast to what has happened in other sciences, the establishment of what is the study object of linguistics as an autonomous discipline has not been resolved yet. Ranging from external explanations of language as a system (Saussure 1916), the existence of a mental innate language capacity or UG (Chomsky 1965, 1981, 1995), the cognitive complexity of the mental language capacity and the acquisition of languages in use (Langacker 1987, 1991, 2008; Croft & Cruse 2004; Evans & Levinson 2009), most, if not all, theoretical approaches have provided explanations that somehow isolated our discipline from developments in other major sciences, such as physics and evolutionary biology. In the present article I will present some of the basic issues regarding the current debate in the discipline, in order to identify some problems regarding the modern assumptions on language. Furthermore, a new proposal on how to approach linguistic phenomena will be given, regarding what I call «the main three» basic problems our discipline has to face ulteriorly. Finally, some preliminary ideas on a new paradigm of Linguistics which tries to answer these three basic problems will be presented, mainly based on the recently-born formal theory called Radical Minimalism (Krivochen 2011a, 2011b) and what I dub Memetic Linguistics and Linguistic Engineering.
  • Rossi, G. (2014). When do people not use language to make requests? In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 301-332). Amsterdam: John Benjamins.

    Abstract

    In everyday joint activities (e.g. playing cards, preparing potatoes, collecting empty plates), participants often request others to pass, move or otherwise deploy objects. In order to get these objects to or from the requestee, requesters need to manipulate them, for example by holding them out, reaching for them, or placing them somewhere. As they perform these manual actions, requesters may or may not accompany them with language (e.g. Take this potato and cut it or Pass me your plate). This study shows that adding or omitting language in the design of a request is influenced in the first place by a criterion of recognition. When the requested action is projectable from the advancement of an activity, presenting a relevant object to the requestee is enough for them to understand what to do; when, on the other hand, the requested action is occasioned by a contingent development of the activity, requesters use language to specify what the requestee should do. This criterion operates alongside a perceptual criterion, to do with the affordances of the visual and auditory modality. When the requested action is projectable but the requestee is not visually attending to the requester’s manual behaviour, the requester can use just enough language to attract the requestee’s attention and secure immediate recipiency. This study contributes to a line of research concerned with the organisation of verbal and nonverbal resources for requesting. Focussing on situations in which language is not – or only minimally – used, it demonstrates the role played by visible bodily behaviour and by the structure of everyday activities in the formation and understanding of requests.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2014). Age, hearing loss and the perception of affective utterances in conversational speech. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 1929-1933).

    Abstract

    This study investigates whether age and/or hearing loss influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech fragments. Specifically, this study focuses on the relationship between participants' ratings of affective speech and acoustic parameters known to be associated with arousal and valence (mean F0, intensity, and articulation rate). Ten normal-hearing younger and ten older adults with varying hearing loss were tested on two rating tasks. Stimuli consisted of short sentences taken from a corpus of conversational affective speech. In both rating tasks, participants estimated the value of the emotion dimension at hand using a 5-point scale. For arousal, higher intensity was generally associated with higher arousal in both age groups. Compared to younger participants, older participants rated the utterances as less aroused, and showed a smaller effect of intensity on their arousal ratings. For valence, higher mean F0 was associated with more negative ratings in both age groups. Generally, age group differences in rating affective utterances may not relate to age group differences in hearing loss, but rather to other differences between the age groups, as older participants' rating patterns were not associated with their individual hearing loss.
  • Schoot, L., Menenti, L., Hagoort, P., & Segaert, K. (2014). A little more conversation - The influence of communicative context on syntactic priming in brain and behavior. Frontiers in Psychology, 5: 208. doi:10.3389/fpsyg.2014.00208.

    Abstract

    We report on an fMRI syntactic priming experiment in which we measure brain activity for participants who communicate with another participant outside the scanner. We investigated whether syntactic processing during overt language production and comprehension is influenced by having a (shared) goal to communicate. Although theory suggests this is true, the nature of this influence remains unclear. Two hypotheses are tested: i. syntactic priming effects (fMRI and RT) are stronger for participants in the communicative context than for participants doing the same experiment in a non-communicative context, and ii. syntactic priming magnitude (RT) is correlated with the syntactic priming magnitude of the speaker’s communicative partner. Results showed that across conditions, participants were faster to produce sentences with repeated syntax, relative to novel syntax. This behavioral result converged with the fMRI data: we found repetition suppression effects in the left insula extending into left inferior frontal gyrus (BA 47/45), left middle temporal gyrus (BA 21), left inferior parietal cortex (BA 40), left precentral gyrus (BA 6), bilateral precuneus (BA 7), bilateral supplementary motor cortex (BA 32/8) and right insula (BA 47). We did not find support for the first hypothesis: having a communicative intention does not increase the magnitude of syntactic priming effects (either in the brain or in behavior) per se. We did find support for the second hypothesis: if speaker A is strongly/weakly primed by speaker B, then speaker B is primed by speaker A to a similar extent. We conclude that syntactic processing is influenced by being in a communicative context, and that the nature of this influence is bi-directional: speakers are influenced by each other.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data has established that literacy leads to changes to the way in which phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. Within this study we use a similar model yet manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support language processing. Language mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language mediated visual attention, that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13) (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al, 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, Tx: Cognitive Science Society.

    Abstract

    Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
  • Thompson, P. M., Stein, J. L., Medland, S. E., Hibar, D. P., Vasquez, A. A., Renteria, M. E., Toro, R., Jahanshad, N., Schumann, G., Franke, B., Wright, M. J., Martin, N. G., Agartz, I., Alda, M., Alhusaini, S., Almasy, L., Almeida, J., Alpert, K., Andreasen, N. C., Andreassen, O. A., and 269 more (2014). The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging and Behavior, 8(2), 153-182. doi:10.1007/s11682-013-9269-5.

    Abstract

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA’s first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
  • Thorgrimsson, G. (2014). Infants' understanding of communication as participants and observers. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Thorgrimsson, G., Fawcett, C., & Liszkowski, U. (2014). Infants’ expectations about gestures and actions in third-party interactions. Frontiers in Psychology, 5: 321. doi:10.3389/fpsyg.2014.00321.

    Abstract

    We investigated 14-month-old infants’ expectations toward a third party addressee of communicative gestures and an instrumental action. Infants’ eye movements were tracked as they observed a person (the Gesturer) point, direct a palm-up request gesture, or reach toward an object, and another person (the Addressee) respond by grasping it. Infants’ looking patterns indicate that when the Gesturer pointed or used the palm-up request, infants anticipated that the Addressee would give the object to the Gesturer, suggesting that they ascribed a motive of request to the gestures. In contrast, when the Gesturer reached for the object, and in a control condition where no action took place, the infants did not anticipate the Addressee’s response. The results demonstrate that infants’ recognition of communicative gestures extends to others’ interactions, and that infants can anticipate how third-party addressees will respond to others’ gestures.
  • Tsuji, S., & Cristia, A. (2014). Perceptual attunement in vowels: A meta-analysis. Developmental Psychobiology, 56(2), 179-191. doi:10.1002/dev.21179.

    Abstract

    Although the majority of evidence on perceptual narrowing in speech sounds is based on consonants, most models of infant speech perception generalize these findings to vowels, assuming that vowel perception improves for vowel sounds that are present in the infant's native language within the first year of life, and deteriorates for non-native vowel sounds over the same period of time. The present meta-analysis contributes to assessing to what extent these descriptions are accurate in the first comprehensive quantitative meta-analysis of perceptual narrowing in infant vowel discrimination, including results from behavioral, electrophysiological, and neuroimaging methods applied to infants 0–14 months of age. An analysis of effect sizes for native and non-native vowel discrimination over the first year of life revealed that they changed with age in opposite directions, being significant by about 6 months of age.
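
    For readers unfamiliar with how such effect-size analyses are aggregated, the short Python sketch below shows standard fixed-effect inverse-variance pooling of per-study effect sizes; the numbers and function name are invented for illustration and are not values reported in the meta-analysis.

      import math

      def pool_effect_sizes(effects, variances):
          """Fixed-effect inverse-variance pooling of per-study effect sizes.

          effects: standardized effect sizes (e.g., Cohen's d) from individual studies.
          variances: their sampling variances.
          Returns (pooled_effect, standard_error).
          """
          weights = [1.0 / v for v in variances]
          pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
          return pooled, math.sqrt(1.0 / sum(weights))

      # Invented numbers for illustration only: a set of native-contrast effects
      # and a set of non-native-contrast effects pooled separately.
      print(pool_effect_sizes([0.30, 0.55, 0.80], [0.04, 0.05, 0.03]))
      print(pool_effect_sizes([0.60, 0.35, 0.10], [0.05, 0.04, 0.06]))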
  • Tsuji, S., Nishikawa, K., & Mazuka, R. (2014). Segmental distributions and consonant-vowel association patterns in Japanese infant- and adult-directed speech. Journal of Child Language, 41, 1276-1304. doi:10.1017/S0305000913000469.

    Abstract

    Japanese infant-directed speech (IDS) and adult-directed speech (ADS) were compared on their segmental distributions and consonant-vowel association patterns. Consistent with findings in other languages, a higher ratio of segments that are generally produced early was found in IDS compared to ADS: more labial consonants and low-central vowels, but fewer fricatives. Consonant-vowel associations also favored the early-produced labial-central, coronal-front, coronal-central, and dorsal-back patterns. On the other hand, clear language-specific patterns included a higher frequency of dorsals, affricates, geminates and moraic nasals in IDS. These segments are frequent in adult Japanese, but not in the early productions or the IDS of other studied languages. In combination with previous results, the current study suggests that both fine-tuning (an increased use of early-produced segments) and highlighting (an increased use of language-specifically relevant segments) might modify IDS on segmental level.
  • Tsuji, S. (2014). The road to native listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van der Zande, P., Jesse, A., & Cutler, A. (2014). Cross-speaker generalisation in two phoneme-level perceptual adaptation processes. Journal of Phonetics, 43, 38-46. doi:10.1016/j.wocn.2014.01.003.

    Abstract

    Speech perception is shaped by listeners' prior experience with speakers. Listeners retune their phonetic category boundaries after encountering ambiguous sounds in order to deal with variations between speakers. Repeated exposure to an unambiguous sound, on the other hand, leads to a decrease in sensitivity to the features of that particular sound. This study investigated whether these changes in the listeners' perceptual systems can generalise to the perception of speech from a novel speaker. Specifically, the experiments looked at whether visual information about the identity of the speaker could prevent generalisation from occurring. In Experiment 1, listeners retuned auditory category boundaries using audiovisual speech input. This shift in the category boundaries affected perception of speech from both the exposure speaker and a novel speaker. In Experiment 2, listeners were repeatedly exposed to unambiguous speech either auditorily or audiovisually, leading to a decrease in sensitivity to the features of the exposure sound. Here, too, the changes affected the perception of both the exposure speaker and the novel speaker. Together, these results indicate that changes in the perceptual system can affect the perception of speech from a novel speaker and that visual speaker identity information did not prevent this generalisation.
  • Van Gijn, R., Hammond, J., Matić, D., Van Putten, S., & Galucio, A. V. (Eds.). (2014). Information structure and reference tracking in complex sentences. Amsterdam: Benjamins.

    Abstract

    This volume is dedicated to exploring the crossroads where complex sentences and information management – more specifically information structure and reference tracking – come together. Complex sentences are a highly relevant but understudied domain for studying notions of IS and RT. On the one hand, a complex sentence can be studied as a mini-unit of discourse consisting of two or more elements describing events, situations, or processes, with its own internal information-structural and referential organization. On the other hand, complex sentences can be studied as parts of larger discourse structures, such as narratives or conversations, in terms of how their information-structural characteristics relate to this wider context. The book offers new perspectives for the study of the interaction between complex sentences and information management, and moreover adds typological breadth by focusing on lesser studied languages from several parts of the world.
  • Van Putten, S. (2014). Information structure in Avatime. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van de Velde, M., Meyer, A. S., & Konopka, A. E. (2014). Message formulation and structural assembly: Describing "easy" and "hard" events with preferred and dispreferred syntactic structures. Journal of Memory and Language, 71(1), 124-144. doi:10.1016/j.jml.2013.11.001.

    Abstract

    When formulating simple sentences to describe pictured events, speakers look at the referents they are describing in the order of mention. Accounts of incrementality in sentence production rely heavily on analyses of this gaze-speech link. To identify systematic sources of variability in message and sentence formulation, two experiments evaluated differences in formulation for sentences describing “easy” and “hard” events (more codable and less codable events) with preferred and dispreferred structures (actives and passives). Experiment 1 employed a subliminal cuing manipulation and a cumulative priming manipulation to increase production of passive sentences. Experiment 2 examined the influence of event codability on formulation without a cuing manipulation. In both experiments, speakers showed an early preference for looking at the agent of the event when constructing active sentences. This preference was attenuated by event codability, suggesting that speakers were less likely to prioritize encoding of a single character at the outset of formulation in “easy” events than in “harder” events. Accessibility of the agent influenced formulation primarily when an event was “harder” to describe. Formulation of passive sentences in Experiment 1 also began with early fixations to the agent but changed with exposure to passive syntax: speakers were more likely to consider the patient as a suitable sentential starting point after cumulative priming. The results show that the message-to-language mapping in production can vary with the ease of encoding an event structure and of generating a suitable linguistic structure.
  • Van Putten, S. (2014). Left-dislocation and subordination in Avatime (Kwa). In R. Van Gijn, J. Hammond, D. Matić, S. Van Putten, & A. V. Galucio (Eds.), Information structure and reference tracking in complex sentences (pp. 71-98). Amsterdam: Benjamins.

    Abstract

    Left dislocation is characterized by a sentence-initial element which is cross-referenced in the remainder of the sentence, and often set off by an intonation break. Because of these properties, left dislocation has been analyzed as an extraclausal phenomenon. Whether or not left dislocation can occur within subordinate clauses has been a matter of debate in the literature, but has never been checked against corpus data. This paper presents data from Avatime, a Kwa (Niger-Congo) language spoken in Ghana, showing that left dislocation occurs within subordinate clauses in spontaneous discourse. This poses a problem for the extraclausal analysis of left dislocation. I show that this problem can best be solved by assuming that Avatime allows the embedding of units larger than a clause.
  • Van der Zande, P., Jesse, A., & Cutler, A. (2014). Hearing words helps seeing words: A cross-modal word repetition effect. Speech Communication, 59, 31-43. doi:10.1016/j.specom.2014.01.001.

    Abstract

    Watching a speaker say words benefits subsequent auditory recognition of the same words. In this study, we tested whether hearing words also facilitates subsequent phonological processing from visual speech, and if so, whether speaker repetition influences the magnitude of this word repetition priming. We used long-term cross-modal repetition priming as a means to investigate the underlying lexical representations involved in listening to and seeing speech. In Experiment 1, listeners identified auditory-only words during exposure and visual-only words at test. Words at test were repeated or new and produced by the exposure speaker or a novel speaker. Results showed a significant effect of cross-modal word repetition priming but this was unaffected by speaker changes. Experiment 2 added an explicit recognition task at test. Listeners’ lipreading performance was again improved by prior exposure to auditory words. Explicit recognition memory was poor, and neither word repetition nor speaker repetition improved it. This suggests that cross-modal repetition priming is neither mediated by explicit memory nor improved by speaker information. Our results suggest that phonological representations in the lexicon are shared across auditory and visual processing, and that speaker information is not transferred across modalities at the lexical level.
  • Van de Velde, M., & Meyer, A. S. (2014). Syntactic flexibility and planning scope: The effect of verb bias on advance planning during sentence recall. Frontiers in Psychology, 5: 1174. doi:10.3389/fpsyg.2014.01174.

    Abstract

    In sentence production, grammatical advance planning scope depends on contextual factors (e.g., time pressure), linguistic factors (e.g., ease of structural processing), and cognitive factors (e.g., production speed). The present study tests the influence of the availability of multiple syntactic alternatives (i.e., syntactic flexibility) on the scope of advance planning during the recall of Dutch dative phrases. We manipulated syntactic flexibility by using verbs with a strong bias or a weak bias toward one structural alternative in sentence frames accepting both verbs (e.g., strong/weak bias: De ober schotelt/serveert de klant de maaltijd [voor] “The waiter dishes out/serves the customer the meal”). To assess lexical planning scope, we varied the frequency of the first post-verbal noun (N1, Experiment 1) or the second post-verbal noun (N2, Experiment 2). In each experiment, 36 speakers produced the verb phrases in a rapid serial visual presentation (RSVP) paradigm. On each trial, they read a sentence presented one word at a time, performed a short distractor task, and then saw a sentence preamble (e.g., De ober…) which they had to complete to form the presented sentence. Onset latencies were compared using linear mixed effects models. N1 frequency did not produce any effects. N2 frequency only affected sentence onsets in the weak verb bias condition and especially in slow speakers. These findings highlight the dependency of planning scope during sentence recall on the grammatical properties of the verb and the frequency of post-verbal nouns. Implications for utterance planning in everyday speech are discussed.
  • Van Rijswijk, R., & Muntendam, A. (2014). The prosody of focus in the Spanish of Quechua-Spanish bilinguals: A case study on noun phrases. International Journal of Bilingualism, 18(6), 614-632. doi:10.1177/1367006912456103.

    Abstract

    This study examines the prosody of focus in the Spanish of 16 Quechua-Spanish bilinguals near Cusco, Peru. Data come from a dialogue game that involved noun phrases consisting of a noun and an adjective. The questions in the game elicited broad focus, contrastive focus on the noun (non-final position) and contrastive focus on the adjective (final position). The phonetic analysis in Praat included peak alignment, peak height, local range and duration of the stressed syllable and word. The study revealed that Cusco Spanish differs from other Spanish varieties. In other Spanish varieties, contrastive focus is marked by early peak alignment, whereas broad focus involves a late peak on the non-final word. Furthermore, in other Spanish varieties contrastive focus is indicated by a higher F0 maximum, a wider local range, post-focal pitch reduction and a longer duration of the stressed syllable/word. For Cusco Spanish no phonological contrast between early and late peak alignment was found. However, peak alignment on the adjective in contrastive focus was significantly earlier than in the two other contexts. For women, similar results were found for the noun in contrastive focus. An additional prominence-lending feature marking contrastive focus concerned duration of the final word. Furthermore, the results revealed a higher F0 maximum for broad focus than for contrastive focus. The findings suggest a prosodic change, which is possibly due to contact with Quechua. The study contributes to research on information structure, prosody and contact-induced language change.
  • Veenstra, A., Acheson, D. J., Bock, K., & Meyer, A. S. (2014). Effects of semantic integration on subject–verb agreement: Evidence from Dutch. Language, Cognition and Neuroscience, 29(3), 355-380. doi:10.1080/01690965.2013.862284.

    Abstract

    The generation of subject–verb agreement is a central component of grammatical encoding. It is sensitive to conceptual and grammatical influences, but the interplay between these factors is still not fully understood. We investigate how semantic integration of the subject noun phrase (‘the secretary of/with the governor’) and the Local Noun Number (‘the secretary with the governor/governors’) affect the ease of selecting the verb form. Two hypotheses are assessed: according to the notional hypothesis, integration encourages the assignment of the singular notional number to the noun phrase and facilitates the choice of the singular verb form. According to the lexical interference hypothesis, integration strengthens the competition between nouns within the subject phrase, making it harder to select the verb form when the nouns mismatch in number. In two experiments, adult speakers of Dutch completed spoken preambles (Experiment 1) or selected appropriate verb forms (Experiment 2). Results showed facilitatory effects of semantic integration (fewer errors and faster responses with increasing integration). These effects did not interact with the effects of the Local Noun Number (slower response times and higher error rates for mismatching than for matching noun numbers). The findings thus support the notional hypothesis and a model of agreement where conceptual and lexical factors independently contribute to the determination of the number of the subject noun phrase and, ultimately, the verb.
  • Veenstra, A. (2014). Semantic and syntactic constraints on the production of subject-verb agreement. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Verkerk, A. (2014). Diachronic change in Indo-European motion event encoding. Journal of Historical Linguistics, 4, 40-83. doi:10.1075/jhl.4.1.02ver.

    Abstract

    There are many different syntactic constructions that languages can use to encode motion events. In recent decades, great advances have been made in the description and study of these syntactic constructions from languages spoken around the world (Talmy 1985, 1991, Slobin 1996, 2004). However, relatively little attention has been paid to historical change in these systems (exceptions are Vincent 1999, Dufresne, Dupuis & Tremblay 2003, Kopecka 2006 and Peyraube 2006). In this article, diachronic change of motion event encoding systems in Indo-European is investigated using the available historical–comparative data and phylogenetic comparative methods adopted from evolutionary biology. It is argued that Proto-Indo-European was not satellite-framed, as suggested by Talmy (2007) and Acedo Matellán and Mateu (2008), but had a mixed motion event encoding system, as is suggested by the available historical–comparative data.
  • Verkerk, A. (2014). The correlation between motion event encoding and path verb lexicon size in the Indo-European language family. Folia Linguistica Historica, 35, 307-358. doi:10.1515/flih.2014.009.

    Abstract

    There have been opposing views on the possibility of a relationship between motion event encoding and the size of the path verb lexicon. Özçalışkan (2004) has proposed that verb-framed and satellite-framed languages should have approximately the same number of path verbs, whereas a review of some of the literature suggests that verb-framed languages typically have a bigger path verb lexicon than satellite-framed languages. In this article I demonstrate that evidence for this correlation can be found through phylogenetic comparative analysis of parallel corpus data from twenty Indo-European languages.
  • Verkerk, A. (2014). The evolutionary dynamics of motion event encoding. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Verkerk, A. (2014). Where Alice fell into: Motion events from a parallel corpus. In B. Szmrecsanyi, & B. Wälchli (Eds.), Aggregating dialectology, typology, and register analysis: Linguistic variation in text and speech (pp. 324-354). Berlin: De Gruyter.
