  • Piai, V., Roelofs, A., & Roete, I. (2015). Semantic interference in picture naming during dual-task performance does not vary with reading ability. Quarterly Journal of Experimental Psychology, 68(9), 1758-1768. doi:10.1080/17470218.2014.985689.

    Abstract

    Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may be due to better reading skill among the participants in those studies than in the others. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading-ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill.
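
    The individual-differences test described above can be illustrated with a minimal Python sketch: regress each participant's semantic interference effect (related minus unrelated naming latency) on a reading-ability score and ask whether the slope differs from zero. All scores and effect sizes below are invented for illustration; this is not the paper's analysis code.

    # Hedged sketch of the reading-ability test: does reading skill predict
    # the size of the semantic interference effect? Data are invented.
    import numpy as np
    from scipy.stats import linregress

    reading_score = np.array([45, 52, 58, 60, 63, 67, 70, 74, 80, 85])
    interference_ms = np.array([38, 41, 35, 44, 39, 42, 36, 40, 37, 43])

    fit = linregress(reading_score, interference_ms)
    print(f"slope = {fit.slope:.2f} ms per score point, p = {fit.pvalue:.2f}")
    # A flat, non-significant slope would mirror the paper's conclusion that
    # reading ability does not predict the interference effect.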
  • Rivero, O., Selten, M. M., Sich, S., Popp, S., Bacmeister, L., Amendola, E., Negwer, M., Schubert, D., Proft, F., Kiser, D., Schmitt, A. G., Gross, C., Kolk, S. M., Strekalova, T., van den Hove, D., Resink, T. J., Nadif Kasri, N., & Lesch, K. P. (2015). Cadherin-13, a risk gene for ADHD and comorbid disorders, impacts GABAergic function in hippocampus and cognition. Translational Psychiatry, 5: e655. doi:10.1038/tp.2015.152.

    Abstract

    Cadherin-13 (CDH13), a unique glycosylphosphatidylinositol-anchored member of the cadherin family of cell adhesion molecules, has been identified as a risk gene for attention-deficit/hyperactivity disorder (ADHD) and various comorbid neurodevelopmental and psychiatric conditions, including depression, substance abuse, autism spectrum disorder and violent behavior, while the mechanism whereby CDH13 dysfunction influences pathogenesis of neuropsychiatric disorders remains elusive. Here we explored the potential role of CDH13 in the inhibitory modulation of brain activity by investigating synaptic function of GABAergic interneurons. Cellular and subcellular distribution of CDH13 was analyzed in the murine hippocampus and a mouse model with a targeted inactivation of Cdh13 was generated to evaluate how CDH13 modulates synaptic activity of hippocampal interneurons and behavioral domains related to psychopathologic (endo)phenotypes. We show that CDH13 expression in the cornu ammonis (CA) region of the hippocampus is confined to distinct classes of interneurons. Specifically, CDH13 is expressed by numerous parvalbumin and somatostatin-expressing interneurons located in the stratum oriens, where it localizes to both the soma and the presynaptic compartment. Cdh13−/− mice show an increase in basal inhibitory, but not excitatory, synaptic transmission in CA1 pyramidal neurons. Associated with these alterations in hippocampal function, Cdh13−/− mice display deficits in learning and memory. Taken together, our results indicate that CDH13 is a negative regulator of inhibitory synapses in the hippocampus, and provide insights into how CDH13 dysfunction may contribute to the excitatory/inhibitory imbalance observed in neurodevelopmental disorders, such as ADHD and autism.
  • Rojas-Berscia, L. M. (2015). Mayna, the lost Kawapanan language. LIAMES, 15, 393-407. Retrieved from http://revistas.iel.unicamp.br/index.php/liames/article/view/4549.

    Abstract

    The origins of the Mayna language, formerly spoken in northwest Peruvian Amazonia, remain a mystery for most scholars. Several discussions of it took place at the end of the 19th century and the beginning of the 20th; however, none arrived at a consensus. Apart from an article by Taylor & Descola (1981) suggesting a relationship with the Jivaroan language family, little to nothing has been said about it since the second half of the 20th century. The present article gives a summary of the principal accounts of the language and its people from the 19th and 20th centuries, followed by a corpus analysis in which the materials available in Mayna and Kawapanan, mainly prayers collected by Hervás (1787) and Teza (1868), are analysed and compared for the first time in light of recent work in the young field of Kawapanan linguistics (Barraza de García 2005a,b; Valenzuela-Bismarck 2011a,b; Valenzuela 2013; Rojas-Berscia 2013, 2014; Madalengoitia-Barúa 2013; Farfán-Reto 2012). The aim is to test the language's affiliation to the Kawapanan language family, as claimed by Beuchat & Rivet (1909), and to account for its place in the dialectology of this family.
  • Rojas-Berscia, L. M., & Ghavami Dicker, S. (2015). Teonimia en el Alto Amazonas, el caso de Kanpunama [Theonymy in the Upper Amazon, the case of Kanpunama]. Escritura y Pensamiento, 18(36), 117-146.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.

    Abstract

    During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing.
  • Rossi, G. (2015). Other-initiated repair in Italian. Open Linguistics, 1(1), 256-282. doi:10.1515/opli-2015-0002.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video recorded conversation in the Italian language (Romance). The article reports findings specific to the Italian language from the comparative project that is the topic of this special issue. While giving an overview of all the major practices for other-initiation of repair found in this language, special attention is given to (i) the functional distinctions between different open strategies (interjection, question words, formulaic), and (ii) the role of intonation in discriminating alternative restricted strategies, with a focus on different contour types used to produce repetitions.
  • Rossi, G. (2015). Responding to pre-requests: The organization of hai x ‘do you have x’ sequences in Italian. Journal of Pragmatics, 82, 5-22. doi:10.1016/j.pragma.2015.03.008.

    Abstract

    Among the strategies used by people to request others to do things, there is a particular family defined as pre-requests. The typical function of a pre-request is to check whether some precondition obtains for a request to be successfully made. A form like the Italian interrogative hai x ‘do you have x’, for example, is used to ask if an object is available — a requirement for the object to be transferred or manipulated. But what does it mean exactly to make a pre-request? What difference does it make compared to issuing a request proper? In this article, I address these questions by examining the use of hai x ‘do you have x’ interrogatives in a corpus of informal Italian interaction. Drawing on methods from conversation analysis and linguistics, I show that the status of hai x as a pre-request is reflected in particular properties in the domains of preference and sequence organisation, specifically in the design of blocking responses to the pre-request, and in the use of go-ahead responses, which lead to the expansion of the request sequence. This study contributes to current research on requesting as well as on sequence organisation by demonstrating the response affordances of pre-requests and by furthering our understanding of the processes of sequence expansion.
  • Rossi, G. (2015). The request system in Italian interaction. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    People across the world make requests every day. We constantly rely on others to get by in the small and big practicalities of everyday life, be it getting the salt, moving a sofa, or cooking a meal. It has long been noticed that when we ask others for help we use a wide range of forms drawing on various resources afforded by our language and body. To get another to pass the salt, for example, we may say ‘Pass the salt’, or ask ‘Can you pass me the salt?’, or simply point to the salt. What do different forms of requesting give us? The short answer is that they allow us to manage different social relations. But what kind of relations? While prior research has mostly emphasised the role of long-term asymmetries like people’s social distance and relative power, this thesis puts at centre stage social relations and dimensions emerging in the moment-by-moment flow of everyday interaction. These include how easy or hard the action requested is to anticipate for the requestee, whether the action requested contributes to a joint project or serves an individual one, whether the requestee may be unwilling to do it, and how obvious or equivocal it is that a certain person or another should be involved in the action. The study focuses on requests made in everyday informal interactions among speakers of Italian. It involves over 500 instances of requests sampled from a diverse corpus of video recordings, and draws on methods from conversation analysis, linguistics and multimodal analysis. A qualitative analysis of the data is supported by quantitative measures of the distribution of linguistic and interactional features, and by the use of inferential statistics to test the generalizability of some of the patterns observed. The thesis aims to contribute to our understanding of both language and social interaction by showing that forms of requesting constitute a system, organised by a set of recurrent social-interactional concerns.

    Additional information

    full text via Radboud Repository
  • San Roque, L., Kendrick, K. H., Norcliffe, E., Brown, P., Defina, R., Dingemanse, M., Dirksmeyer, T., Enfield, N. J., Floyd, S., Hammond, J., Rossi, G., Tufvesson, S., Van Putten, S., & Majid, A. (2015). Vision verbs dominate in conversation across cultures, but the ranking of non-visual verbs varies. Cognitive Linguistics, 26, 31-60. doi:10.1515/cog-2014-0089.

    Abstract

    To what extent does perceptual language reflect universals of experience and cognition, and to what extent is it shaped by particular cultural preoccupations? This paper investigates the universality and relativity of perceptual language by examining the use of basic perception terms in spontaneous conversation across 13 diverse languages and cultures. We analyze the frequency of perception words to test two universalist hypotheses: that sight is always a dominant sense, and that the relative ranking of the senses will be the same across different cultures. We find that references to sight outstrip references to the other senses, suggesting a pan-human preoccupation with visual phenomena. However, the relative frequency of the other senses was found to vary cross-linguistically. Cultural relativity was conspicuous, as exemplified by the high ranking of smell in Semai, an Aslian language. Together these results suggest a place for both universal constraints and cultural shaping of the language of perception.
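
    As a minimal sketch of the frequency analysis described above: count how often verbs for each sense modality occur in a conversational transcript and rank the senses. The verb lists and sample below are hypothetical English placeholders, not the project's actual cross-linguistic materials.

    # Count perception-verb tokens per sense modality and rank the senses.
    # Verb sets and the sample transcript are invented placeholders.
    from collections import Counter

    PERCEPTION_VERBS = {
        "sight": {"see", "look", "watch"},
        "hearing": {"hear", "listen"},
        "touch": {"feel", "touch"},
        "taste": {"taste"},
        "smell": {"smell", "sniff"},
    }

    def rank_senses(tokens):
        """Return sense modalities ordered by perception-verb frequency."""
        counts = Counter()
        for token in tokens:
            for sense, verbs in PERCEPTION_VERBS.items():
                if token.lower() in verbs:
                    counts[sense] += 1
        return counts.most_common()

    sample = "look I see what you mean did you hear that smell".split()
    print(rank_senses(sample))  # [('sight', 2), ('hearing', 1), ('smell', 1)]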
  • Schepens, J. (2015). Bridging linguistic gaps: The effects of linguistic distance on adult learnability of Dutch as an additional language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 211-216). Nantes: Université de Nantes.

    Abstract

    Speakers can adapt their speech and co-speech gestures for addressees. Here, we investigate whether this ability is modulated by age. Younger and older adults participated in a comic narration task in which one participant (the speaker) narrated six short comic stories to another participant (the addressee). One half of each story was known to both participants, the other half only to the speaker. Younger but not older speakers used more words and gestures when narrating novel story content as opposed to known content. We discuss cognitive and pragmatic explanations of these findings and relate them to theories of gesture production.
  • Schubotz, L., Oostdijk, N., & Ernestus, M. (2015). Y’know vs. you know: What phonetic reduction can tell us about pragmatic function. In S. Lestrade, P. De Swart, & L. Hogeweg (Eds.), Addenda: Artikelen voor Ad Foolen [Addenda: Articles for Ad Foolen] (pp. 361-380). Nijmegen: Radboud University.
  • Schuerman, W. L., Nagarajan, S., & Houde, J. (2015). Changes in consonant perception driven by adaptation of vowel production to altered auditory feedback. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Adaptation to altered auditory feedback has been shown to induce subsequent shifts in perception. However, it is uncertain whether these perceptual changes may generalize to other speech sounds. In this experiment, we tested whether exposing the production of a vowel to altered auditory feedback affects perceptual categorization of a consonant distinction. In two sessions, participants produced CVC words containing the vowel /i/, while intermittently categorizing stimuli drawn from a continuum between "see" and "she." In the first session feedback was unaltered, while in the second session the formants of the vowel were shifted 20% towards /u/. Adaptation to the altered vowel was found to reduce the proportion of perceived /ʃ/ stimuli. We suggest that this reflects an alteration to the sensorimotor mapping that is shared between vowels and consonants.
  • Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2015). Do we perceive others better than ourselves? A perceptual benefit for noise-vocoded speech produced by an average speaker. PLoS One, 10(7): e0129731. doi:10.1371/journal.pone.0129731.

    Abstract

    In different tasks involving action perception, performance has been found to be facilitated when the presented stimuli were produced by the participants themselves rather than by another participant. These results suggest that the same mental representations are accessed during both production and perception. However, with regard to spoken word perception, evidence also suggests that listeners’ representations for speech reflect the input from their surrounding linguistic community rather than their own idiosyncratic productions. Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners to frame their interpretations of incoming speech signals with regard to speaker identity. In order to determine whether word recognition evinces similar self-advantages as found in action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech representations accessed during perception of noise-vocoded speech are more reflective of the input of the speech community, and hence that speech perception is not necessarily based on representations of one’s own speech.
  • Smith, A. C. (2015). Modelling multimodal language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sumer, B. (2015). Acquisition of spatial language by signing and speaking children: A comparison of Turkish Sign Language (TID) and Turkish. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Torreira, F., & Valtersson, E. (2015). Phonetic and visual cues to questionhood in French conversation. Phonetica, 72, 20-42. doi:10.1159/000381723.

    Abstract

    We investigate the extent to which French polar questions and continuation statements, two types of utterances with similar morphosyntactic and intonational forms but different pragmatic functions, can be distinguished in conversational data based on phonetic and visual bodily information. We show that the two utterance types can be distinguished well over chance level by automatic classification models including several phonetic and visual cues. We also show that a considerable amount of relevant phonetic and visual information is present before the last portion of the utterances, potentially assisting early speech act recognition by addressees. These findings indicate that bottom-up phonetic and visual cues may play an important role during the production and recognition of speech acts alongside top-down contextual information.
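
    A rough illustration of this kind of automatic classification, under assumed features: fit a logistic-regression model on phonetic and visual cues and compare cross-validated accuracy against the 50% chance level. The cue names and random data are placeholders; the authors' actual models and feature set may differ.

    # Classify utterances as question (1) vs. continuation statement (0)
    # from phonetic/visual cues; score accuracy against chance (0.50).
    # Features and data are invented stand-ins for the corpus measurements.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    # Hypothetical cues: final F0 slope, utterance duration, gaze to addressee.
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=n) > 0).astype(int)

    acc = cross_val_score(LogisticRegression(), X, y, cv=10, scoring="accuracy")
    print(f"mean CV accuracy: {acc.mean():.2f} (chance = 0.50)")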
  • Tsuji, S., Mazuka, R., Cristia, A., & Fikkert, P. (2015). Even at 4 months, a labial is a good enough coronal, but not vice versa. Cognition, 134, 252-256. doi:10.1016/j.cognition.2014.10.009.

    Abstract

    Numerous studies have revealed an asymmetry tied to the perception of coronal place of articulation: participants accept a labial mispronunciation of a coronal target, but not vice versa. Whether this asymmetry is based on language-general properties or arises from language-specific experience has been a matter of debate. The current study suggests a bias of the first type by documenting an early, cross-linguistic asymmetry related to coronal place of articulation. Japanese and Dutch 4- and 6-month-old infants showed evidence of discrimination if they were habituated to a labial and then tested on a coronal sequence, but not vice versa. This finding has important implications for both phonological theories and infant speech perception research.

    Additional information

    Tsuji_etal_suppl_2014.xlsx
  • Unsworth, S., Persson, L., Prins, T., & De Bot, K. (2015). An investigation of factors affecting early foreign language learning in the Netherlands. Applied Linguistics, 36(5), 527-548. doi:10.1093/applin/amt052.
  • Van de Velde, M., Kempen, G., & Harbusch, K. (2015). Dative alternation and planning scope in spoken language: A corpus study on effects of verb bias in VO and OV clauses of Dutch. Lingua, 165, 92-108. doi:10.1016/j.lingua.2015.07.006.

    Abstract

    The syntactic structure of main and subordinate clauses is determined to a considerable extent by verb biases. For example, some English and Dutch ditransitive verbs have a preference for the prepositional object dative, whereas others are typically used with the double object dative. In this study, we compare the effect of these biases on structure selection in (S)VO and (S)OV dative clauses in the Corpus of Spoken Dutch (CGN). This comparison allowed us to make inferences about the size of the advance planning scope during spontaneous speaking: If the verb is an obligatory component of clause-level advance planning scope, as is claimed by the hypothesis of hierarchical incrementality, then biases should exert their influence on structure choices, regardless of early (VO) or late (OV) position of the verb in the clause. Conversely, if planning proceeds in a piecemeal fashion, strictly guided by lexical availability, as claimed by linear incrementality, then the verb and its associated biases can only influence structure choices in VO sentences. We tested these predictions by analyzing structure choices in the CGN, using mixed logit models. Our results support a combination of linear and hierarchical incrementality, showing a significant influence of verb bias on structure choices in VO, and a weaker (but still significant) effect in OV clauses.
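
    For readers unfamiliar with the modelling approach, here is a deliberately simplified stand-in (plain fixed-effects logistic regression, not the mixed logit models with random effects that the authors fit): predict the chosen dative structure from verb bias, clause type, and their interaction. The data frame is a hypothetical substitute for the CGN annotations.

    # Predict structure choice (double object = 1, prepositional = 0) from
    # verb bias and clause type (VO vs. OV), with their interaction. Unlike
    # the study, random effects are omitted; the data are invented.
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.DataFrame({
        "double_object": [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1],
        "verb_bias": [0.8, 0.7, 0.2, 0.3, 0.9, 0.1,
                      0.6, 0.4, 0.7, 0.8, 0.3, 0.2],
        "clause": ["VO"] * 6 + ["OV"] * 6,
    })

    # The interaction term asks whether the verb-bias effect differs by clause type.
    model = smf.logit("double_object ~ verb_bias * clause", data=data).fit()
    print(model.summary())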
  • Van de Velde, M. (2015). Incrementality and flexibility in sentence production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Rhijn, J. R., & Vernes, S. C. (2015). Retinoic acid signaling: A new piece in the spoken language puzzle. Frontiers in Psychology, 6: 1816. doi:10.3389/fpsyg.2015.01816.

    Abstract

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech-motor output. Understanding the neuro-genetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders.
    FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuro-molecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular and behavioral levels that suggest an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output.
    We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language ready brain.
  • Verhees, M. W. F. T., Chwilla, D. J., Tromp, J., & Vissers, C. T. W. M. (2015). Contributions of emotional state and attention to the processing of syntactic agreement errors: Evidence from P600. Frontiers in Psychology, 6: 388. doi:10.3389/fpsyg.2015.00388.

    Abstract

    The classic account of language is that language processing occurs in isolation from other cognitive systems, like perception, motor action, and emotion. The central theme of this paper is the relationship between a participant’s emotional state and language comprehension. Does emotional context affect how we process neutral words? Recent studies showed that processing of word meaning – traditionally conceived as an automatic process – is affected by emotional state. The influence of emotional state on syntactic processing is less clear. One study reported a mood-related P600 modulation, while another study did not observe an effect of mood on syntactic processing. The goals of this study were, first, to clarify whether, and if so how, mood affects syntactic processing, and second, to shed light on the underlying mechanisms by separating possible effects of mood from those of attention on syntactic processing. Event-related potentials (ERPs) were recorded while participants read syntactically correct or incorrect sentences. Mood (happy vs. sad) was manipulated by presenting film clips. Attention was manipulated by directing attention to syntactic features vs. physical features. The mood induction was effective. Interactions between mood, attention, and syntactic correctness were obtained, showing that mood and attention modulated the P600. The mood manipulation led to a reduction in P600 for sad as compared to happy mood when attention was directed at syntactic features. The attention manipulation led to a reduction in P600 when attention was directed at physical features compared to syntactic features for happy mood. From this we draw two conclusions: First, emotional state does affect syntactic processing; we propose mood-related differences in the reliance on heuristics as the underlying mechanism. Second, attention can contribute to emotion-related ERP effects in syntactic language processing. Therefore, future studies on the relation between language and emotion will have to control for effects of attention.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2015). Syntactic predictability in the recognition of carefully and casually produced speech. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(6), 1684-1702. doi:10.1037/a0039326.
  • Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2015). Automaticity and stability of adaptation to foreign-accented speech. Language and Speech, 58(2), 168-189. doi:10.1177/0023830914528102.

    Abstract

    In three cross-modal priming experiments we asked whether adaptation to a foreign-accented speaker is automatic, and whether adaptation can be seen after a long delay between initial exposure and test. Dutch listeners were exposed to a Hebrew-accented Dutch speaker with two types of Dutch words: those that contained [ɪ] (globally accented words), and those in which the Dutch [i] was shortened to [ɪ] (specific accent marker words). Experiment 1, which served as a baseline, showed that native Dutch participants showed facilitatory priming for globally accented, but not specific accent marker, words. In Experiment 2, participants performed a 3.5-minute phoneme monitoring task, and were tested on their comprehension of the accented speaker 24 hours later using the same cross-modal priming task as in Experiment 1. During the phoneme monitoring task, listeners were asked to detect a consonant that was not strongly accented. In Experiment 3, the delay between exposure and test was extended to 1 week. Listeners in Experiments 2 and 3 showed facilitatory priming for both globally accented and specific accent marker words. Together, these results show that adaptation to a foreign-accented speaker can be rapid and automatic, and can be observed after a prolonged delay in testing.
  • Zhou, W. (2015). Assessing birth language memory in young adoptees. PhD Thesis, Radboud University Nijmegen, Nijmegen.
