Noordman, L. G. M., & Vonk, W. (2015). Inferences in Discourse, Psychology of. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd ed.) Vol. 12 (pp. 37-44). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.57012-3.
Abstract
An inference is defined as the information that is not expressed explicitly by the text but is derived on the basis of the understander's knowledge and is encoded in the mental representation of the text. Inferencing is considered as a central component in discourse understanding. Experimental methods to detect inferences, established findings, and some developments are reviewed. Attention is paid to the relation between inference processes and the brain. -
Norcliffe, E., Harris, A., & Jaeger, T. F. (2015). Cross-linguistic psycholinguistics and its critical role in theory development: early beginnings and recent advances. Language, Cognition and Neuroscience, 30(9), 1009-1032. doi:10.1080/23273798.2015.1080373.
Abstract
Recent years have seen a small but growing body of psycholinguistic research focused on typologically diverse languages. This represents an important development for the field, where theorising is still largely guided by the often implicit assumption of universality. This paper introduces a special issue of Language, Cognition and Neuroscience devoted to the topic of cross-linguistic and field-based approaches to the study of psycholinguistics. The papers in this issue draw on data from a variety of genetically and areally divergent languages, to address questions in the production and comprehension of phonology, morphology, words, and sentences. To contextualise these studies, we provide an overview of the field of cross-linguistic psycholinguistics, from its early beginnings to the present day, highlighting instances where cross-linguistic data have significantly contributed to psycholinguistic theorising. -
Norcliffe, E., & Jaeger, T. F. (2016). Predicting head-marking variability in Yucatec Maya relative clause production. Language and Cognition, 8(2), 167-205. doi:10.1017/langcog.2014.39.
Abstract
Recent proposals hold that the cognitive systems underlying language production exhibit computational properties that facilitate communicative efficiency, i.e., an efficient trade-off between production ease and robust information transmission. We contribute to the cross-linguistic evaluation of the communicative efficiency hypothesis by investigating speakers’ preferences in the production of a typologically rare head-marking alternation that occurs in relative clause constructions in Yucatec Maya. In a sentence recall study, we find that speakers of Yucatec Maya prefer to use reduced forms of relative clause verbs when the relative clause is more contextually expected. This result is consistent with communicative efficiency and thus supports its typological generalizability. We compare two types of cue to the presence of a relative clause, pragmatic cues previously investigated in other languages and a highly predictive morphosyntactic cue specific to Yucatec. We find that Yucatec speakers’ preferences for a reduced verb form are primarily conditioned on the more informative cue. This demonstrates the role of both general principles of language production and their language-specific realizations. -
Norcliffe, E., Konopka, A. E., Brown, P., & Levinson, S. C. (2015). Word order affects the time course of sentence formulation in Tzeltal. Language, Cognition and Neuroscience, 30(9), 1187-1208. doi:10.1080/23273798.2015.1006238.
Abstract
The scope of planning during sentence formulation is known to be flexible, as it can be influenced by speakers' communicative goals and language production pressures (among other factors). Two eye-tracked picture description experiments tested whether the time course of formulation is also modulated by grammatical structure and thus whether differences in linear word order across languages affect the breadth and order of conceptual and linguistic encoding operations. Native speakers of Tzeltal [a primarily verb–object–subject (VOS) language] and Dutch [a subject–verb–object (SVO) language] described pictures of transitive events. Analyses compared speakers' choice of sentence structure across events with more accessible and less accessible characters as well as the time course of formulation for sentences with different word orders. Character accessibility influenced subject selection in both languages in subject-initial and subject-final sentences, ruling against a radically incremental formulation process. In Tzeltal, subject-initial word orders were preferred over verb-initial orders when event characters had matching animacy features, suggesting a possible role for similarity-based interference in influencing word order choice. Time course analyses revealed a strong effect of sentence structure on formulation: In subject-initial sentences, in both Tzeltal and Dutch, event characters were largely fixated sequentially, while in verb-initial sentences in Tzeltal, relational information received priority over encoding of either character during the earliest stages of formulation. The results show a tight parallelism between grammatical structure and the order of encoding operations carried out during sentence formulation. -
Norcliffe, E., & Konopka, A. E. (2015). Vision and language in cross-linguistic research on sentence production. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and vision in language processing (pp. 77-96). New York: Springer. doi:10.1007/978-81-322-2443-3_5.
Abstract
To what extent are the planning processes involved in producing sentences fine-tuned to grammatical properties of specific languages? In this chapter we survey the small body of cross-linguistic research that bears on this question, focusing in particular on recent evidence from eye-tracking studies. Because eye-tracking methods provide a very fine-grained temporal measure of how conceptual and linguistic planning unfold in real time, they serve as an important complement to standard psycholinguistic methods. Moreover, the advent of portable eye-trackers in recent years has, for the first time, allowed eye-tracking techniques to be used with language populations that are located far away from university laboratories. This has created the exciting opportunity to extend the typological base of vision-based psycholinguistic research and address key questions in language production with new language comparisons. -
Norris, D., McQueen, J. M., & Cutler, A. (2016). Prediction, Bayesian inference and feedback in speech recognition. Language, Cognition and Neuroscience, 31(1), 4-18. doi:10.1080/23273798.2015.1081703.
Abstract
Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models. -
Okbay, A., Beauchamp, J. P., Fontana, M. A., Lee, J. J., Pers, T. H., Rietveld, C. A., Turley, P., Chen, G. B., Emilsson, V., Meddens, S. F. W., Oskarsson, S., Pickrell, J. K., Thom, K., Timshel, P., De Vlaming, R., Abdellaoui, A., Ahluwalia, T. S., Bacelis, J., Baumbach, C., Bjornsdottir, G., Brandsma, J., Pina Concas, M., Derringer, J., Furlotte, N. A., Galesloot, T. E., Girotto, G., Gupta, R., Hall, L. M., Harris, S. E., Hofer, E., Horikoshi, M., Huffman, J. E., Kaasik, K., Kalafati, I. P., Karlsson, R., Kong, A., Lahti, J., Lee, S. J. V. D., DeLeeuw, C., Lind, P. A., Lindgren, K.-O., Liu, T., Mangino, M., Marten, J., Mihailov, E., Miller, M. B., Van der Most, P. J., Oldmeadow, C., Payton, A., Pervjakova, N., Peyrot, W. J., Qian, Y., Raitakari, O., Rueedi, R., Salvi, E., Schmidt, B., Schraut, K. E., Shi, J., Smith, A. V., Poot, R. A., St Pourcain, B., Teumer, A., Thorleifsson, G., Verweij, N., Vuckovic, D., Wellmann, J., Westra, H.-J., Yang, J., Zhao, W., Zhu, Z., Alizadeh, B. Z., Amin, N., Bakshi, A., Baumeister, S. E., Biino, G., Bønnelykke, K., Boyle, P. A., Campbell, H., Cappuccio, F. P., Davies, G., De Neve, J.-E., Deloukas, P., Demuth, I., Ding, J., Eibich, P., Eisele, L., Eklund, N., Evans, D. M., Faul, J. D., Feitosa, M. F., Forstner, A. J., Gandin, I., Gunnarsson, B., Halldórsson, B. V., Harris, T. B., Heath, A. C., Hocking, L. J., Holliday, E. G., Homuth, G., Horan, M. A., Hottenga, J.-J., De Jager, P. L., Joshi, P. K., Jugessur, A., Kaakinen, M. A., Kähönen, M., Kanoni, S., Keltikangas-Järvinen, L., Kiemeney, L. A. L. M., Kolcic, I., Koskinen, S., Kraja, A. T., Kroh, M., Kutalik, Z., Latvala, A., Launer, L. J., Lebreton, M. P., Levinson, D. F., Lichtenstein, P., Lichtner, P., Liewald, D. C. M., LifeLines Cohort Study, Loukola, A., Madden, P. A., Mägi, R., Mäki-Opas, T., Marioni, R. E., Marques-Vidal, P., Meddens, G. A., McMahon, G., Meisinger, C., Meitinger, T., Milaneschi, Y., Milani, L., Montgomery, G. W., Myhre, R., Nelson, C. P., Nyholt, D. R., Ollier, W. E. R., Palotie, A., Paternoster, L., Pedersen, N. L., Petrovic, K. E., Porteous, D. J., Räikkönen, K., Ring, S. M., Robino, A., Rostapshova, O., Rudan, I., Rustichini, A., Salomaa, V., Sanders, A. R., Sarin, A.-P., Schmidt, H., Scott, R. J., Smith, B. H., Smith, J. A., Staessen, J. A., Steinhagen-Thiessen, E., Strauch, K., Terracciano, A., Tobin, M. D., Ulivi, S., Vaccargiu, S., Quaye, L., Van Rooij, F. J. A., Venturini, C., Vinkhuyzen, A. A. E., Völker, U., Völzke, H., Vonk, J. M., Vozzi, D., Waage, J., Ware, E. B., Willemsen, G., Attia, J. R., Bennett, D. A., Berger, K., Bertram, L., Bisgaard, H., Boomsma, D. I., Borecki, I. B., Bültmann, U., Chabris, C. F., Cucca, F., Cusi, D., Deary, I. J., Dedoussis, G. V., Van Duijn, C. M., Eriksson, J. G., Franke, B., Franke, L., Gasparini, P., Gejman, P. V., Gieger, C., Grabe, H.-J., Gratten, J., Groenen, P. J. F., Gudnason, V., Van der Harst, P., Hayward, C., Hinds, D. A., Hoffmann, W., Hyppönen, E., Iacono, W. G., Jacobsson, B., Järvelin, M.-R., Jöckel, K.-H., Kaprio, J., Kardia, S. L. R., Lehtimäki, T., Lehrer, S. F., Magnusson, P. K. E., Martin, N. G., McGue, M., Metspalu, A., Pendleton, N., Penninx, B. W. J. H., Perola, M., Pirastu, N., Pirastu, M., Polasek, O., Posthuma, D., Power, C., Province, M. A., Samani, N. J., Schlessinger, D., Schmidt, R., Sørensen, T. I. A., Spector, T. D., Stefansson, K., Thorsteinsdottir, U., Thurik, A. R., Timpson, N. J., Tiemeier, H., Tung, J. Y., Uitterlinden, A. G., Vitart, V., Vollenweider, P., Weir, D. R., Wilson, J. F., Wright, A. F., Conley, D. C., Krueger, R. F., Davey Smith, G., Hofman, A., Laibson, D. I., Medland, S. E., Meyer, M. N., Yang, J., Johannesson, M., Visscher, P. M., Esko, T., Koellinger, P. D., Cesarini, D., & Benjamin, D. J. (2016). Genome-wide association study identifies 74 loci associated with educational attainment. Nature, 533, 539-542. doi:10.1038/nature17671.
Abstract
Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases -
O'Meara, C., & Majid, A. (2016). How changing lifestyles impact Seri smellscapes and smell language. Anthropological Linguistics, 58(2), 107-131. doi:10.1353/anl.2016.0024.
Abstract
The sense of smell has widely been viewed as inferior to the other senses. This is reflected in the lack of treatment of olfaction in ethnographies and linguistic descriptions. We present novel data from the olfactory lexicon of Seri, a language isolate of Mexico, which sheds new light onto the possibilities for olfactory terminologies. We also present the Seri smellscape, highlighting the cultural significance of odors in Seri culture which, along with the olfactory language, is now under threat as globalization takes hold and traditional ways of life are transformed. -
Orfanidou, E., McQueen, J. M., Adam, R., & Morgan, G. (2015). Segmentation of British Sign Language (BSL): Mind the gap! Quarterly Journal of Experimental Psychology, 68, 641-663. doi:10.1080/17470218.2014.945467.
Abstract
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms -
Ortega, G., & Ozyurek, A. (2016). Generalisable patterns of gesture distinguish semantic categories in communication without language. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1182-1187). Austin, TX: Cognitive Science Society.
Abstract
There is a long-standing assumption that gestural forms are geared by a set of modes of representation (acting, representing, drawing, moulding) with each technique expressing speakers’ focus of attention on specific aspects of referents (Müller, 2013). Beyond different taxonomies describing the modes of representation, it remains unclear what factors motivate certain depicting techniques over others. Results from a pantomime generation task show that pantomimes are not entirely idiosyncratic but rather follow generalisable patterns constrained by their semantic category. We show that a) specific modes of representations are preferred for certain objects (acting for manipulable objects and drawing for non-manipulable objects); and b) that use and ordering of deictics and modes of representation operate in tandem to distinguish between semantically related concepts (e.g., “to drink” vs “mug”). This study provides yet more evidence that our ability to communicate through silent gesture reveals systematic ways to describe events and objects around us.
Additional information
https://mindmodeling.org/cogsci2016/papers/0212/index.html -
Ortega, G., & Morgan, G. (2015). Input processing at first exposure to a sign language. Second Language Research, 19(10), 443-463. doi:10.1177/0267658315576822.
Abstract
There is growing interest in learners’ cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back on their L1 to process novel signs because the modality differences between speech (aural–oral) and sign (visual-manual) do not allow for direct cross-linguistic influence. Sign language learners might use alternative strategies to process input expressed in the manual channel. Learners may rely on iconicity, the direct relationship between a sign and its referent. Evidence up to now has shown that iconicity facilitates learning in non-signers, but it is unclear whether it also facilitates sign production. In order to fill this gap, the present study investigated how iconicity influenced articulation of the phonological components of signs. In Study 1, hearing non-signers viewed a set of iconic and arbitrary signs along with their English translations and repeated the signs as accurately as possible immediately after. The results show that participants imitated iconic signs significantly less accurately than arbitrary signs. In Study 2, a second group of hearing non-signers imitated the same set of signs but without the accompanying English translations. The same lower accuracy for iconic signs was observed. We argue that learners rely on iconicity to process manual input because it brings familiarity to the target (sign) language. However, this reliance comes at a cost as it leads to a more superficial processing of the signs’ full phonetic form. The present findings add to our understanding of learners’ cognitive capacities at first exposure to a signed L2, and raises new theoretical questions in the field of second language acquisition -
Ortega, G. (2016). Language acquisition and development. In G. Gertz (Ed.), The SAGE Deaf Studies Encyclopedia. Vol. 3 (pp. 547-551). London: SAGE Publications Inc. -
Ortega, G., & Morgan, G. (2015). Phonological development in hearing learners of a sign language: The role of sign complexity and iconicity. Language Learning, 65(3), 660-668. doi:10.1111/lang.12123.
Abstract
The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that some sign components are produced more accurately than others: Handshape was the most difficult, followed by movement, then orientation, and finally location. Iconic signs were articulated less accurately than arbitrary signs because the direct sign-referent mappings and perhaps their similarity with iconic co-speech gestures prevented learners from focusing on the exact phonological structure of the sign. This study shows that multiple phonological features pose greater demand on the production of the parameters of signs and that iconicity interferes in the exact articulation of their constituents -
Ortega, G., & Morgan, G. (2015). The effect of sign iconicity in the mental lexicon of hearing non-signers and proficient signers: Evidence of cross-modal priming. Language, Cognition and Neuroscience, 30(5), 574-585. doi:10.1080/23273798.2014.959533.
Abstract
The present study investigated the priming effect of iconic signs in the mental lexicon of hearing adults. Non-signers and proficient British Sign Language (BSL) users took part in a cross-modal lexical decision task. The results indicate that iconic signs activated semantically related words in non-signers' lexicon. Activation occurred regardless of the type of referent because signs depicting actions and perceptual features of an object yielded the same response times. The pattern of activation was different in proficient signers because only action signs led to cross-modal activation. We suggest that non-signers process iconicity in signs in the same way as they do gestures, but after acquiring a sign language, there is a shift in the mechanisms used to process iconic manual structures -
Ozyurek, A., Furman, R., & Goldin-Meadow, S. (2015). On the way to language: Event segmentation in homesign and gesture. Journal of Child Language, 42, 64-94. doi:10.1017/S0305000913000512.
Abstract
Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages. -
Pappa, I., St Pourcain, B., Benke, K., Cavadino, A., Hakulinen, C., Nivard, M. G., Nolte, I. M., Tiesler, C. M. T., Bakermans-Kranenburg, M. J., Davies, G. E., Evans, D. M., Geoffroy, M.-C., Grallert, H., Groen-Blokhuis, M. M., Hudziak, J. J., Kemp, J. P., Keltikangas-Järvinen, L., McMahon, G., Mileva-Seitz, V. R., Motazedi, E., Power, C., Raitakari, O. T., Ring, S. M., Rivadeneira, F., Rodriguez, A., Scheet, P. A., Seppälä, I., Snieder, H., Standl, M., Thiering, E., Timpson, N. J., Veenstra, R., Velders, F. P., Whitehouse, A. J. O., Smith, G. D., Heinrich, J., Hypponen, E., Lehtimäki, T., Middeldorp, C. M., Oldehinkel, A. J., Pennell, C. E., Boomsma, D. I., & Tiemeier, H. (2016). A genome-wide approach to children's aggressive behavior: The EAGLE consortium. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 171(5), 562-572. doi:10.1002/ajmg.b.32333.
Abstract
Individual differences in aggressive behavior emerge in early childhood and predict persisting behavioral problems and disorders. Studies of antisocial and severe aggression in adulthood indicate substantial underlying biology. However, little attention has been given to genome-wide approaches of aggressive behavior in children. We analyzed data from nine population-based studies and assessed aggressive behavior using well-validated parent-reported questionnaires. This is the largest sample exploring children's aggressive behavior to date (N = 18,988), with measures in two developmental stages (N = 15,668 early childhood and N = 16,311 middle childhood/early adolescence). First, we estimated the additive genetic variance of children's aggressive behavior based on genome-wide SNP information, using genome-wide complex trait analysis (GCTA). Second, genetic associations within each study were assessed using a quasi-Poisson regression approach, capturing the highly right-skewed distribution of aggressive behavior. Third, we performed meta-analyses of genome-wide associations for both the total age-mixed sample and the two developmental stages. Finally, we performed a gene-based test using the summary statistics of the total sample. GCTA quantified variance tagged by common SNPs (10–54%). The meta-analysis of the total sample identified one region in chromosome 2 (2p12) at near genome-wide significance (top SNP rs11126630, P = 5.30 × 10⁻⁸). The separate meta-analyses of the two developmental stages revealed suggestive evidence of association at the same locus. The gene-based analysis indicated association of variation within AVPR1A with aggressive behavior. We conclude that common variants at 2p12 show suggestive evidence for association with childhood aggression. Replication of these initial findings is needed, and further studies should clarify its biological meaning.
Additional information
http://onlinelibrary.wiley.com/store/10.1002/ajmg.b.32333/asset/supinfo/ajmgb32… -
Peeters, D. (2015). A social and neurobiological approach to pointing in speech and gesture. PhD Thesis, Radboud University, Nijmegen.
Additional information
full text via Radboud Repository -
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.
Abstract
In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction. -
Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.
Abstract
A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity. -
Peeters, D. (2016). Processing consequences of onomatopoeic iconicity in spoken language comprehension. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1632-1647). Austin, TX: Cognitive Science Society.
Abstract
Iconicity is a fundamental feature of human language. However its processing consequences at the behavioral and neural level in spoken word comprehension are not well understood. The current paper presents the behavioral and electrophysiological outcome of an auditory lexical decision task in which native speakers of Dutch listened to onomatopoeic words and matched control words while their electroencephalogram was recorded. Behaviorally, onomatopoeic words were processed as quickly and accurately as words with an arbitrary mapping between form and meaning. Event-related potentials time-locked to word onset revealed a significant decrease in negative amplitude in the N2 and N400 components and a late positivity for onomatopoeic words in comparison to the control words. These findings advance our understanding of the temporal dynamics of iconic form-meaning mapping in spoken word comprehension and suggest interplay between the neural representations of real-world sounds and spoken words.
Additional information
https://mindmodeling.org/cogsci2016/papers/0288/index.html -
Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes: Université de Nantes.
Abstract
Comprehension of pointing gestures is fundamental to human communication. However, the neural mechanisms that subserve the integration of pointing gestures and speech in visual contexts in comprehension are unclear. Here we present the results of an fMRI study in which participants watched images of an actor pointing at an object while they listened to her referential speech. The use of a mismatch paradigm revealed that the semantic unification of pointing gesture and speech in a triadic context recruits left inferior frontal gyrus. Complementing previous findings, this suggests that left inferior frontal gyrus semantically integrates information across modalities and semiotic domains. -
Peeters, D., & Ozyurek, A. (2016). This and that revisited: A social and multimodal approach to spatial demonstratives. Frontiers in Psychology, 7: 222. doi:10.3389/fpsyg.2016.00222.
-
Perlman, M., Paul, J., & Lupyan, G. (2015). Congenitally deaf children generate iconic vocalizations to communicate magnitude. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.
Abstract
From an early age, people exhibit strong links between certain visual (e.g. size) and acoustic (e.g. duration) dimensions. Do people instinctively extend these crossmodal correspondences to vocalization? We examine the ability of congenitally deaf Chinese children and young adults (age M = 12.4 years, SD = 3.7 years) to generate iconic vocalizations to distinguish items with contrasting magnitude (e.g., big vs. small ball). Both deaf and hearing (M = 10.1 years, SD = 0.83 years) participants produced longer, louder vocalizations for greater magnitude items. However, only hearing participants used pitch—higher pitch for greater magnitude – which counters the hypothesized, innate size “frequency code”, but fits with Mandarin language and culture. Thus our results show that the translation of visible magnitude into the duration and intensity of vocalization transcends auditory experience, whereas the use of pitch appears more malleable to linguistic and cultural influence. -
Perlman, M., Clark, N., & Falck, M. J. (2015). Iconic prosody in story reading. Cognitive Science, 39(6), 1348-1368. doi:10.1111/cogs.12190.
Abstract
Recent experiments have shown that people iconically modulate their prosody corresponding with the meaning of their utterance (e.g., Shintel et al., 2006). This article reports findings from a story reading task that expands the investigation of iconic prosody to abstract meanings in addition to concrete ones. Participants read stories that contrasted along concrete and abstract semantic dimensions of speed (e.g., a fast drive, slow career progress) and size (e.g., a small grasshopper, an important contract). Participants read fast stories at a faster rate than slow stories, and big stories with a lower pitch than small stories. The effect of speed was distributed across the stories, including portions that were identical across stories, whereas the size effect was localized to size-related words. Overall, these findings enrich the documentation of iconicity in spoken language and bear on our understanding of the relationship between gesture and speech. -
Perlman, M., Dale, R., & Lupyan, G. (2015). Iconicity can ground the creation of vocal symbols. Royal Society Open Science, 2: 150152. doi:10.1098/rsos.150152.
Abstract
Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative ‘vocal’ charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic ‘meaning template’ we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems. -
Perlman, M., & Clark, N. (2015). Learned vocal and breathing behavior in an enculturated gorilla. Animal Cognition, 18(5), 1165-1179. doi:10.1007/s10071-015-0889-6.
Abstract
We describe the repertoire of learned vocal and breathing-related behaviors (VBBs) performed by the enculturated gorilla Koko. We examined a large video corpus of Koko and observed 439 VBBs spread across 161 bouts. Our analysis shows that Koko exercises voluntary control over the performance of nine distinctive VBBs, which involve variable coordination of her breathing, larynx, and supralaryngeal articulators like the tongue and lips. Each of these behaviors is performed in the context of particular manual action routines and gestures. Based on these and other findings, we suggest that vocal learning and the ability to exercise volitional control over vocalization, particularly in a multimodal context, might have figured relatively early into the evolution of language, with some rudimentary capacity in place at the time of our last common ancestor with great apes. -
Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2015). Does space structure spatial language? A comparison of spatial expression across sign languages. Language, 91(3), 611-641.
Abstract
The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages has not been rigorously investigated by systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of figure-ground (e.g. cup on table) and figure-figure (e.g. cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e. unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has been previously assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language -
Perniss, P. M., Ozyurek, A., & Morgan, G. (2015). The Influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture. Topics in Cognitive Science, 7(1), 2-11. doi:10.1111/tops.12127.
Abstract
For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. -
Perniss, P. M., Ozyurek, A., & Morgan, G. (Eds.). (2015). The influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture [Special Issue]. Topics in Cognitive Science, 7(1). doi:10.1111/tops.12113. -
Perniss, P. M., & Ozyurek, A. (2015). Visible cohesion: A comparison of reference tracking in sign, speech, and co-speech gesture. Topics in Cognitive Science, 7(1), 36-60. doi:10.1111/tops.12122.
Abstract
Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture -
Perry, L. K., Perlman, M., & Lupyan, G. (2015). Iconicity in English and Spanish and Its Relation to Lexical Category and Age of Acquisition. PLoS One, 10(9): e0137147. doi:10.1371/journal.pone.0137147.
Abstract
Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages.
Additional information
1536057.zip -
Perry, L., Perlman, M., & Lupyan, G. (2015). Iconicity in English vocabulary and its relation to toddlers’ word learning. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.
Abstract
Scholars have documented substantial classes of iconic vocabulary in many non-Indo-European languages. In comparison, Indo-European languages like English are assumed to be arbitrary outside of a small number of onomatopoeic words. In three experiments, we asked English speakers to rate the iconicity of words from the MacArthur-Bates Communicative Developmental Inventory. We found English—contrary to common belief—exhibits iconicity that correlates with age of acquisition and differs across lexical classes. Words judged as most iconic are learned earlier, in accord with findings that iconic words are easier to learn. We also find that adjectives and verbs are more iconic than nouns, supporting the idea that iconicity provides an extra cue in learning more difficult abstract meanings. Our results provide new evidence for a relationship between iconicity and word learning and suggest iconicity may be a more pervasive property of spoken languages than previously thought. -
Peter, M., Chang, F., Pine, J. M., Blything, R., & Rowland, C. F. (2015). When and how do children develop knowledge of verb argument structure? Evidence from verb bias effects in a structural priming task. Journal of Memory and Language, 81, 1-15. doi:10.1016/j.jml.2014.12.002.
Abstract
In this study, we investigated when children develop adult-like verb–structure links, and examined two mechanisms, associative and error-based learning, that might explain how these verb–structure links are learned. Using structural priming, we tested children’s and adults’ ability to use verb–structure links in production in three ways; by manipulating: (1) verb overlap between prime and target, (2) target verb bias, and (3) prime verb bias. Children (aged 3–4 and 5–6 years old) and adults heard and produced double object dative (DOD) and prepositional object dative (PD) primes with DOD- and PD-biased verbs. Although all age groups showed significant evidence of structural priming, only adults showed increased priming when there was verb overlap between prime and target sentences (the lexical boost). The effect of target verb bias also grew with development. Critically, however, the effect of prime verb bias on the size of the priming effect (prime surprisal) was larger in children than in adults, suggesting that verb–structure links are present at the earliest age tested. Taken as a whole, the results suggest that children begin to acquire knowledge about verb-argument structure preferences early in acquisition, but that the ability to use adult-like verb bias in production gradually improves over development. We also argue that this pattern of results is best explained by a learning model that uses an error-based learning mechanism. -
Petras, K., Ten Oever, S., & Jansma, B. M. (2016). The effect of distance on moral engagement: Event related potentials and alpha power are sensitive to perspective in a virtual shooting task. Frontiers in Psychology, 6: 2008. doi:10.3389/fpsyg.2015.02008.
Abstract
In a shooting video game we investigated whether increased distance reduces moral conflict. We measured and analyzed the event related potential (ERP), including the N2 component, which has previously been linked to cognitive conflict from competing decision tendencies. In a modified Go/No-go task designed to trigger moral conflict participants had to shoot suddenly appearing human like avatars in a virtual reality scene. The scene was seen either from an ego perspective with targets appearing directly in front of the participant or from a bird's view, where targets were seen from above and more distant. To control for low level visual features, we added a visually identical control condition, where the instruction to shoot was replaced by an instruction to detect. ERP waveforms showed differences between the two tasks as early as in the N1 time-range, with higher N1 amplitudes for the close perspective in the shoot task. Additionally, we found that pre-stimulus alpha power was significantly decreased in the ego, compared to the bird's view only for the shoot but not for the detect task. In the N2 time window, we observed main amplitude effects for response (No-go > Go) and distance (ego > bird perspective) but no interaction with task type (shoot vs. detect). We argue that the pre-stimulus and N1 effects can be explained by reduced attention and arousal in the distance condition when people are instructed to shoot. These results indicate a reduced moral engagement for increased distance. The lack of interaction in the N2 across tasks suggests that at that time point response execution dominates. We discuss potential implications for real life shooting situations, especially considering recent developments in drone shootings which are per definition of a distant view. -
Pettigrew, K. A., Fajutrao Valles, S. F., Moll, K., Northstone, K., Ring, S., Pennell, C., Wang, C., Leavett, R., Hayiou-Thomas, M. E., Thompson, P., Simpson, N. H., Fisher, S. E., The SLI Consortium, Whitehouse, A. J., Snowling, M. J., Newbury, D. F., & Paracchini, S. (2015). Lack of replication for the myosin-18B association with mathematical ability in independent cohorts. Genes, Brain and Behavior, 14(4), 369-376. doi:10.1111/gbb.12213.
Abstract
Twin studies indicate that dyscalculia (or mathematical disability) is caused partly by a genetic component, which is yet to be understood at the molecular level. Recently, a coding variant (rs133885) in the myosin-18B gene was shown to be associated with mathematical abilities with a specific effect among children with dyslexia. This association represents one of the most significant genetic associations reported to date for mathematical abilities and the only one reaching genome-wide statistical significance.
We conducted a replication study in different cohorts to assess the effect of rs133885 on maths-related measures. The study was conducted primarily using the Avon Longitudinal Study of Parents and Children (ALSPAC) (N = 3819). We tested additional cohorts including the York Cohort, the Specific Language Impairment Consortium (SLIC) cohort and the Raine Cohort, and stratified them for a definition of dyslexia whenever possible.
We did not observe any associations between rs133885 in myosin-18B and mathematical abilities among individuals with dyslexia or in the general population. Our results suggest that the myosin-18B variant is unlikely to be a main factor contributing to mathematical abilities. -
Piai, V., Roelofs, A., Rommers, J., & Maris, E. (2015). Beta oscillations reflect memory and motor aspects of spoken word production. Human brain mapping, 36(7), 2767-2780. doi:10.1002/hbm.22806.
Abstract
Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in alpha-beta desynchronization, the memory aspects have remained poorly understood. Using magnetoencephalography, we investigated the neurophysiological signature of not only motor but also memory aspects of spoken-word production. Participants named or judged pictures after reading sentences. To probe the involvement of the memory component, we manipulated sentence context. Sentence contexts were either constraining or nonconstraining toward the final word, presented as a picture. In the judgment task, participants indicated with a left-hand button press whether the picture was expected given the sentence. In the naming task, they named the picture. Naming and judgment were faster with constraining than nonconstraining contexts. Alpha-beta desynchronization was found for constraining relative to nonconstraining contexts pre-picture presentation. For the judgment task, beta desynchronization was observed in left posterior brain areas associated with conceptual processing and in right motor cortex. For the naming task, in addition to the same left posterior brain areas, beta desynchronization was found in left anterior and posterior temporal cortex (associated with memory aspects), left inferior frontal cortex, and bilateral ventral premotor cortex (associated with motor aspects). These results suggest that memory and motor components of spoken word production are reflected in overlapping brain oscillations in the beta band.
Additional information
hbm22806-sup-0001-suppinfo1.docx -
Piai, V., Roelofs, A., & Roete, I. (2015). Semantic interference in picture naming during dual-task performance does not vary with reading ability. Quarterly Journal of Experimental Psychology, 68(9), 1758-68. doi:10.1080/17470218.2014.985689.
Abstract
Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may perhaps be due to better reading skill of participants in these than in the other studies. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill. -
Plomp, G., Hervais-Adelman, A., Astolfi, L., & Michel, C. M. (2015). Early recurrence and ongoing parietal driving during elementary visual processing. Scientific Reports, 5: 18733. doi:10.1038/srep18733.
Abstract
Visual stimuli quickly activate a broad network of brain areas that often show reciprocal structural connections between them. Activity at short latencies (<100 ms) is thought to represent a feed-forward activation of widespread cortical areas, but fast activation combined with reciprocal connectivity between areas in principle allows for two-way, recurrent interactions to occur at short latencies after stimulus onset. Here we combined EEG source-imaging and Granger-causal modeling with high temporal resolution to investigate whether recurrent and top-down interactions between visual and attentional brain areas can be identified and distinguished at short latencies in humans. We investigated the directed interactions between widespread occipital, parietal and frontal areas that we localized within participants using fMRI. The connectivity results showed two-way interactions between area MT and V1 already at short latencies. In addition, the results suggested a large role for lateral parietal cortex in coordinating visual activity that may be understood as an ongoing top-down allocation of attentional resources. Our results support the notion that indirect pathways allow early, evoked driving from MT to V1 to highlight spatial locations of motion transients, while influence from parietal areas is continuously exerted around stimulus onset, presumably reflecting task-related attentional processes. -
Poletiek, F. H., & Olfers, K. J. F. (2016). Authentication by the crowd: How lay students identify the style of a 17th century artist. CODART e-Zine, 8. Retrieved from http://ezine.codart.nl/17/issue/57/artikel/19-21-june-madrid/?id=349#!/page/3.
-
Poletiek, F. H., Fitz, H., & Bocanegra, B. R. (2016). What baboons can (not) tell us about natural language grammars. Cognition, 151, 108-112. doi:10.1016/j.cognition.2015.04.016.
Abstract
Rey et al. (2012) present data from a study with baboons that they interpret in support of the idea that center-embedded structures in human language have their origin in low level memory mechanisms and associative learning. Critically, the authors claim that the baboons showed a behavioral preference that is consistent with center-embedded sequences over other types of sequences. We argue that the baboons’ response patterns suggest that two mechanisms are involved: first, they can be trained to associate a particular response with a particular stimulus, and, second, when faced with two conditioned stimuli in a row, they respond to the most recent one first, copying behavior they had been rewarded for during training. Although Rey et al. (2012) ‘experiment shows that the baboons’ behavior is driven by low level mechanisms, it is not clear how the animal behavior reported, bears on the phenomenon of Center Embedded structures in human syntax. Hence, (1) natural language syntax may indeed have been shaped by low level mechanisms, and (2) the baboons’ behavior is driven by low level stimulus response learning, as Rey et al. propose. But is the second evidence for the first? We will discuss in what ways this study can and cannot give evidential value for explaining the origin of Center Embedded recursion in human grammar. More generally, their study provokes an interesting reflection on the use of animal studies in order to understand features of the human linguistic system. -
Poort, E. D., Warren, J. E., & Rodd, J. M. (2016). Recent experience with cognates and interlingual homographs in one language affects subsequent processing in another language. Bilingualism: Language and Cognition, 19(1), 206-212. doi:10.1017/S1366728915000395.
Abstract
This experiment shows that recent experience in one language influences subsequent processing of the same word-forms in a different language. Dutch–English bilinguals read Dutch sentences containing Dutch–English cognates and interlingual homographs, which were presented again 16 minutes later in isolation in an English lexical decision task. Priming produced faster responses for the cognates but slower responses for the interlingual homographs. These results show that language switching can influence bilingual speakers at the level of individual words, and require models of bilingual word recognition (e.g., BIA+) to allow access to word meanings to be modulated by recent experience. -
St Pourcain, B., Haworth, C. M. A., Davis, O. S. P., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ronald, A., Price, T., Meaburn, E., Ring, S. M., Golding, J., Hakonarson, H., Plomin, R., & Davey Smith, G. (2015). Heritability and genome-wide analyses of problematic peer relationships during childhood and adolescence. Human Genetics, 134(6), 539-551. doi:10.1007/s00439-014-1514-5.
Abstract
Peer behaviour plays an important role in the development of social adjustment, though little is known about its genetic architecture. We conducted a twin study combined with a genome-wide complex trait analysis (GCTA) and a genome-wide screen to characterise genetic influences on problematic peer behaviour during childhood and adolescence. This included a series of longitudinal measures (parent-reported Strengths-and-Difficulties Questionnaire) from a UK population-based birth cohort (ALSPAC, 4–17 years), and a UK twin sample (TEDS, 4–11 years). Longitudinal twin analysis (TEDS; N ≤ 7,366 twin pairs) showed that peer problems in childhood are heritable (4–11 years, 0.60 < twin-h² ≤ 0.71) but genetically heterogeneous from age to age (4–11 years, twin-rg = 0.30). GCTA (ALSPAC: N ≤ 5,608, TEDS: N ≤ 2,691) furthermore provided little support for the contribution of measured common genetic variants during childhood (4–12 years, 0.02 < GCTA-h²(Meta) ≤ 0.11), though these influences become stronger in adolescence (13–17 years, 0.14 < GCTA-h²(ALSPAC) ≤ 0.27). A subsequent cross-sectional genome-wide screen in ALSPAC (N ≤ 6,000) focussed on peer problems with the highest GCTA-heritability (10, 13 and 17 years, 0.0002 < GCTA-P ≤ 0.03). Single-variant signals (P ≤ 10⁻⁵) were followed up in TEDS (N ≤ 2,835, 9 and 11 years) and, in search of autism quantitative trait loci, explored within two autism samples (AGRE: N pedigrees = 793; ACC: N cases = 1,453 / N controls = 7,070). There was, however, no evidence for association in TEDS and little evidence for an overlap with the autistic continuum. In summary, our findings suggest that problematic peer relationships are heritable but genetically complex and heterogeneous from age to age, with an increase in common measurable genetic variation during adolescence. -
Pouw, W., Van Gog, T., Zwaan, R. A., & Paas, F. (2016). Augmenting instructional animations with a body analogy to help children learn about physical systems. Frontiers in Psychology, 7: 860. doi:10.3389/fpsyg.2016.00860.
Abstract
We investigated whether augmenting instructional animations with a body analogy (BA) would improve 10- to 13-year-old children’s learning about class-1 levers. Children with a lower level of general math skill who learned with an instructional animation that provided a BA of the physical system, showed higher accuracy on a lever problem-solving reaction time task than children studying the instructional animation without this BA. Additionally, learning with a BA led to a higher speed–accuracy trade-off during the transfer task for children with a lower math skill, which provided additional evidence that especially this group is likely to be affected by learning with a BA. However, overall accuracy and solving speed on the transfer task was not affected by learning with or without this BA. These results suggest that providing children with a BA during animation study provides a stepping-stone for understanding mechanical principles of a physical system, which may prove useful for instructional designers. Yet, because the BA does not seem effective for all children, nor for all tasks, the degree of effectiveness of body analogies should be studied further. Future research, we conclude, should be more sensitive to the necessary degree of analogous mapping between the body and physical systems, and whether this mapping is effective for reasoning about more complex instantiations of such physical systems. -
Pouw, W., Eielts, C., Van Gog, T., Zwaan, R. A., & Paas, F. (2016). Does (non‐)meaningful sensori‐motor engagement promote learning with animated physical systems? Mind, Brain and Education, 10(2), 91-104. doi:10.1111/mbe.12105.
Abstract
Previous research indicates that sensori-motor experience with physical systems can have a positive effect on learning. However, it is not clear whether this effect is caused by mere bodily engagement or the intrinsically meaningful information that such interaction affords in performing the learning task. We investigated (N = 74), through the use of a Wii Balance Board, whether different forms of physical engagement that were either meaningfully, non-meaningfully, or minimally related to the learning content would be beneficial (or detrimental) to learning about the workings of seesaws from instructional animations. The results were inconclusive, indicating that motoric competency on lever problem solving did not significantly differ between conditions, nor were response speed and transfer performance affected. These findings suggest that adults' implicit and explicit knowledge about physical systems is stable and not easily affected by (contradictory) sensori-motor experiences. Implications for embodied learning are discussed. -
Pouw, W., & Hostetter, A. B. (2016). Gesture as predictive action. Reti, Saperi, Linguaggi: Italian Journal of Cognitive Sciences, 3, 57-80. doi:10.12832/83918.
Abstract
Two broad approaches have dominated the literature on the production of speech-accompanying gestures. On the one hand, there are approaches that aim to explain the origin of gestures by specifying the mental processes that give rise to them. On the other, there are approaches that aim to explain the cognitive function that gestures have for the gesturer or the listener. In the present paper we aim to reconcile both approaches in one single perspective that is informed by a recent sea change in cognitive science, namely, Predictive Processing Perspectives (PPP; Clark 2013b; 2015). We start with the idea put forth by the Gesture as Simulated Action (GSA) framework (Hostetter, Alibali 2008). Under this view, the mental processes that give rise to gesture are re-enactments of sensori-motor experiences (i.e., simulated actions). We show that such anticipatory sensori-motor states and the constraints put forth by the GSA framework can be understood as top-down kinesthetic predictions that function in a broader predictive machinery as proposed by PPP. By establishing this alignment, we aim to show how gestures come to fulfill a genuine cognitive function above and beyond the mental processes that give rise to gesture. -
Pouw, W., Myrto-Foteini, M., Van Gog, T., & Paas, F. (2016). Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity. Cognitive Processing, 17, 269-277. doi:10.1007/s10339-016-0757-6.
Abstract
Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing. -
Pouw, W., & Looren de Jong, H. (2015). Rethinking situated and embodied social psychology. Theory and Psychology, 25(4), 411-433. doi:10.1177/0959354315585661.
Abstract
This article aims to explore the scope of a Situated and Embodied Social Psychology (ESP). At first sight, social cognition seems embodied cognition par excellence. Social cognition is first and foremost a supra-individual, interactive, and dynamic process (Semin & Smith, 2013). Radical approaches in Situated/Embodied Cognitive Science (Enactivism) claim that social cognition consists in an emergent pattern of interaction between a continuously coupled organism and the (social) environment; it rejects representationalist accounts of cognition (Hutto & Myin, 2013). However, mainstream ESP (Barsalou, 1999, 2008) still takes a rather representation-friendly approach that construes embodiment in terms of specific bodily formatted representations used (activated) in social cognition. We argue that mainstream ESP suffers from vestiges of theoretical solipsism, which may be resolved by going beyond the internalistic spirit that haunts mainstream ESP today. -
Ramenzoni, V. C., & Liszkowski, U. (2016). The social reach: 8-month-olds reach for unobtainable objects in the presence of another person. Psychological Science, 27(9), 1278-1285. doi:10.1177/0956797616659938.
Abstract
Linguistic communication builds on prelinguistic communicative gestures, but the ontogenetic origins and complexities of these prelinguistic gestures are not well known. The current study tested whether 8-month-olds, who do not yet point communicatively, use instrumental actions for communicative purposes. In two experiments, infants reached for objects when another person was present and when no one else was present; the distance to the objects was varied. When alone, the infants reached for objects within their action boundaries and refrained from reaching for objects out of their action boundaries; thus, they knew about their individual action efficiency. However, when a parent (Experiment 1) or a less familiar person (Experiment 2) sat next to them, the infants selectively increased their reaching for out-of-reach objects. The findings reveal that before they communicate explicitly through pointing gestures, infants use instrumental actions with the apparent expectation that a partner will adopt and complete their goals. -
Ravignani, A., & Sonnweber, R. (2015). Measuring teaching through hormones and time series analysis: Towards a comparative framework. Behavioral and Brain Sciences, 38, 40-41. doi:10.1017/S0140525X14000806.
Abstract
In response to: How to learn about teaching: An evolutionary framework for the study of teaching behavior in humans and other animals
Arguments about the nature of teaching have depended principally on naturalistic observation and some experimental work. Additional measurement tools, physiological variations and manipulations can provide insights into the intrinsic structure and state of the participants better than verbal descriptions alone: namely, time-series analysis, and examination of the role of hormones and neuromodulators in the behaviors of teacher and pupil. -
Ravignani, A., Westphal-Fitch, G., Aust, U., Schlumpp, M. M., & Fitch, W. T. (2015). More than one way to see it: Individual heuristics in avian visual computation. Cognition, 143, 13-24. doi:10.1016/j.cognition.2015.05.021.
Abstract
Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species’ ability to process pattern classes or different species’ performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds’ choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally.
Additional information
Supplementary data -
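To illustrate the kind of regular-expression model selection described in the Ravignani, Westphal-Fitch et al. entry above, here is a minimal Python sketch of how candidate response heuristics could be encoded as regular expressions and scored against an individual's choices. The heuristic names, toy stimuli and scoring rule are invented for illustration; this is not the authors' code or stimulus set.

```python
# Hedged sketch: compare candidate heuristics, each encoded as a regular
# expression, against one (hypothetical) individual's yes/no judgements.
import re

# Hypothetical candidate heuristics: a string is "accepted" if the regex matches.
heuristics = {
    "edge_bigram_AB": re.compile(r"^AB|AB$"),   # learned bigram anchored at an edge
    "starts_with_A":  re.compile(r"^A"),        # simple anchor strategy
    "contains_BB":    re.compile(r"BB"),        # local transition cue
}

# Hypothetical trial data: (stimulus string, individual's accept/reject response).
trials = [("ABAB", True), ("BABA", False), ("AABB", True), ("BBAA", False)]

def score(pattern, data):
    """Proportion of trials on which the heuristic predicts the response."""
    return sum(bool(pattern.search(s)) == resp for s, resp in data) / len(data)

for name, pattern in heuristics.items():
    print(f"{name}: {score(pattern, trials):.2f}")
best = max(heuristics, key=lambda name: score(heuristics[name], trials))
print("Best-fitting heuristic:", best)
```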
Ravignani, A., Delgado, T., & Kirby, S. (2016). Musical evolution in the lab exhibits rhythmic universals. Nature Human Behaviour, 1: 0007. doi:10.1038/s41562-016-0007.
Abstract
Music exhibits some cross-cultural similarities, despite its variety across the world. Evidence from a broad range of human cultures suggests the existence of musical universals [1], here defined as strong regularities emerging across cultures above chance. In particular, humans demonstrate a general proclivity for rhythm [2], although little is known about why music is particularly rhythmic and why the same structural regularities are present in rhythms around the world. We empirically investigate the mechanisms underlying musical universals for rhythm, showing how music can evolve culturally from randomness. Human participants were asked to imitate sets of randomly generated drumming sequences and their imitation attempts became the training set for the next participants in independent transmission chains. By perceiving and imitating drumming sequences from each other, participants turned initially random sequences into rhythmically structured patterns. Drumming patterns developed into rhythms that are more structured, easier to learn, distinctive for each experimental cultural tradition and characterized by all six statistical universals found among world music [1]; the patterns appear to be adapted to human learning, memory and cognition. We conclude that musical rhythm partially arises from the influence of human cognitive and biological biases on the process of cultural evolution. -
Ravignani, A. (2015). Evolving perceptual biases for antisynchrony: A form of temporal coordination beyond synchrony. Frontiers in Neuroscience, 9: 339. doi:10.3389/fnins.2015.00339.
-
Ravignani, A., & Cook, P. F. (2016). The evolutionary biology of dance without frills. Current Biology, 26(19), R878-R879. doi:10.1016/j.cub.2016.07.076.
Abstract
Recently psychologists have taken up the question of whether dance is reliant on unique human adaptations, or whether it is rooted in neural and cognitive mechanisms shared with other species [1, 2]. In its full cultural complexity, human dance clearly has no direct analog in animal behavior. Most definitions of dance include the consistent production of movement sequences timed to an external rhythm. While not sufficient for dance, modes of auditory-motor timing, such as synchronization and entrainment, are experimentally tractable constructs that may be analyzed and compared between species. In an effort to assess the evolutionary precursors to entrainment and social features of human dance, Laland and colleagues [2] have suggested that dance may be an incidental byproduct of adaptations supporting vocal or motor imitation — referred to here as the ‘imitation and sequencing’ hypothesis. In support of this hypothesis, Laland and colleagues rely on four convergent lines of evidence drawn from behavioral and neurobiological research on dance behavior in humans and rhythmic behavior in other animals. Here, we propose a less cognitive, more parsimonious account for the evolution of dance. Our ‘timing and interaction’ hypothesis suggests that dance is scaffolded off of broadly conserved timing mechanisms allowing both cooperative and antagonistic social coordination.
Additional information
Experimental Procedures and Two Tables -
Ravignani, A., Fitch, W. T., Hanke, F. D., Heinrich, T., Hurgitsch, B., Kotz, S. A., Scharff, C., Stoeger, A. S., & de Boer, B. (2016). What pinnipeds have to say about human speech, music, and the evolution of rhythm. Frontiers in Neuroscience, 10: 274. doi:10.3389/fnins.2016.00274.
Abstract
Research on the evolution of human speech and music benefits from hypotheses and data generated in a number of disciplines. The purpose of this article is to illustrate the high relevance of pinniped research for the study of speech, musical rhythm, and their origins, bridging and complementing current research on primates and birds. We briefly discuss speech, vocal learning, and rhythm from an evolutionary and comparative perspective. We review the current state of the art on pinniped communication and behavior relevant to the evolution of human speech and music, showing interesting parallels to hypotheses on rhythmic behavior in early hominids. We suggest future research directions in terms of species to test and empirical data needed. -
Raviv, L., & Arnon, I. (2016). The developmental trajectory of children's statistical learning abilities. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1469-1474). Austin, TX: Cognitive Science Society.
Abstract
Infants, children and adults are capable of implicitly extracting regularities from their environment through statistical learning (SL). SL is present from early infancy and found across tasks and modalities, raising questions about the domain generality of SL. However, little is known about its developmental trajectory: Is SL a fully developed capacity in infancy, or does it improve with age, like other cognitive skills? While SL is well established in infants and adults, only a few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, despite its postulated role in language learning, no study has examined the developmental trajectory of auditory SL throughout childhood. Here, we conduct a large-scale study of children's auditory SL across a wide age range (5-12y, N=115). Results show that auditory SL does not change much across development. We discuss implications for modality-based differences in SL and for its role in language acquisition.
Additional information
https://mindmodeling.org/cogsci2016/papers/0260/index.html -
Raviv, L., & Arnon, I. (2016). Language evolution in the lab: The case of child learners. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1643-1648). Austin, TX: Cognitive Science Society.
Abstract
Recent work suggests that cultural transmission can lead to the emergence of linguistic structure as speakers’ weak individual biases become amplified through iterated learning. However, to date, no published study has demonstrated a similar emergence of linguistic structure in children. This gap is problematic given that languages are mainly learned by children and that adults may bring existing linguistic biases to the task. Here, we conduct a large-scale study of iterated language learning in both children and adults, using a novel, child-friendly paradigm. The results show that while children make more mistakes overall, their languages become more learnable and show learnability biases similar to those of adults. Child languages did not show a significant increase in linguistic structure over time, but consistent mappings between meanings and signals did emerge on many occasions, as found with adults. This provides the first demonstration that cultural transmission affects the languages children and adults produce similarly.
Additional information
https://mindmodeling.org/cogsci2016/papers/0289/index.html -
Richter, N., Tiddeman, B., & Haun, D. (2016). Social Preference in Preschoolers: Effects of Morphological Self-Similarity and Familiarity. PLoS One, 11(1): e0145443. doi:10.1371/journal.pone.0145443.
Abstract
Adults prefer to interact with others that are similar to themselves. Even slight facial self-resemblance can elicit trust towards strangers. Here we investigate if preschoolers at the age of 5 years already use facial self-resemblance when they make social judgments about others. We found that, in the absence of any additional knowledge about prospective peers, children preferred those who look subtly like themselves over complete strangers. Thus, subtle morphological similarities trigger social preferences well before adulthood.
Additional information
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0145443#sec014 -
Rivero, O., Selten, M. M., Sich, S., Popp, S., Bacmeister, L., Amendola, E., Negwer, M., Schubert, D., Proft, F., Kiser, D., Schmitt, A. G., Gross, C., Kolk, S. M., Strekalova, T., van den Hove, D., Resink, T. J., Nadif Kasri, N., & Lesch, K. P. (2015). Cadherin-13, a risk gene for ADHD and comorbid disorders, impacts GABAergic function in hippocampus and cognition. Translational Psychiatry, 5: e655. doi:10.1038/tp.2015.152.
Abstract
Cadherin-13 (CDH13), a unique glycosylphosphatidylinositol-anchored member of the cadherin family of cell adhesion molecules, has been identified as a risk gene for attention-deficit/hyperactivity disorder (ADHD) and various comorbid neurodevelopmental and psychiatric conditions, including depression, substance abuse, autism spectrum disorder and violent behavior, while the mechanism whereby CDH13 dysfunction influences pathogenesis of neuropsychiatric disorders remains elusive. Here we explored the potential role of CDH13 in the inhibitory modulation of brain activity by investigating synaptic function of GABAergic interneurons. Cellular and subcellular distribution of CDH13 was analyzed in the murine hippocampus and a mouse model with a targeted inactivation of Cdh13 was generated to evaluate how CDH13 modulates synaptic activity of hippocampal interneurons and behavioral domains related to psychopathologic (endo)phenotypes. We show that CDH13 expression in the cornu ammonis (CA) region of the hippocampus is confined to distinct classes of interneurons. Specifically, CDH13 is expressed by numerous parvalbumin and somatostatin-expressing interneurons located in the stratum oriens, where it localizes to both the soma and the presynaptic compartment. Cdh13−/− mice show an increase in basal inhibitory, but not excitatory, synaptic transmission in CA1 pyramidal neurons. Associated with these alterations in hippocampal function, Cdh13−/− mice display deficits in learning and memory. Taken together, our results indicate that CDH13 is a negative regulator of inhibitory synapses in the hippocampus, and provide insights into how CDH13 dysfunction may contribute to the excitatory/inhibitory imbalance observed in neurodevelopmental disorders, such as ADHD and autism. -
Roberts, S. G., & Verhoef, T. (2016). Double-blind reviewing at EvoLang 11 reveals gender bias. Journal of Language Evolution, 1(2), 163-167. doi:10.1093/jole/lzw009.
Abstract
The impact of introducing double-blind reviewing in the most recent Evolution of Language conference is assessed. The ranking of papers is compared between EvoLang 11 (double-blind review) and EvoLang 9 and 10 (single-blind review). Main effects were found for first author gender by conference. The results mirror some findings in the literature on the effects of double-blind review, suggesting that it helps reduce a bias against female authors.
Additional information
SI.pdf -
Roberts, S. G. (2015). Commentary: Large-scale psychological differences within China explained by rice vs. wheat agriculture. Frontiers in Psychology, 6: 950. doi:10.3389/fpsyg.2015.00950.
Abstract
Talhelm et al. (2014) test the hypothesis that activities which require more intensive collaboration foster more collectivist cultures. They demonstrate that a measure of collectivism correlates with the proportion of cultivated land devoted to rice paddies, which require more work to grow and maintain than other grains. The data come from individual measures of provinces in China. While the data is analyzed carefully, one aspect that is not directly controlled for is the historical relations between these provinces. Spurious correlations can occur between cultural traits that are inherited from ancestor cultures or borrowed through contact, a phenomenon commonly known as Galton's problem (Roberts and Winters, 2013). Effectively, Talhelm et al. treat the measures of each province as independent samples, while in reality both farming practices (e.g., Renfrew, 1997; Diamond and Bellwood, 2003; Lee and Hasegawa, 2011) and cultural values (e.g., Currie et al., 2010; Bulbulia et al., 2013) can be inherited or borrowed. This means that the data may be composed of non-independent points, inflating the apparent correlation between rice growing and collectivism. The correlation between farming practices and collectivism may be robust, but this cannot be known without an empirical control for the relatedness of the samples. Talhelm et al. do discuss this problem in the supplementary materials of their paper. They acknowledge that a phylogenetic analysis could be used to control for relatedness, but that at the time of publication there were no genetic or linguistic trees of descent which were detailed enough to suit this purpose. In this commentary I would like to make two points. First, since the original publication, researchers have created new linguistic trees that can provide the needed resolution. For example, the Glottolog phylogeny (Hammarström et al., 2015) has at least three levels of classification for the relevant varieties, though this does not have branch lengths (see also “reference” trees produced in List et al., 2014). Another recently published phylogeny uses lexical data to construct a phylogenetic tree for many language varieties within China (List et al., 2014). In this commentary I use these lexical data to estimate cultural contact between different provinces, and test whether these measures explain variation in rice farming practices. However, the second point is that Talhelm et al. focus on descent (vertical transmission), while it may be relevant to control for both descent and borrowing (horizontal transmission). In this case, all that is needed is some measure of cultural contact between groups, not necessarily a unified tree of descent. I use a second source of linguistic data to calculate simple distances between languages based directly on the lexicon. These distances reflect borrowing as well as descent. -
Roberts, S. G., Winters, J., & Chen, K. (2015). Future tense and economic decisions: Controlling for cultural evolution. PLoS One, 10(7): e0132145. doi:10.1371/journal.pone.0132145.
Abstract
A previous study by Chen demonstrates a correlation between languages that grammatically mark future events and their speakers' propensity to save, even after controlling for numerous economic and demographic factors. The implication is that languages which grammatically distinguish the present and the future may bias their speakers to distinguish them psychologically, leading to less future-oriented decision making. However, Chen's original analysis assumed languages are independent. This neglects the fact that languages are related, causing correlations to appear stronger than is warranted (Galton's problem). In this paper, we test the robustness of Chen's correlations to corrections for the geographic and historical relatedness of languages. While the question seems simple, the answer is complex. In general, the statistical correlation between the two variables is weaker when controlling for relatedness. When applying the strictest tests for relatedness, and when data is not aggregated across individuals, the correlation is not significant. However, the correlation did remain reasonably robust under a number of tests. We argue that any claims of synchronic patterns between cultural variables should be tested for spurious correlations, with the kinds of approaches used in this paper. However, experiments or case-studies would be more fruitful avenues for future research on this specific topic, rather than further large-scale cross-cultural correlational studies. -
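As a purely illustrative aside on the relatedness controls discussed in the Roberts, Winters & Chen entry above: one simple way to address Galton's problem is a permutation test in which the language-level predictor is shuffled only within language families, so that the null distribution respects historical relatedness. The sketch below uses invented toy data and hypothetical family labels; it is a hedged illustration of the general idea, not the authors' analysis.

```python
# Hedged sketch with invented data: within-family permutation test for a
# correlation between future-tense marking and savings behaviour.
import random
from statistics import mean

# (language family, grammatical future marking 0/1, savings propensity) -- all invented
data = [("Germanic", 1, 0.30), ("Germanic", 0, 0.45), ("Romance", 1, 0.25),
        ("Romance", 1, 0.28), ("Sinitic", 0, 0.50), ("Sinitic", 0, 0.47)]

families = [f for f, _, _ in data]
future = [x for _, x, _ in data]
savings = [y for _, _, y in data]

def correlation(xs, ys):
    """Pearson correlation, written out for transparency."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permute_within_families(values, families):
    """Shuffle values only among languages that share a family."""
    out = list(values)
    for fam in set(families):
        idx = [i for i, f in enumerate(families) if f == fam]
        shuffled = [values[i] for i in idx]
        random.shuffle(shuffled)
        for i, v in zip(idx, shuffled):
            out[i] = v
    return out

observed = correlation(future, savings)
null = [correlation(permute_within_families(future, families), savings)
        for _ in range(5000)]
p = mean(abs(r) >= abs(observed) for r in null)
print(f"observed r = {observed:.2f}, within-family permutation p = {p:.3f}")
```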
Roberts, S. G., Everett, C., & Blasi, D. (2015). Exploring potential climate effects on the evolution of human sound systems. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences [ICPhS 2015] Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 14-19). Glasgow: ICPhS.
Abstract
We suggest that it is now possible to conduct research on a topic which might be called evolutionary geophonetics. The main question is how the climate influences the evolution of language. This involves biological adaptations to the climate that may affect biases in production and perception; cultural evolutionary adaptations of the sounds of a language to climatic conditions; and influences of the climate on language diversity and contact. We discuss these ideas with special reference to a recent hypothesis that lexical tone is not adaptive in dry climates (Everett, Blasi & Roberts, 2015). -
Roberts, S. G., Torreira, F., & Levinson, S. C. (2015). The effects of processing and sequence organisation on the timing of turn taking: A corpus study. Frontiers in Psychology, 6: 509. doi:10.3389/fpsyg.2015.00509.
Abstract
The timing of turn taking in conversation is extremely rapid given the cognitive demands on speakers to comprehend, plan and execute turns in real time. Findings from psycholinguistics predict that the timing of turn taking is influenced by demands on processing, such as word frequency or syntactic complexity. An alternative view comes from the field of conversation analysis, which predicts that the rules of turn-taking and sequence organization may dictate the variation in gap durations (e.g. the functional role of each turn in communication). In this paper, we estimate the role of these two different kinds of factors in determining the speed of turn-taking in conversation. We use the Switchboard corpus of English telephone conversation, already richly annotated for syntactic structure, speech act sequences, and segmental alignment. To this we add further information including Floor Transfer Offset (the amount of time between the end of one turn and the beginning of the next), word frequency, concreteness, and surprisal values. We then apply a novel statistical framework ('random forests') to show that these two dimensions are interwoven together with indexical properties of the speakers as explanatory factors determining the speed of response. We conclude that an explanation of the timing of turn taking will require insights from both processing and sequence organisation.
Additional information
http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00509/abstract -
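As an illustrative aside on the random-forest approach mentioned in the Roberts, Torreira & Levinson entry above, the sketch below fits a random-forest regression of turn-transition timing on a few processing and sequence-organisation predictors and prints variable importances. All variable names and data are invented; this is a hedged sketch of the general technique, not the authors' analysis pipeline.

```python
# Hedged sketch with invented data: random-forest regression of turn timing
# (Floor Transfer Offset) on processing and sequence-type predictors.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "word_frequency": rng.normal(size=n),               # processing predictor
    "surprisal": rng.normal(size=n),                     # processing predictor
    "is_question_answer_pair": rng.integers(0, 2, n),    # sequence-organisation predictor
    "speaker_id": rng.integers(0, 10, n),                # indexical property
})
# Toy outcome: Floor Transfer Offset in ms, loosely tied to two of the predictors.
df["fto_ms"] = (200 - 50 * df["surprisal"]
                - 100 * df["is_question_answer_pair"]
                + rng.normal(scale=80, size=n))

X, y = df.drop(columns="fto_ms"), df["fto_ms"]
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, importance in sorted(zip(X.columns, forest.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:>26s}  importance = {importance:.2f}")
```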
Roberts, S. G., Cuskley, C., McCrohon, L., Barceló-Coblijn, L., Feher, O., & Verhoef, T. (Eds.). (2016). The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). doi:10.17617/2.2248195. -
Robinson, E. B., St Pourcain, B., Anttila, V., Kosmicki, J. A., Bulik-Sullivan, B., Grove, J., Maller, J., Samocha, K. E., Sanders, S. J., Ripke, S., Martin, J., Hollegaard, M. V., Werge, T., Hougaard, D. M., iPSYCH-SSI-Broad Autism Group, Neale, B. M., Evans, D. M., Skuse, D., Mortensen, P. B., Borglum, A. D., Ronald, A., Smith, G. D., & Daly, M. J. (2016). Genetic risk for autism spectrum disorders and neuropsychiatric variation in the general population. Nature Genetics, 48, 552-555. doi:10.1038/ng.3529.
Abstract
Almost all genetic risk factors for autism spectrum disorders (ASDs) can be found in the general population, but the effects of this risk are unclear in people not ascertained for neuropsychiatric symptoms. Using several large ASD consortium and population-based resources (total n > 38,000), we find genome-wide genetic links between ASDs and typical variation in social behavior and adaptive functioning. This finding is evidenced through both LD score correlation and de novo variant analysis, indicating that multiple types of genetic risk for ASDs influence a continuum of behavioral and developmental traits, the severe tail of which can result in diagnosis with an ASD or other neuropsychiatric disorder. A continuum model should inform the design and interpretation of studies of neuropsychiatric disease biology.
Additional information
ng.3529-S1.pdf -
Rodd, J., & Chen, A. (2016). Pitch accents show a perceptual magnet effect: Evidence of internal structure in intonation categories. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 697-701).
Abstract
The question of whether intonation events have a categorical mental representation has long been a puzzle in prosodic research, and one that experiments testing production and perception across category boundaries have failed to definitively resolve. This paper takes the alternative approach of looking for evidence of structure within a postulated category by testing for a Perceptual Magnet Effect (PME). PME has been found in boundary tones but has not previously been conclusively found in pitch accents. In this investigation, perceived goodness and discriminability of re-synthesised Dutch nuclear rise contours (L*H H%) were evaluated by naive native speakers of Dutch. The variation between these stimuli was quantified using a polynomial-parametric modelling approach (i.e. the SOCoPaSul model) in place of the traditional approach whereby excursion size, peak alignment and pitch register are used independently of each other to quantify variation between pitch accents. Using this approach to calculate the acoustic-perceptual distance between different stimuli, PME was detected: (1) rated goodness decreased as acoustic-perceptual distance relative to the prototype increased, and (2) equally spaced items far from the prototype were less frequently generalised than equally spaced items in the neighbourhood of the prototype. These results support the concept of categorically distinct intonation events.
Additional information
Link to Speech Prosody Website -
Rodenas-Cuadrado, P., Pietrafusa, N., Francavilla, T., La Neve, A., Striano, P., & Vernes, S. C. (2016). Characterisation of CASPR2 deficiency disorder - a syndrome involving autism, epilepsy and language impairment. BMC Medical Genetics, 17: 8. doi:10.1186/s12881-016-0272-8.
Abstract
Background
Heterozygous mutations in CNTNAP2 have been identified in patients with a range of complex phenotypes including intellectual disability, autism and schizophrenia. However, heterozygous CNTNAP2 mutations are also found in the normal population. Conversely, homozygous mutations are rare in patient populations and have not been found in any unaffected individuals.
Case presentation
We describe a consanguineous family carrying a deletion in CNTNAP2 predicted to abolish function of its protein product, CASPR2. Homozygous family members display epilepsy, facial dysmorphisms, severe intellectual disability and impaired language. We compared these patients with previously reported individuals carrying homozygous mutations in CNTNAP2 and identified a highly recognisable phenotype.
Conclusions
We propose that CASPR2 loss produces a syndrome involving early-onset refractory epilepsy, intellectual disability, language impairment and autistic features that can be recognized as CASPR2 deficiency disorder. Further screening for homozygous patients meeting these criteria, together with detailed phenotypic and molecular investigations, will be crucial for understanding the contribution of CNTNAP2 to normal and disrupted development. -
Rodenas-Cuadrado, P., Chen, X. S., Wiegrebe, L., Firzlaff, U., & Vernes, S. C. (2015). A novel approach identifies the first transcriptome networks in bats: A new genetic model for vocal communication. BMC Genomics, 16: 836. doi:10.1186/s12864-015-2068-1.
Abstract
Background
Bats are able to employ an astonishingly complex vocal repertoire for navigating their environment and conveying social information. A handful of species also show evidence for vocal learning, an extremely rare ability shared only with humans and a few other animals. However, despite their potential for the study of vocal communication, bats remain severely understudied at a molecular level. To address this fundamental gap we performed the first transcriptome profiling and genetic interrogation of molecular networks in the brain of a highly vocal bat species, Phyllostomus discolor.
Results
Gene network analysis typically needs large sample sizes for correct clustering; this can be prohibitive where samples are limited, such as in this study. To overcome this, we developed a novel bioinformatics methodology for identifying robust co-expression gene networks using few samples (N=6). Using this approach, we identified tissue-specific functional gene networks from the bat PAG, a brain region fundamental for mammalian vocalisation. The most highly connected network identified represented a cluster of genes involved in glutamatergic synaptic transmission. Glutamatergic receptors play a significant role in vocalisation from the PAG, suggesting that this gene network may be mechanistically important for vocal-motor control in mammals.
Conclusion
We have developed an innovative approach to cluster co-expressing gene networks and show that it is highly effective in detecting robust functional gene networks with limited sample sizes. Moreover, this work represents the first gene network analysis performed in a bat brain and establishes bats as a novel, tractable model system for understanding the genetics of vocal mammalian communication.
Additional information
Raw reads from the RNA sequencing in NCBI bioproject repository -
Roelofs, A., Piai, V., Garrido Rodriguez, G., & Chwilla, D. J. (2016). Electrophysiology of Cross-Language Interference and Facilitation in Picture Naming. Cortex, 76, 1-16. doi:10.1016/j.cortex.2015.12.003.
Abstract
Disagreement exists about how bilingual speakers select words, in particular, whether words in another language compete, or competition is restricted to a target language, or no competition occurs. Evidence that competition occurs but is restricted to a target language comes from response time (RT) effects obtained when speakers name pictures in one language while trying to ignore distractor words in another language. Compared to unrelated distractor words, RT is longer when the picture name and distractor are semantically related, but RT is shorter when the distractor is the translation of the name of the picture in the other language. These effects suggest that distractor words from another language do not compete themselves but activate their counterparts in the target language, thereby yielding the semantic interference and translation facilitation effects. Here, we report an event-related brain potential (ERP) study testing the prediction that priming underlies both of these effects. The RTs showed semantic interference and translation facilitation effects. Moreover, the picture-word stimuli yielded an N400 response, whose amplitude was smaller on semantic and translation trials than on unrelated trials, providing evidence that interference and facilitation priming underlie the RT effects. We present the results of computer simulations showing the utility of a within-language competition account of our findings. -
Rojas-Berscia, L. M. (2016). Lóxoro, traces of a contemporary Peruvian genderlect. Borealis: An International Journal of Hispanic Linguistics, 5, 157-170.
Abstract
Not long after the premiere of Loxoro in 2011, a short film by Claudia Llosa that presents the problems the transgender community faces in the capital of Peru, a new language variety became visible for the first time to Lima society. Lóxoro [‘lok.so.ɾo] or Húngaro [‘uŋ.ga.ɾo], as its speakers call it, is a language spoken by transsexuals and the gay community of Peru. The first clues about its existence were given by a comedian, Fernando Armas, in the mid-1990s; however, it is said to have appeared no earlier than the 1960s. Following some previous work on gay languages by Baker (2002) and languages and society (cf. Halliday 1978), the main aim of the present article is to provide a primary sketch of this language in its phonological, morphological, lexical and sociological aspects, based on a small corpus extracted from the film of Llosa and natural dialogues from Peruvian TV journals, in order to classify this variety within modern sociolinguistic models (cf. Muysken 2010) and argue for the “anti-language” (cf. Halliday 1978) nature of it. -
Rojas-Berscia, L. M. (2015). Mayna, the lost Kawapanan language. LIAMES, 15, 393-407. Retrieved from http://revistas.iel.unicamp.br/index.php/liames/article/view/4549.
Abstract
The origins of the Mayna language, formerly spoken in northwest Peruvian Amazonia, remain a mystery for most scholars. Several discussions on it took place at the end of the 19th century and the beginning of the 20th; however, none arrived at a consensus. Apart from an article written by Taylor & Descola (1981), suggesting a relationship with the Jivaroan language family, little to nothing has been said about it for the last half of the 20th century and the last decades. In the present article, a summary of the principal accounts on the language and its people between the 19th and the 20th century will be given, followed by a corpus analysis in which the materials available in Mayna and Kawapanan, mainly prayers collected by Hervás (1787) and Teza (1868), will be analysed and compared for the first time in light of recent analyses in the new-born field called Kawapanan linguistics (Barraza de García 2005a,b; Valenzuela-Bismarck 2011a,b; Valenzuela 2013; Rojas-Berscia 2013, 2014; Madalengoitia-Barúa 2013; Farfán-Reto 2012), in order to test its affiliation to the Kawapanan language family, as claimed by Beuchat & Rivet (1909), and to account for its place in the dialectology of this language family. -
Rojas-Berscia, L. M., & Ghavami Dicker, S. (2015). Teonimia en el Alto Amazonas, el caso de Kanpunama. Escritura y Pensamiento, 18(36), 117-146.
-
Romberg, A., Zhang, Y., Newman, B., Triesch, J., & Yu, C. (2016). Global and local statistical regularities control visual attention to object sequences. In Proceedings of the 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 262-267).
Abstract
Many previous studies have shown that both infants and adults are skilled statistical learners. Because statistical learning is affected by attention, learners' ability to manage their attention can play a large role in what they learn. However, it is still unclear how learners allocate their attention in order to gain information in a visual environment containing multiple objects, especially how prior visual experience (i.e., familiarity of objects) influences where people look. To answer these questions, we collected eye movement data from adults exploring multiple novel objects while manipulating object familiarity with global (frequencies) and local (repetitions) regularities. We found that participants are sensitive to both global and local statistics embedded in their visual environment and they dynamically shift their attention to prioritize some objects over others as they gain knowledge of the objects and their distributions within the task. -
Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.
Abstract
During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing. -
Romøren, A. S. H., & Chen, A. (2015). Quiet is the new loud: Pausing and focus in child and adult Dutch. Language and Speech, 58, 8-23. doi:10.1177/0023830914563589.
Abstract
In a number of languages, prosody is used to highlight new information (or focus). In Dutch, focus is marked by accentuation, whereby focal constituents are accented and post-focal constituents are de-accented. Even if pausing is not traditionally seen as a cue to focus in Dutch, several previous studies have pointed to a possible relationship between pausing and information structure. Considering that Dutch-speaking 4 to 5 year olds are not yet completely proficient in using accentuation for focus and that children generally pause more than adults, we asked whether pausing might be an available parameter for children to manipulate for focus. Sentences with varying focus structure were elicited from 10 Dutch-speaking 4 to 5 year olds and 9 Dutch-speaking adults by means of a picture-matching game. Comparing pause durations before focal and non-focal targets showed pre-target pauses to be significantly longer when the targets were focal than when they were not. Notably, the use of pausing was more robust in the children than in the adults, suggesting that children exploit pausing to mark focus more generally than adults do, at a stage where their mastery of the canonical cues to focus is still developing. -
Rossi, G., & Zinken, J. (2016). Grammar and social agency: The pragmatics of impersonal deontic statements. Language, 92(4), e296-e325. doi:10.1353/lan.2016.0083.
Abstract
Sentence and construction types generally have more than one pragmatic function. Impersonal deontic declaratives such as ‘it is necessary to X’ assert the existence of an obligation or necessity without tying it to any particular individual. This family of statements can accomplish a range of functions, including getting another person to act, explaining or justifying the speaker’s own behavior as he or she undertakes to do something, or even justifying the speaker’s behavior while simultaneously getting another person to help. How is an impersonal deontic declarative fit for these different functions? And how do people know which function it has in a given context? We address these questions using video recordings of everyday interactions among speakers of Italian and Polish. Our analysis results in two findings. The first is that the pragmatics of impersonal deontic declaratives is systematically shaped by (i) the relative responsibility of participants for the necessary task and (ii) the speaker’s nonverbal conduct at the time of the statement. These two factors influence whether the task in question will be dealt with by another person or by the speaker, often giving the statement the force of a request or, alternatively, of an account of the speaker’s behavior. The second finding is that, although these factors systematically influence their function, impersonal deontic declaratives maintain the potential to generate more complex interactions that go beyond a simple opposition between requests and accounts, where participation in the necessary task may be shared, negotiated, or avoided. This versatility of impersonal deontic declaratives derives from their grammatical makeup: by being deontic and impersonal, they can both mobilize or legitimize an act by different participants in the speech event, while their declarative form does not constrain how they should be responded to. These features make impersonal deontic declaratives a special tool for the management of social agency. -
Rossi, G. (2015). Other-initiated repair in Italian. Open Linguistics, 1(1), 256-282. doi:10.1515/opli-2015-0002.
Abstract
This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video recorded conversation in the Italian language (Romance). The article reports findings specific to the Italian language from the comparative project that is the topic of this special issue. While giving an overview of all the major practices for other-initiation of repair found in this language, special attention is given to (i) the functional distinctions between different open strategies (interjection, question words, formulaic), and (ii) the role of intonation in discriminating alternative restricted strategies, with a focus on different contour types used to produce repetitions.
Additional information
http://www.degruyter.com/view/j/opli.2014.1.issue-1/opli-2015-0002/suppl/opli-2… -
Rossi, G. (2015). Responding to pre-requests: The organization of hai x ‘do you have x’ sequences in Italian. Journal of Pragmatics, 82, 5-22. doi:10.1016/j.pragma.2015.03.008.
Abstract
Among the strategies used by people to request others to do things, there is a particular family defined as pre-requests. The typical function of a pre-request is to check whether some precondition obtains for a request to be successfully made. A form like the Italian interrogative hai x ‘do you have x’, for example, is used to ask if an object is available — a requirement for the object to be transferred or manipulated. But what does it mean exactly to make a pre-request? What difference does it make compared to issuing a request proper? In this article, I address these questions by examining the use of hai x ‘do you have x’ interrogatives in a corpus of informal Italian interaction. Drawing on methods from conversation analysis and linguistics, I show that the status of hai x as a pre-request is reflected in particular properties in the domains of preference and sequence organisation, specifically in the design of blocking responses to the pre-request, and in the use of go-ahead responses, which lead to the expansion of the request sequence. This study contributes to current research on requesting as well as on sequence organisation by demonstrating the response affordances of pre-requests and by furthering our understanding of the processes of sequence expansion. -
Rossi, G. (2015). The request system in Italian interaction. PhD Thesis, Radboud University, Nijmegen.
Abstract
People across the world make requests every day. We constantly rely on others to get by in the small and big practicalities of everyday life, be it getting the salt, moving a sofa, or cooking a meal. It has long been noticed that when we ask others for help we use a wide range of forms drawing on various resources afforded by our language and body. To get another to pass the salt, for example, we may say ‘Pass the salt’, or ask ‘Can you pass me the salt?’, or simply point to the salt. What do different forms of requesting give us? The short answer is that they allow us to manage different social relations. But what kind of relations? While prior research has mostly emphasised the role of long-term asymmetries like people’s social distance and relative power, this thesis puts at centre stage social relations and dimensions emerging in the moment-by-moment flow of everyday interaction. These include how easy or hard the action requested is to anticipate for the requestee, whether the action requested contributes to a joint project or serves an individual one, whether the requestee may be unwilling to do it, and how obvious or equivocal it is that a certain person or another should be involved in the action. The study focuses on requests made in everyday informal interactions among speakers of Italian. It involves over 500 instances of requests sampled from a diverse corpus of video recordings, and draws on methods from conversation analysis, linguistics and multimodal analysis. A qualitative analysis of the data is supported by quantitative measures of the distribution of linguistic and interactional features, and by the use of inferential statistics to test the generalizability of some of the patterns observed. The thesis aims to contribute to our understanding of both language and social interaction by showing that forms of requesting constitute a system, organised by a set of recurrent social-interactional concerns.
Additional information
full text via Radboud Repository -
Rowbotham, S. J., Holler, J., Wearden, A., & Lloyd, D. M. (2016). I see how you feel: Recipients obtain additional information from speakers’ gestures about pain. Patient Education and Counseling, 99(8), 1333-1342. doi:10.1016/j.pec.2016.03.007.
Abstract
Objective
Despite the need for effective pain communication, pain is difficult to verbalise. Co-speech gestures frequently add information about pain that is not contained in the accompanying speech. We explored whether recipients can obtain additional information from gestures about the pain that is being described.
Methods
Participants (n = 135) viewed clips of pain descriptions under one of four conditions: 1) Speech Only; 2) Speech and Gesture; 3) Speech, Gesture and Face; and 4) Speech, Gesture and Face plus Instruction (short presentation explaining the pain information that gestures can depict). Participants provided free-text descriptions of the pain that had been described. Responses were scored for the amount of information obtained from the original clips.
Findings
Participants in the Instruction condition obtained the most information, while those in the Speech Only condition obtained the least (all comparisons p<.001).
Conclusions
Gestures produced during pain descriptions provide additional information about pain that recipients are able to pick up without detriment to their uptake of spoken information.
Practice implications
Healthcare professionals may benefit from instruction in gestures to enhance uptake of information about patients’ pain experiences. -
Rowbotham, S., Lloyd, D. M., Holler, J., & Wearden, A. (2015). Externalizing the private experience of pain: A role for co-speech gestures in pain communication? Health Communication, 30(1), 70-80. doi:10.1080/10410236.2013.836070.
Abstract
Despite the importance of effective pain communication, talking about pain represents a major challenge for patients and clinicians because pain is a private and subjective experience. Focusing primarily on acute pain, this article considers the limitations of current methods of obtaining information about the sensory characteristics of pain and suggests that spontaneously produced “co-speech hand gestures” may constitute an important source of information here. Although this is a relatively new area of research, we present recent empirical evidence that reveals that co-speech gestures contain important information about pain that can both add to and clarify speech. Following this, we discuss how these findings might eventually lead to a greater understanding of the sensory characteristics of pain, and to improvements in treatment and support for pain sufferers. We hope that this article will stimulate further research and discussion of this previously overlooked dimension of pain communication. -
Rowland, C. F., & Peter, M. (2015). Up to speed? Nursery World Magazine, 15-28 June 2015, 18-20.
-
Rubio-Fernández, P., Cummins, C., & Tian, Y. (2016). Are single and extended metaphors processed differently? A test of two Relevance-Theoretic accounts. Journal of Pragmatics, 94, 15-28. doi:10.1016/j.pragma.2016.01.005.
Abstract
Carston (2010) proposes that metaphors can be processed via two different routes. In line with the standard Relevance-Theoretic account of loose use, single metaphors are interpreted by a local pragmatic process of meaning adjustment, resulting in the construction of an ad hoc concept. In extended metaphorical passages, by contrast, the reader switches to a second processing mode because the various semantic associates in the passage are mutually reinforcing, which makes the literal meaning highly activated relative to possible meaning adjustments. In the second processing mode the literal meaning of the whole passage is metarepresented and entertained as an ‘imaginary world’ and the intended figurative implications are derived later in processing. The results of three experiments comparing the interpretation of the same target expressions across literal, single-metaphorical and extended-metaphorical contexts, using self-paced reading (Experiment 1), eye-tracking during natural reading (Experiment 2) and cued recall (Experiment 3), offered initial support to Carston's distinction between the processing of single and extended metaphors. We end with a comparison between extended metaphors and allegories, and make a call for further theoretical and experimental work to increase our understanding of the similarities and differences between the interpretation and processing of different figurative uses, single and extended. -
Rubio-Fernández, P. (2016). How redundant are redundant color adjectives? An efficiency-based analysis of color overspecification. Frontiers in Psychology, 7: 153. doi:10.3389/fpsyg.2016.00153.
Abstract
Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives (RCAs) are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of RCAs: factors related to the efficiency of color in the visual context and factors related to the semantic category of the noun. The results of Experiment 1 confirmed that people produce RCAs when color may facilitate object recognition; e.g., they do so more often in polychrome displays than in monochrome displays, and more often in English (pre-nominal position) than in Spanish (post-nominal position). RCAs are also used when color is a central property of the object category; e.g., people referred to the color of clothes more often than to the color of geometrical figures (Experiment 1), and they overspecified atypical colors more often than variable and stereotypical colors (Experiment 2). These results are relevant for pragmatic models of referential communication based on Gricean pragmatics and informativeness. An alternative analysis is proposed, which focuses on the efficiency and pertinence of color in a given referential situation. -
Rubio-Fernández, P., & Grassmann, S. (2016). Metaphors as second labels: Difficult for preschool children? Journal of Psycholinguistic Research, 45, 931-944. doi:10.1007/s10936-015-9386-y.
Abstract
This study investigates the development of two cognitive abilities that are involved in metaphor comprehension: implicit analogical reasoning and assigning an unconventional label to a familiar entity (as in Romeo’s ‘Juliet is the sun’). We presented 3- and 4-year-old children with literal object-requests in a pretense setting (e.g., ‘Give me the train with the hat’). Both age-groups succeeded in a baseline condition that used building blocks as props (e.g., placed either on the front or the rear of a train engine) and only required spatial analogical reasoning to interpret the referential expression. Both age-groups performed significantly worse in the critical condition, which used familiar objects as props (e.g., small dogs as pretend hats) and required both implicit analogical reasoning and assigning second labels. Only the 4-year olds succeeded in this condition. These results offer a new perspective on young children’s difficulties with metaphor comprehension in the preschool years. -
Rubio-Fernández, P., & Geurts, B. (2016). Don’t mention the marble! The role of attentional processes in false-belief tasks. Review of Philosophy and Psychology, 7, 835-850. doi:10.1007/s13164-015-0290-z.
-
Rubio-Fernández, P., Wearing, C., & Carston, R. (2015). Metaphor and hyperbole: Testing the continuity hypothesis. Metaphor and Symbol, 30(1), 24-40. doi:10.1080/10926488.2015.980699.
Abstract
In standard Relevance Theory, hyperbole and metaphor are categorized together as loose uses of language, on a continuum with approximations, category extensions and other cases of loosening/broadening of meaning. Specifically, it is claimed that there are no interesting differences (in either interpretation or processing) between hyperbolic and metaphorical uses (Sperber & Wilson, 2008). In recent work, we have set out to provide a more fine-grained articulation of the similarities and differences between hyperbolic and metaphorical uses and their relation to literal uses (Carston & Wearing, 2011, 2014). We have defended the view that hyperbolic use involves a shift of magnitude along a dimension which is intrinsic to the encoded meaning of the hyperbole vehicle, while metaphor involves a multi-dimensional qualitative shift away from the encoded meaning of the metaphor vehicle. In this article, we present three experiments designed to test the predictions of this analysis, using a variety of tasks (paraphrase elicitation, self-paced reading and sentence verification). The results of the study support the view that hyperbolic and metaphorical interpretations, despite their commonalities as loose uses of language, are significantly different. -
De Ruiter, L. E. (2015). Information status marking in spontaneous vs. read speech in story-telling tasks – Evidence from intonation analysis using GToBI. Journal of Phonetics, 48, 29-44. doi:10.1016/j.wocn.2014.10.008.
Abstract
Two studies investigated whether speaking mode influences the way German speakers mark the information status of discourse referents in nuclear position. In Study 1, speakers produced narrations spontaneously on the basis of picture stories in which the information status of referents (new, accessible and given) was systematically varied. In Study 2, speakers saw the same pictures, but this time accompanied by text to be read out. Clear differences were found depending on speaking mode: In spontaneous speech, speakers always accented new referents. They did not use different pitch accent types to differentiate between new and accessible referents, nor did they always deaccent given referents. In addition, speakers often made use of low pitch accents in combination with high boundary tones to indicate continuity. In contrast to this, read speech was characterized by low boundary tones, consistent deaccentuation of given referents and the use of H+L* and H+!H* accents, for both new and accessible referents. The results are discussed in terms of the function of intonational features in communication. It is argued that reading intonation is not comparable to intonation in spontaneous speech, and that this has important consequences also for our choice of methodology in child language acquisition research. -
Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2015). Emotional context modulates embodied metaphor comprehension. Neuropsychologia, 78, 108-114. doi:10.1016/j.neuropsychologia.2015.10.003.
Abstract
Emotions are often expressed metaphorically, and both emotion and metaphor are ways through which abstract meaning can be grounded in language. Here we investigate specifically whether motion-related verbs, when used metaphorically, are differentially sensitive to a preceding emotional context, as compared to when they are used in a literal manner. Participants read stories that ended with ambiguous action/motion sentences (e.g., he got it), in which the action/motion could be interpreted metaphorically (he understood the idea) or literally (he caught the ball) depending on the preceding story. Orthogonal to the metaphorical manipulation, the stories were high or low in emotional content. The results showed that emotional context modulated the neural response in visual motion areas to the metaphorical interpretation of the sentences, but not to their literal interpretations. In addition, literal interpretations of the target sentences led to stronger activation in the visual motion areas as compared to metaphorical readings of the sentences. We interpret our results as suggesting that emotional context specifically modulates mental simulation during metaphor processing. -
San Roque, L., & Bergqvist, H. (Eds.). (2015). Epistemic marking in typological perspective [Special Issue]. STUF - Language typology and universals, 68(2). -
San Roque, L. (2016). 'Where' questions and their responses in Duna (Papua New Guinea). Open Linguistics, 2(1), 85-104. doi:10.1515/opli-2016-0005.
Abstract
Despite their central role in question formation, content interrogatives in spontaneous conversation remain relatively under-explored cross-linguistically. This paper outlines the structure of ‘where’ expressions in Duna, a language spoken in Papua New Guinea, and examines where-questions in a small Duna data set in terms of their frequency, function, and the responses they elicit. Questions that ask ‘where?’ have been identified as a useful tool in studying the language of space and place, and, in the Duna case and elsewhere, show high frequency and functional flexibility. Although where-questions formulate place as an information gap, they are not always answered through direct reference to canonical places. While some question types may be especially “socially costly” (Levinson 2012), asking ‘where’ perhaps provides a relatively innocuous way of bringing a particular event or situation into focus. -
San Roque, L. (2015). Using you to get to me: Addressee perspective and speaker stance in Duna evidential marking. STUF: Language typology and universals, 68(2), 187-210. doi:10.1515/stuf-2015-0010.
Abstract
Languages have complex and varied means for representing points of view, including constructions that can express multiple perspectives on the same event. This paper presents data on two evidential constructions in the language Duna (Papua New Guinea) that imply features of both speaker and addressee knowledge simultaneously. I discuss how talking about an addressee’s knowledge can occur in contexts of both coercion and co-operation, and, while apparently empathetic, can provide a covert way to both manipulate the addressee’s attention and express speaker stance. I speculate that ultimately, however, these multiple perspective constructions may play a pro-social role in building or repairing the interlocutors’ common ground. -
San Roque, L., Kendrick, K. H., Norcliffe, E., Brown, P., Defina, R., Dingemanse, M., Dirksmeyer, T., Enfield, N. J., Floyd, S., Hammond, J., Rossi, G., Tufvesson, S., Van Putten, S., & Majid, A. (2015). Vision verbs dominate in conversation across cultures, but the ranking of non-visual verbs varies. Cognitive Linguistics, 26, 31-60. doi:10.1515/cog-2014-0089.
Abstract
To what extent does perceptual language reflect universals of experience and cognition, and to what extent is it shaped by particular cultural preoccupations? This paper investigates the universality~relativity of perceptual language by examining the use of basic perception terms in spontaneous conversation across 13 diverse languages and cultures. We analyze the frequency of perception words to test two universalist hypotheses: that sight is always a dominant sense, and that the relative ranking of the senses will be the same across different cultures. We find that references to sight outstrip references to the other senses, suggesting a pan-human preoccupation with visual phenomena. However, the relative frequency of the other senses was found to vary cross-linguistically. Cultural relativity was conspicuous as exemplified by the high ranking of smell in Semai, an Aslian language. Together these results suggest a place for both universal constraints and cultural shaping of the language of perception. -
Sánchez-Fernández, M., & Rojas-Berscia, L. M. (2016). Vitalidad lingüística de la lengua paipai de Santa Catarina, Baja California. LIAMES, 16(1), 157-183. doi:10.20396/liames.v16i1.8646171.
Abstract
In the last few decades little to nothing has been said about the sociolinguistic situation of Yuman languages in Mexico. To address this lack of studies, we present a first study on linguistic vitality in Paipai, as it is spoken in Santa Catarina, Baja California, Mexico. Since languages such as Mexican Spanish and Ko’ahl coexist with Paipai in the same ecology, both are included in the study as well. This first approach rests on two axes: on the one hand, it provides a theoretical framework that explains the sociolinguistic dynamics in the ecology of the language (Mufwene 2001); on the other hand, it presents a quantitative study based on MSF (Maximum Shared Facility) (Terborg & García 2011), which describes the state of linguistic vitality of Paipai, enriched by qualitative information collected in situ. -
Sassenhagen, J., & Alday, P. M. (2016). A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests. Brain and Language, 162, 42-45. doi:10.1016/j.bandl.2016.08.001.
Abstract
Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance as referring not to inference about a particular population parameter, but to (1) the sample in question, or (2) the practical relevance of a sample difference (so that a nonsignificant test is taken as evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables. -
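As a rough illustration of the regression alternative mentioned in this abstract, the sketch below simulates a nuisance variable that is imperfectly matched across conditions and enters it as a covariate in the analysis of the outcome, instead of relying on a non-significant matching test. This is a minimal sketch, not code from the cited paper; the variable names (rt, condition, word_length), the simulated data, and the use of numpy, pandas and statsmodels are all assumptions made here for illustration.

```python
# Hypothetical sketch (not from Sassenhagen & Alday, 2016): rather than
# declaring two stimulus sets "matched" because a t-test on a nuisance
# variable is non-significant, include the nuisance variable as a covariate
# when modelling the outcome itself.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

# Simulated item-level data: a condition of interest plus a nuisance
# variable (word length) that is slightly confounded with condition.
condition = np.repeat([0, 1], n // 2)
word_length = rng.normal(5 + 0.3 * condition, 1.5)
rt = 500 + 20 * condition + 15 * word_length + rng.normal(0, 30, n)

df = pd.DataFrame({"rt": rt, "condition": condition, "word_length": word_length})

# Ordinary least squares with the nuisance variable as a covariate:
# the coefficient for `condition` is adjusted for word length.
fit = smf.ols("rt ~ condition + word_length", data=df).fit()
print(fit.params)
```

The point, following the abstract, is that the nuisance variable is handled within the model of the dependent measure rather than "controlled" by a separate significance test on the stimulus or subject characteristics.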
Sauppe, S. (2016). Verbal semantics drives early anticipatory eye movements during the comprehension of verb-initial sentences. Frontiers in Psychology, 7: 95. doi:10.3389/fpsyg.2016.00095.
Abstract
Studies on anticipatory processes during sentence comprehension often focus on the prediction of postverbal direct objects. In subject-initial languages (the target of most studies so far), however, the position in the sentence, the syntactic function, and the semantic role of arguments are often conflated. For example, in the sentence “The frog will eat the fly” the syntactic object (“fly”) is at the same time also the last word and the patient argument of the verb. It is therefore not apparent which kind of information listeners orient to for predictive processing during sentence comprehension. A visual world eye tracking study on the verb-initial language Tagalog (Austronesian) tested what kind of information listeners use to anticipate upcoming postverbal linguistic input. The grammatical structure of Tagalog makes it possible to test whether listeners' anticipatory gaze behavior is guided by predictions of the linear order of words, by syntactic functions (e.g., subject/object), or by semantic roles (agent/patient). Participants heard sentences of the type “Eat frog fly” or “Eat fly frog” (both meaning “The frog will eat the fly”) while looking at displays containing an agent referent (“frog”), a patient referent (“fly”) and a distractor. The verb carried morphological marking that allowed the order and syntactic function of agent and patient to be inferred. After having heard the verb, listeners fixated on the agent irrespective of its syntactic function or position in the sentence. While hearing the first-mentioned argument, listeners fixated on the corresponding referent in the display accordingly and then initiated saccades to the last-mentioned referent before it was encountered. The results indicate that listeners used verbal semantics to identify referents and their semantic roles early; information about word order or syntactic functions did not influence anticipatory gaze behavior directly after the verb was heard. In this verb-initial language, event semantics takes early precedence during the comprehension of sentences, while arguments are anticipated temporally more local to when they are encountered. The current experiment thus helps to better understand anticipation during language processing by employing linguistic structures not available in previously studied subject-initial languages. -
Schaefer, M., Haun, D. B., & Tomasello, M. (2015). Fair is not fair everywhere. Psychological Science, 26(8), 1252-1260. doi:10.1177/0956797615586188.
Abstract
Distributing the spoils of a joint enterprise on the basis of work contribution or relative productivity seems natural to the modern Western mind. But such notions of merit-based distributive justice may be culturally constructed norms that vary with the social and economic structure of a group. In the present research, we showed that children from three different cultures have very different ideas about distributive justice. Whereas children from a modern Western society distributed the spoils of a joint enterprise precisely in proportion to productivity, children from a gerontocratic pastoralist society in Africa did not take merit into account at all. Children from a partially hunter-gatherer, egalitarian African culture distributed the spoils more equally than did the other two cultures, with merit playing only a limited role. This pattern of results suggests that some basic notions of distributive justice are not universal intuitions of the human species but rather culturally constructed behavioral norms.
Additional information
http://pss.sagepub.com/content/by/supplemental-data -
Schapper, A., San Roque, L., & Hendery, R. (2016). Tree, firewood and fire in the languages of Sahul. In P. Juvonen (Ed.), The Lexical Typology of Semantic Shifts (pp. 355-422). Berlin: de Gruyter Mouton. -
Scharenborg, O., Weber, A., & Janse, E. (2015). Age and hearing loss and the use of acoustic cues in fricative categorization. The Journal of the Acoustical Society of America, 138(3), 1408-1417. doi:10.1121/1.4927728.
Abstract
This study examined the use of fricative noise information and coarticulatory cues for categorization of word-final fricatives [s] and [f] by younger and older Dutch listeners alike. Particularly, the effect of information loss in the higher frequencies on the use of these two cues for fricative categorization was investigated. If information in the higher frequencies is less strongly available, fricative identification may be impaired or listeners may learn to focus more on coarticulatory information. The present study investigates this second possibility. Phonetic categorization results showed that both younger and older Dutch listeners use the primary cue fricative noise and the secondary cue coarticulatory information to distinguish word-final [f] from [s]. Individual hearing sensitivity in the older listeners modified the use of fricative noise information, but did not modify the use of coarticulatory information. When high-frequency information was filtered out from the speech signal, fricative noise could no longer be used by the younger and older adults. Crucially, they also did not learn to rely more on coarticulatory information as a compensatory cue for fricative categorization. This suggests that listeners do not readily show compensatory use of this secondary cue to fricative identity when fricative categorization becomes difficult. -
Scharenborg, O., Weber, A., & Janse, E. (2015). The role of attentional abilities in lexically guided perceptual learning by older listeners. Attention, Perception & Psychophysics, 77(2), 493-507. doi:10.3758/s13414-014-0792-2.
Abstract
This study investigates two variables that may modify lexically-guided perceptual learning: individual hearing sensitivity and attentional abilities. Older Dutch listeners (aged 60+, varying from good hearing to mild-to-moderate high-frequency hearing loss) were tested on a lexically-guided perceptual learning task using the contrast [f]-[s]. This contrast mainly differentiates between the two consonants in the higher frequencies, and thus is supposedly challenging for listeners with hearing loss. The analyses showed that older listeners generally engage in lexically-guided perceptual learning. Hearing loss and selective attention did not modify perceptual learning in our participant sample, while attention-switching control did: listeners with poorer attention-switching control showed a stronger perceptual learning effect. We postulate that listeners with better attention-switching control may, in general, rely more strongly on bottom-up acoustic information compared to listeners with poorer attention-switching control, making them in turn less susceptible to lexically-guided perceptual learning effects. Our results, moreover, clearly show that lexically-guided perceptual learning is not lost when acoustic processing is less accurate.