Publications

  • Moisik, S. R., & Dediu, D. (2015). Anatomical biasing and clicks: Preliminary biomechanical modelling. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 8-13). Glasgow: ICPhS.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A preliminary biomechanical model of click production was created to examine whether these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modelling and experimental research are required to solidify the claim.
  • Monaghan, P., Mattock, K., Davies, R., & Smith, A. C. (2015). Gavagai is as gavagai does: Learning nouns and verbs from cross-situational statistics. Cognitive Science, 39, 1099-1112. doi:10.1111/cogs.12186.

    Abstract

    Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross-situational learning studies have shown that word-object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies have presented either nouns or verbs in ambiguous contexts and thus bypass much of the complexity of multiple grammatical categories in speech. We show that noun word-learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category-level as well as individual word-level ambiguity. However, nouns were learned more accurately than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning.
  • Morano, L., Ernestus, M., & Ten Bosch, L. (2015). Schwa reduction in low-proficiency L2 speakers: Learning and generalization. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper investigated the learnability and generalizability of French schwa alternation by Dutch low-proficiency second language learners. We trained 40 participants on 24 new schwa words by exposing them equally often to the reduced and full forms of these words. We then assessed participants' accuracy and reaction times to these newly learnt words as well as 24 previously encountered schwa words with an auditory lexical decision task. Our results show learning of the new words in both forms. This suggests that lack of exposure is probably the main cause of learners' difficulties with reduced forms. Nevertheless, the full forms were slightly better recognized than the reduced ones, possibly due to phonetic and phonological properties of the reduced forms. We also observed no generalization to previously encountered words, suggesting that our participants stored both of the learnt word forms and did not create a rule that applies to all schwa words.
  • Moreno, I., De Vega, M., León, I., Bastiaansen, M. C. M., Lewis, A. G., & Magyari, L. (2015). Brain dynamics in the comprehension of action-related language. A time-frequency analysis of mu rhythms. Neuroimage, 109, 50-62. doi:10.1016/j.neuroimage.2015.01.018.

    Abstract

    EEG mu rhythms (8-13 Hz) recorded at fronto-central electrodes are generally considered as markers of motor cortical activity in humans, because they are modulated when participants perform an action, when they observe another’s action or even when they imagine performing an action. In this study, we analyzed the time-frequency (TF) modulation of mu rhythms while participants read action language (“You will cut the strawberry cake”), abstract language (“You will doubt the patient’s argument”), and perceptive language (“You will notice the bright day”). The results indicated that mu suppression at fronto-central sites is associated with action language rather than with abstract or perceptive language. Also, the largest difference between conditions occurred quite late in the sentence, while reading the first noun (contrast Action vs. Abstract) or the second noun following the action verb (contrast Action vs. Perceptive). This suggests that motor activation is associated with the integration of words across the sentence beyond the lexical processing of the action verb. Source reconstruction localized mu suppression associated with action sentences in premotor cortex (BA 6). The present study suggests (1) that the understanding of action language activates motor networks in the human brain, and (2) that this activation occurs online based on semantic integration across multiple words in the sentence.
  • Moscoso del Prado Martín, F., Kostic, A., & Baayen, R. H. (2004). Putting the bits together: An information theoretical perspective on morphological processing. Cognition, 94(1), 1-18. doi:10.1016/j.cognition.2003.10.015.

    Abstract

    In this study we introduce an information-theoretical formulation of the emergence of type- and token-based effects in morphological processing. We describe a probabilistic measure of the informational complexity of a word, its information residual, which encompasses the combined influences of the amount of information contained by the target word and the amount of information carried by its nested morphological paradigms. By means of re-analyses of previously published data on Dutch words we show that the information residual outperforms the combination of traditional token- and type-based counts in predicting response latencies in visual lexical decision, and at the same time provides a parsimonious account of inflectional, derivational, and compounding processes.
  • Moscoso del Prado Martín, F., Ernestus, M., & Baayen, R. H. (2004). Do type and token effects reflect different mechanisms? Connectionist modeling of Dutch past-tense formation and final devoicing. Brain and Language, 90(1-3), 287-298. doi:10.1016/j.bandl.2003.12.002.

    Abstract

    In this paper, we show that both token and type-based effects in lexical processing can result from a single, token-based, system, and therefore, do not necessarily reflect different levels of processing. We report three Simple Recurrent Networks modeling Dutch past-tense formation. These networks show token-based frequency effects and type-based analogical effects closely matching the behavior of human participants when producing past-tense forms for both existing verbs and pseudo-verbs. The third network covers the full vocabulary of Dutch, without imposing predefined linguistic structure on the input or output words.
  • Moscoso del Prado Martín, F., Bertram, R., Haikio, T., Schreuder, R., & Baayen, R. H. (2004). Morphological family size in a morphologically rich language: The case of Finnish compared to Dutch and Hebrew. Journal of Experimental Psychology: Learning, Memory and Cognition, 30(6), 1271-1278. doi:10.1037/0278-7393.30.6.1271.

    Abstract

    Finnish has a very productive morphology in which a stem can give rise to several thousand words. This study presents a visual lexical decision experiment addressing the processing consequences of the huge productivity of Finnish morphology. The authors observed that, in Finnish, words with larger morphological families elicited shorter response latencies. However, in contrast to Dutch and Hebrew, it is not the complete morphological family of a complex Finnish word that codetermines response latencies but only the subset of words directly derived from the complex word itself. Comparisons with parallel experiments using translation equivalents in Dutch and Hebrew showed substantial cross-language predictivity of family size between Finnish and Dutch but not between Finnish and Hebrew, reflecting the different ways in which the Hebrew and Finnish morphological systems contribute to the semantic organization of concepts in the mental lexicon.
  • Mulder, K., Dijkstra, T., & Baayen, R. H. (2015). Cross-language activation of morphological relatives in cognates: The role of orthographic overlap and task-related processing. Frontiers in Human Neuroscience, 9: 16. doi:10.3389/fnhum.2015.00016.

    Abstract

    We considered the role of orthography and task-related processing mechanisms in the activation of morphologically related complex words during bilingual word processing. So far, it has only been shown that such morphologically related words (i.e., morphological family members) are activated through the semantic and morphological overlap they share with the target word. In this study, we investigated family size effects in Dutch-English identical cognates (e.g., tent in both languages), non-identical cognates (e.g., pil and pill, in Dutch and English, respectively), and non-cognates (e.g., chicken in English). Because of their cross-linguistic overlap in orthography, reading a cognate can result in activation of family members in both languages. Cognates are therefore well-suited for studying mechanisms underlying bilingual activation of morphologically complex words. We investigated family size effects in an English lexical decision task and a Dutch-English language decision task, both performed by Dutch-English bilinguals. English lexical decision showed a facilitatory effect of English and Dutch family size on the processing of English-Dutch cognates relative to English non-cognates. These family size effects were not dependent on cognate type. In contrast, for language decision, in which a bilingual context is created, Dutch and English family size effects were inhibitory. Here, the combined family size of both languages turned out to better predict reaction time than the separate family size in Dutch or English. Moreover, the combined family size interacted with cognate type: the response to identical cognates was slowed by morphological family members in both languages. We conclude that (1) family size effects are sensitive to the task performed on the lexical items, and (2) that they depend on both semantic and formal aspects of bilingual word processing. We discuss various mechanisms that can explain the observed family size effects in a spreading activation framework.
  • Mulder, K., Brekelmans, G., & Ernestus, M. (2015). The processing of schwa reduced cognates and noncognates in non-native listeners of English. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    In speech, words are often reduced rather than fully pronounced (e.g., /ˈsʌmri/ for /ˈsʌməri/, summary). Non-native listeners may have problems in processing these reduced forms, because they have encountered them less often. This paper addresses the question whether this also holds for highly proficient non-natives and for words with similar forms and meanings in the non-natives' mother tongue (i.e., cognates). In an English auditory lexical decision task, natives and highly proficient Dutch non-natives of English listened to cognates and non-cognates that were presented in full or without their post-stress schwa. The data show that highly proficient learners are affected by reduction as much as native speakers. Nevertheless, the two listener groups appear to process reduced forms differently, because non-natives produce more errors on reduced cognates than on non-cognates. While listening to reduced forms, non-natives appear to be hindered by the co-activated lexical representations of cognate forms in their native language.
  • Muysken, P., Hammarström, H., Birchall, J., van Gijn, R., Krasnoukhova, O., & Müller, N. (2015). Linguistic Areas, bottom up or top down? The case of the Guaporé-Mamoré region. In B. Comrie, & L. Golluscio (Eds.), Language Contact and Documentation / Contacto lingüístico y documentación (pp. 205-238). Berlin: De Gruyter.
  • Narasimhan, B., Sproat, R., & Kiraz, G. (2004). Schwa-deletion in Hindi text-to-speech synthesis. International Journal of Speech Technology, 7(4), 319-333. doi:10.1023/B:IJST.0000037075.71599.62.

    Abstract

    We describe the phenomenon of schwa-deletion in Hindi and how it is handled in the pronunciation component of a multilingual concatenative text-to-speech system. Each of the consonants in written Hindi is associated with an “inherent” schwa vowel which is not represented in the orthography. For instance, the Hindi word pronounced as [namak] (’salt’) is represented in the orthography using the consonantal characters for [n], [m], and [k]. Two main factors complicate the issue of schwa pronunciation in Hindi. First, not every schwa following a consonant is pronounced within the word. Second, in multimorphemic words, the presence of a morpheme boundary can block schwa deletion where it might otherwise occur. We propose a model for schwa-deletion which combines a general purpose schwa-deletion rule proposed in the linguistics literature (Ohala, 1983), with additional morphological analysis necessitated by the high frequency of compounds in our database. The system is implemented in the framework of finite-state transducer technology.
  • Narasimhan, B., Bowerman, M., Brown, P., Eisenbeiss, S., & Slobin, D. I. (2004). "Putting things in places": Effekte linguisticher Typologie auf die Sprachentwicklung. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 659-663). Göttingen: Vandenhoeck & Ruprecht.
  • Neger, T. M., Rietveld, T., & Janse, E. (2015). Adult age effects in auditory statistical learning. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Statistical learning plays a key role in language processing, e.g., for speech segmentation. Older adults have been reported to show less statistical learning on the basis of visual input than younger adults. Given age-related changes in perception and cognition, we investigated whether statistical learning is also impaired in the auditory modality in older compared to younger adults and whether individual learning ability is associated with measures of perceptual (i.e., hearing sensitivity) and cognitive functioning in both age groups. Thirty younger and thirty older adults performed an auditory artificial-grammar-learning task to assess their statistical learning ability. In younger adults, perceptual effort came at the cost of processing resources required for learning. Inhibitory control (as indexed by Stroop color-naming performance) did not predict auditory learning. Overall, younger and older adults showed the same amount of auditory learning, indicating that statistical learning ability is preserved over the adult life span.
  • Neger, T. M., Janse, E., & Rietveld, T. (2015). Correlates of older adults' discrimination of acoustic properties in speech. Speech, Language and Hearing, 18(2), 102-115. doi:10.1179/2050572814Y.0000000055.

    Abstract

    Auditory discrimination of speech stimuli is an essential tool in speech and language therapy, e.g., in dysarthria rehabilitation. It is unclear, however, which listener characteristics are associated with the ability to perceive differences between one's own utterance and target speech. Knowledge about such associations may help to support patients participating in speech and language therapy programs that involve auditory discrimination tasks.
    Discrimination performance was evaluated in 96 healthy participants over 60 years of age as individuals with dysarthria are typically in this age group. Participants compared meaningful words and sentences on the dimensions of loudness, pitch and speech rate. Auditory abilities were assessed using pure-tone audiometry, speech audiometry and speech understanding in noise. Cognitive measures included auditory short-term memory, working memory and processing speed. Linguistic functioning was assessed by means of vocabulary knowledge and language proficiency.
    Exploratory factor analyses showed that discrimination performance was primarily associated with cognitive and linguistic skills, rather than auditory abilities. Accordingly, older adults’ discrimination performance was mainly predicted by cognitive and linguistic skills. Discrimination accuracy was higher in older adults with better speech understanding in noise, faster processing speed, and better language proficiency, but accuracy decreased with age. This raises the question whether these associations generalize to clinical populations and, if so, whether patients with better cognitive or linguistic skills may benefit more from discrimination-based therapeutic approaches than patients with poorer cognitive or linguistic abilities.
  • Neijt, A., Schreuder, R., & Baayen, R. H. (2004). Seven years later: The effect of spelling on interpretation. In L. Cornips, & J. Doetjes (Eds.), Linguistics in the Netherlands 2004 (pp. 134-145). Amsterdam: Benjamins.
  • Newbury, D. F., Cleak, J. D., Banfield, E., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Slonims, V., Baird, G., Bolton, P., Everitt, A., Hennessy, E., Main, M., Helms, P., Kindley, A. D., Hodson, A., Watson, J., O’Hare, A., Cohen, W., Cowie, H., Steel, J., MacLean, A., Seckl, J., Bishop, D. V. M., Simkin, Z., Conti-Ramsden, G., & Pickles, A. (2004). Highly significant linkage to the SLI1 locus in an expanded sample of individuals affected by specific language impairment. American Journal of Human Genetics, 74(6), 1225-1238. doi:10.1086/421529.

    Abstract

    Specific language impairment (SLI) is defined as an unexplained failure to acquire normal language skills despite adequate intelligence and opportunity. We have reported elsewhere a full-genome scan in 98 nuclear families affected by this disorder, with the use of three quantitative traits of language ability (the expressive and receptive tests of the Clinical Evaluation of Language Fundamentals and a test of nonsense word repetition). This screen implicated two quantitative trait loci, one on chromosome 16q (SLI1) and a second on chromosome 19q (SLI2). However, a second independent genome screen performed by another group, with the use of parametric linkage analyses in extended pedigrees, found little evidence for the involvement of either of these regions in SLI. To investigate these loci further, we have collected a second sample, consisting of 86 families (367 individuals, 174 independent sib pairs), all with probands whose language skills are ⩾1.5 SD below the mean for their age. Haseman-Elston linkage analysis resulted in a maximum LOD score (MLS) of 2.84 on chromosome 16 and an MLS of 2.31 on chromosome 19, both of which represent significant linkage at the 2% level. Amalgamation of the wave 2 sample with the cohort used for the genome screen generated a total of 184 families (840 individuals, 393 independent sib pairs). Analysis of linkage within this pooled group strengthened the evidence for linkage at SLI1 and yielded a highly significant LOD score (MLS = 7.46, interval empirical P<.0004). Furthermore, linkage at the same locus was also demonstrated to three reading-related measures (basic reading [MLS = 1.49], spelling [MLS = 2.67], and reading comprehension [MLS = 1.99] subtests of the Wechsler Objectives Reading Dimensions).
  • Niemelä, P. T., Lattenkamp, E. Z., & Dingemanse, N. J. (2015). Personality-related survival and sampling bias in wild cricket nymphs. Behavioral Ecology, 26(3), 936-946. doi:10.1093/beheco/arv036.

    Abstract

    The study of adaptive individual behavior (“animal personality”) focuses on whether individuals differ consistently in (suites of correlated) behavior(s) and whether individual-level behavior is under selection. Evidence for selection acting on personality is biased toward species where behavioral and life-history information can readily be collected in the wild, such as ungulates and passerine birds. Here, we report estimates of repeatability and syndrome structure for behaviors that an insect (field cricket; Gryllus campestris) expresses in the wild. We used mark-recapture models to estimate personality-related survival and encounter probability and focused on a life-history phase where all individuals could readily be sampled (the nymphal stage). As proxies for risky behaviors, we assayed maximum distance from burrow, flight initiation distance, and emergence time after disturbance; all behaviors were repeatable, but there was no evidence for strong syndrome structure. Flight initiation distance alone predicted both daily survival and encounter probability: bolder individuals were more easily observed but had a shorter life span. Individuals were also somewhat repeatable in the habitat temperature under which they were assayed. Such environment repeatability can lead to upward biases in estimates of repeatability in behavior; this was not the case. Behavioral assays were, however, conducted around the subject’s personal burrow, which could induce pseudorepeatability if burrow characteristics affected behavior. Follow-up translocation experiments allowed us to distinguish individual and burrow identity effects and provided conclusive evidence for individual repeatability of flight initiation distance. Our findings, therefore, forcefully demonstrate that personality variation exists in wild insects and that it is associated with components of fitness.
  • Nieuwland, M. S. (2015). The truth before and after: Brain potentials reveal automatic activation of event knowledge during sentence comprehension. Journal of Cognitive Neuroscience, 27(11), 2215-2228. doi:10.1162/jocn_a_00856.

    Abstract

    How does knowledge of real-world events shape our understanding of incoming language? Do temporal terms like “before” and “after” impact the online recruitment of real-world event knowledge? These questions were addressed in two ERP experiments, wherein participants read sentences that started with “before” or “after” and contained a critical word that rendered each sentence true or false (e.g., “Before/After the global economic crisis, securing a mortgage was easy/harder”). The critical words were matched on predictability, rated truth value, and semantic relatedness to the words in the sentence. Regardless of whether participants explicitly verified the sentences or not, false-after-sentences elicited larger N400s than true-after-sentences, consistent with the well-established finding that semantic retrieval of concepts is facilitated when they are consistent with real-world knowledge. However, although the truth judgments did not differ between before- and after-sentences, no such N400 truth value effect occurred in before-sentences; instead, false-before-sentences elicited enhanced subsequent positive ERPs. The temporal term “before” itself elicited more negative ERPs at central electrode channels than “after.” These patterns of results show that, irrespective of ultimate sentence truth value judgments, semantic retrieval of concepts is momentarily facilitated when they are consistent with the known event outcome compared to when they are not. However, this inappropriate facilitation incurs later processing costs as reflected in the subsequent positive ERP deflections. The results suggest that automatic activation of event knowledge can impede the incremental semantic processes required to establish sentence truth value.
  • Nijhoff, A. D., & Willems, R. M. (2015). Simulating fiction: Individual differences in literature comprehension revealed with fMRI. PLoS One, 10(2): e0116492. doi:10.1371/journal.pone.0116492.

    Abstract

    When we read literary fiction, we are transported to fictional places, and we feel and think along with the characters. Despite the importance of narrative in adult life and during development, the neurocognitive mechanisms underlying fiction comprehension are unclear. We used functional magnetic resonance imaging (fMRI) to investigate how individuals differently employ neural networks important for understanding others’ beliefs and intentions (mentalizing), and for sensori-motor simulation while listening to excerpts from literary novels. Localizer tasks were used to localize both the cortical motor network and the mentalizing network in participants after they listened to excerpts from literary novels. Results show that participants who had high activation in anterior medial prefrontal cortex (aMPFC; part of the mentalizing network) when listening to mentalizing content of literary fiction, had lower motor cortex activity when they listened to action-related content of the story, and vice versa. This qualifies how people differ in their engagement with fiction: some people are mostly drawn into a story by mentalizing about the thoughts and beliefs of others, whereas others engage in literature by simulating more concrete events such as actions. This study provides on-line neural evidence for the existence of qualitatively different styles of moving into literary worlds, and adds to a growing body of literature showing the potential to study narrative comprehension with neuroimaging methods.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2015). Exemplar effects arise in a lexical decision task, but only under adverse listening conditions. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper studies the influence of adverse listening conditions on exemplar effects in priming experiments that do not instruct participants to use their episodic memories. We conducted two lexical decision experiments, in which a prime and a target represented the same word type and could be spoken by the same or a different speaker. In Experiment 1, participants listened to clear speech, and showed no exemplar effects: they recognised repetitions by the same speaker as quickly as different speaker repetitions. In Experiment 2, the stimuli contained noise, and exemplar effects did arise. Importantly, Experiment 1 elicited longer average RTs than Experiment 2, a result that contradicts the time-course hypothesis, according to which exemplars only play a role when processing is slow. Instead, our findings support the hypothesis that exemplar effects arise under adverse listening conditions, when participants are stimulated to use their episodic memories in addition to their mental lexicons.
  • Noordman, L. G. M., Vonk, W., Cozijn, R., & Frank, S. (2015). Causal inferences and world knowledge. In E. J. O'Brien, A. E. Cook, & R. F. Lorch (Eds.), Inferences during reading (pp. 260-289). Cambridge, UK: Cambridge University Press.
  • Noordman, L. G. M., & Vonk, W. (2015). Inferences in Discourse, Psychology of. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd ed.) Vol. 12 (pp. 37-44). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.57012-3.

    Abstract

    An inference is defined as information that is not expressed explicitly by the text but is derived on the basis of the understander's knowledge and is encoded in the mental representation of the text. Inferencing is considered a central component of discourse understanding. Experimental methods to detect inferences, established findings, and some recent developments are reviewed. Attention is paid to the relation between inference processes and the brain.
  • Norcliffe, E., Harris, A., & Jaeger, T. F. (2015). Cross-linguistic psycholinguistics and its critical role in theory development: early beginnings and recent advances. Language, Cognition and Neuroscience, 30(9), 1009-1032. doi:10.1080/23273798.2015.1080373.

    Abstract

    Recent years have seen a small but growing body of psycholinguistic research focused on typologically diverse languages. This represents an important development for the field, where theorising is still largely guided by the often implicit assumption of universality. This paper introduces a special issue of Language, Cognition and Neuroscience devoted to the topic of cross-linguistic and field-based approaches to the study of psycholinguistics. The papers in this issue draw on data from a variety of genetically and areally divergent languages, to address questions in the production and comprehension of phonology, morphology, words, and sentences. To contextualise these studies, we provide an overview of the field of cross-linguistic psycholinguistics, from its early beginnings to the present day, highlighting instances where cross-linguistic data have significantly contributed to psycholinguistic theorising.
  • Norcliffe, E., Konopka, A. E., Brown, P., & Levinson, S. C. (2015). Word order affects the time course of sentence formulation in Tzeltal. Language, Cognition and Neuroscience, 30(9), 1187-1208. doi:10.1080/23273798.2015.1006238.

    Abstract

    The scope of planning during sentence formulation is known to be flexible, as it can be influenced by speakers' communicative goals and language production pressures (among other factors). Two eye-tracked picture description experiments tested whether the time course of formulation is also modulated by grammatical structure and thus whether differences in linear word order across languages affect the breadth and order of conceptual and linguistic encoding operations. Native speakers of Tzeltal [a primarily verb–object–subject (VOS) language] and Dutch [a subject–verb–object (SVO) language] described pictures of transitive events. Analyses compared speakers' choice of sentence structure across events with more accessible and less accessible characters as well as the time course of formulation for sentences with different word orders. Character accessibility influenced subject selection in both languages in subject-initial and subject-final sentences, ruling against a radically incremental formulation process. In Tzeltal, subject-initial word orders were preferred over verb-initial orders when event characters had matching animacy features, suggesting a possible role for similarity-based interference in influencing word order choice. Time course analyses revealed a strong effect of sentence structure on formulation: In subject-initial sentences, in both Tzeltal and Dutch, event characters were largely fixated sequentially, while in verb-initial sentences in Tzeltal, relational information received priority over encoding of either character during the earliest stages of formulation. The results show a tight parallelism between grammatical structure and the order of encoding operations carried out during sentence formulation.
  • Norcliffe, E., & Konopka, A. E. (2015). Vision and language in cross-linguistic research on sentence production. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and vision in language processing (pp. 77-96). New York: Springer. doi:10.1007/978-81-322-2443-3_5.

    Abstract

    To what extent are the planning processes involved in producing sentences fine-tuned to grammatical properties of specific languages? In this chapter we survey the small body of cross-linguistic research that bears on this question, focusing in particular on recent evidence from eye-tracking studies. Because eye-tracking methods provide a very fine-grained temporal measure of how conceptual and linguistic planning unfold in real time, they serve as an important complement to standard psycholinguistic methods. Moreover, the advent of portable eye-trackers in recent years has, for the first time, allowed eye-tracking techniques to be used with language populations that are located far away from university laboratories. This has created the exciting opportunity to extend the typological base of vision-based psycholinguistic research and address key questions in language production with new language comparisons.
  • O'Connor, L. (2004). Going getting tired: Associated motion through space and time in Lowland Chontal. In M. Achard, & S. Kemmer (Eds.), Language, culture and mind (pp. 181-199). Stanford: CSLI.
  • O'Connor, L. (2004). Motion, transfer, and transformation: The grammar of change in Lowland Chontal. PhD Thesis, University of California at Santa Barbara, Santa Barbara.

    Abstract

    Typologies are critical tools for linguists, but typologies, like grammars, are known to leak. This book addresses the question of typological overlap from the perspective of a single language. In Lowland Chontal of Oaxaca, a language of southern Mexico, change events are expressed with three types of predicates, and each predicate type corresponds to a different language type in the well-known typology of lexicalization patterns established by Talmy and elaborated by others. O’Connor evaluates the predictive powers of the typology by examining the consequences of each predicate type in a variety of contexts, using data from narrative discourse, stimulus response, and elicitation. This is the first detailed look at the lexical and grammatical resources of the verbal system in Chontal and their relation to semantics of change. The analysis of how and why Chontal speakers choose among these verbal resources to achieve particular communicative and social goals serves both as a documentation of an endangered language and a theoretical contribution towards a typology of language use.
  • Ogdie, M. N., Fisher, S. E., Yang, M., Ishii, J., Francks, C., Loo, S. K., Cantor, R. M., McCracken, J. T., McGough, J. J., Smalley, S. L., & Nelson, S. F. (2004). Attention Deficit Hyperactivity Disorder: Fine mapping supports linkage to 5p13, 6q12, 16p13, and 17p11. American Journal of Human Genetics, 75(4), 661-668. doi:10.1086/424387.

    Abstract

    We completed fine mapping of nine positional candidate regions for attention-deficit/hyperactivity disorder (ADHD) in an extended population sample of 308 affected sibling pairs (ASPs), constituting the largest linkage sample of families with ADHD published to date. The candidate chromosomal regions were selected from all three published genomewide scans for ADHD, and fine mapping was done to comprehensively validate these positional candidate regions in our sample. Multipoint maximum LOD score (MLS) analysis yielded significant evidence of linkage on 6q12 (MLS 3.30; empiric P=.024) and 17p11 (MLS 3.63; empiric P=.015), as well as suggestive evidence on 5p13 (MLS 2.55; empiric P=.091). In conjunction with the previously reported significant linkage on the basis of fine mapping 16p13 in the same sample as this report, the analyses presented here indicate that four chromosomal regions—5p13, 6q12, 16p13, and 17p11—are likely to harbor susceptibility genes for ADHD. The refinement of linkage within each of these regions lays the foundation for subsequent investigations using association methods to detect risk genes of moderate effect size.
  • Orfanidou, E., McQueen, J. M., Adam, R., & Morgan, G. (2015). Segmentation of British Sign Language (BSL): Mind the gap! Quarterly journal of experimental psychology, 68, 641-663. doi:10.1080/17470218.2014.945467.

    Abstract

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
  • Ortega, G., & Morgan, G. (2015). Input processing at first exposure to a sign language. Second Language Research, 19(10), 443-463. doi:10.1177/0267658315576822.

    Abstract

    There is growing interest in learners’ cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back on their L1 to process novel signs because the modality differences between speech (aural–oral) and sign (visual–manual) do not allow for direct cross-linguistic influence. Sign language learners might use alternative strategies to process input expressed in the manual channel. Learners may rely on iconicity, the direct relationship between a sign and its referent. Evidence up to now has shown that iconicity facilitates learning in non-signers, but it is unclear whether it also facilitates sign production. In order to fill this gap, the present study investigated how iconicity influenced articulation of the phonological components of signs. In Study 1, hearing non-signers viewed a set of iconic and arbitrary signs along with their English translations and repeated the signs as accurately as possible immediately after. The results show that participants imitated iconic signs significantly less accurately than arbitrary signs. In Study 2, a second group of hearing non-signers imitated the same set of signs but without the accompanying English translations. The same lower accuracy for iconic signs was observed. We argue that learners rely on iconicity to process manual input because it brings familiarity to the target (sign) language. However, this reliance comes at a cost as it leads to a more superficial processing of the signs’ full phonetic form. The present findings add to our understanding of learners’ cognitive capacities at first exposure to a signed L2, and raise new theoretical questions in the field of second language acquisition.
  • Ortega, G., & Morgan, G. (2015). Phonological development in hearing learners of a sign language: The role of sign complexity and iconicity. Language Learning, 65(3), 660-668. doi:10.1111/lang.12123.

    Abstract

    The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that some sign components are produced more accurately than others: Handshape was the most difficult, followed by movement, then orientation, and finally location. Iconic signs were articulated less accurately than arbitrary signs because the direct sign-referent mappings and perhaps their similarity with iconic co-speech gestures prevented learners from focusing on the exact phonological structure of the sign. This study shows that multiple phonological features pose greater demand on the production of the parameters of signs and that iconicity interferes in the exact articulation of their constituents.
  • Ortega, G., & Morgan, G. (2015). The effect of sign iconicity in the mental lexicon of hearing non-signers and proficient signers: Evidence of cross-modal priming. Language, Cognition and Neuroscience, 30(5), 574-585. doi:10.1080/23273798.2014.959533.

    Abstract

    The present study investigated the priming effect of iconic signs in the mental lexicon of hearing adults. Non-signers and proficient British Sign Language (BSL) users took part in a cross-modal lexical decision task. The results indicate that iconic signs activated semantically related words in non-signers' lexicon. Activation occurred regardless of the type of referent because signs depicting actions and perceptual features of an object yielded the same response times. The pattern of activation was different in proficient signers because only action signs led to cross-modal activation. We suggest that non-signers process iconicity in signs in the same way as they do gestures, but after acquiring a sign language, there is a shift in the mechanisms used to process iconic manual structures.
  • Ozyurek, A., Furman, R., & Goldin-Meadow, S. (2015). On the way to language: Event segmentation in homesign and gesture. Journal of Child Language, 42, 64-94. doi:10.1017/S0305000913000512.

    Abstract

    Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.
  • Peeters, D. (2015). A social and neurobiological approach to pointing in speech and gesture. PhD Thesis, Radboud University, Nijmegen.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.

    Abstract

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
  • Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.

    Abstract

    A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes: Université de Nantes.

    Abstract

    Comprehension of pointing gestures is fundamental to human communication. However, the neural mechanisms that subserve the integration of pointing gestures and speech in visual contexts in comprehension are unclear. Here we present the results of an fMRI study in which participants watched images of an actor pointing at an object while they listened to her referential speech. The use of a mismatch paradigm revealed that the semantic unification of pointing gesture and speech in a triadic context recruits left inferior frontal gyrus. Complementing previous findings, this suggests that left inferior frontal gyrus semantically integrates information across modalities and semiotic domains.
  • Perlman, M., Paul, J., & Lupyan, G. (2015). Congenitally deaf children generate iconic vocalizations to communicate magnitude. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.

    Abstract

    From an early age, people exhibit strong links between certain visual (e.g. size) and acoustic (e.g. duration) dimensions. Do people instinctively extend these crossmodal correspondences to vocalization? We examine the ability of congenitally deaf Chinese children and young adults (age M = 12.4 years, SD = 3.7 years) to generate iconic vocalizations to distinguish items with contrasting magnitude (e.g., big vs. small ball). Both deaf and hearing (M = 10.1 years, SD = 0.83 years) participants produced longer, louder vocalizations for greater magnitude items. However, only hearing participants used pitch (higher pitch for greater magnitude), which counters the hypothesized, innate size “frequency code”, but fits with Mandarin language and culture. Thus, our results show that the translation of visible magnitude into the duration and intensity of vocalization transcends auditory experience, whereas the use of pitch appears more malleable to linguistic and cultural influence.
  • Perlman, M., Clark, N., & Falck, M. J. (2015). Iconic prosody in story reading. Cognitive Science, 39(6), 1348-1368. doi:10.1111/cogs.12190.

    Abstract

    Recent experiments have shown that people iconically modulate their prosody corresponding with the meaning of their utterance (e.g., Shintel et al., 2006). This article reports findings from a story reading task that expands the investigation of iconic prosody to abstract meanings in addition to concrete ones. Participants read stories that contrasted along concrete and abstract semantic dimensions of speed (e.g., a fast drive, slow career progress) and size (e.g., a small grasshopper, an important contract). Participants read fast stories at a faster rate than slow stories, and big stories with a lower pitch than small stories. The effect of speed was distributed across the stories, including portions that were identical across stories, whereas the size effect was localized to size-related words. Overall, these findings enrich the documentation of iconicity in spoken language and bear on our understanding of the relationship between gesture and speech.
  • Perlman, M., Dale, R., & Lupyan, G. (2015). Iconicity can ground the creation of vocal symbols. Royal Society Open Science, 2: 150152. doi:10.1098/rsos.150152.

    Abstract

    Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative ‘vocal’ charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic ‘meaning template’ we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems.
  • Perlman, M., & Clark, N. (2015). Learned vocal and breathing behavior in an enculturated gorilla. Animal Cognition, 18(5), 1165-1179. doi:10.1007/s10071-015-0889-6.

    Abstract

    We describe the repertoire of learned vocal and breathing-related behaviors (VBBs) performed by the enculturated gorilla Koko. We examined a large video corpus of Koko and observed 439 VBBs spread across 161 bouts. Our analysis shows that Koko exercises voluntary control over the performance of nine distinctive VBBs, which involve variable coordination of her breathing, larynx, and supralaryngeal articulators like the tongue and lips. Each of these behaviors is performed in the context of particular manual action routines and gestures. Based on these and other findings, we suggest that vocal learning and the ability to exercise volitional control over vocalization, particularly in a multimodal context, might have figured relatively early into the evolution of language, with some rudimentary capacity in place at the time of our last common ancestor with great apes.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2015). Does space structure spatial language? A comparison of spatial expression across sign languages. Language, 91(3), 611-641.

    Abstract

    The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages has not been rigorously investigated by systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of figure-ground (e.g. cup on table) and figure-figure (e.g. cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e. unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has been previously assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language.
  • Perniss, P. M., Ozyurek, A., & Morgan, G. (2015). The Influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture. Topics in Cognitive Science, 7(1), 2-11. doi:10.1111/tops.12127.

    Abstract

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.
  • Perniss, P. M., Ozyurek, A., & Morgan, G. (Eds.). (2015). The influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture [Special Issue]. Topics in Cognitive Science, 7(1). doi:10.1111/tops.12113.
  • Perniss, P. M., & Ozyurek, A. (2015). Visible cohesion: A comparison of reference tracking in sign, speech, and co-speech gesture. Topics in Cognitive Science, 7(1), 36-60. doi:10.1111/tops.12122.

    Abstract

    Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture.
  • Perry, L. K., Perlman, M., & Lupyan, G. (2015). Iconicity in English and Spanish and its relation to lexical category and age of acquisition. PLoS One, 10(9): e0137147. doi:10.1371/journal.pone.0137147.

    Abstract

    Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages.

  • Perry, L., Perlman, M., & Lupyan, G. (2015). Iconicity in English vocabulary and its relation to toddlers’ word learning. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.

    Abstract

    Scholars have documented substantial classes of iconic vocabulary in many non-Indo-European languages. In comparison, Indo-European languages like English are assumed to be arbitrary outside of a small number of onomatopoeic words. In three experiments, we asked English speakers to rate the iconicity of words from the MacArthur-Bates Communicative Developmental Inventory. We found English—contrary to common belief—exhibits iconicity that correlates with age of acquisition and differs across lexical classes. Words judged as most iconic are learned earlier, in accord with findings that iconic words are easier to learn. We also find that adjectives and verbs are more iconic than nouns, supporting the idea that iconicity provides an extra cue in learning more difficult abstract meanings. Our results provide new evidence for a relationship between iconicity and word learning and suggest iconicity may be a more pervasive property of spoken languages than previously thought.
  • Peter, M., Chang, F., Pine, J. M., Blything, R., & Rowland, C. F. (2015). When and how do children develop knowledge of verb argument structure? Evidence from verb bias effects in a structural priming task. Journal of Memory and Language, 81, 1-15. doi:10.1016/j.jml.2014.12.002.

    Abstract

    In this study, we investigated when children develop adult-like verb–structure links, and examined two mechanisms, associative and error-based learning, that might explain how these verb–structure links are learned. Using structural priming, we tested children’s and adults’ ability to use verb–structure links in production in three ways; by manipulating: (1) verb overlap between prime and target, (2) target verb bias, and (3) prime verb bias. Children (aged 3–4 and 5–6 years old) and adults heard and produced double object dative (DOD) and prepositional object dative (PD) primes with DOD- and PD-biased verbs. Although all age groups showed significant evidence of structural priming, only adults showed increased priming when there was verb overlap between prime and target sentences (the lexical boost). The effect of target verb bias also grew with development. Critically, however, the effect of prime verb bias on the size of the priming effect (prime surprisal) was larger in children than in adults, suggesting that verb–structure links are present at the earliest age tested. Taken as a whole, the results suggest that children begin to acquire knowledge about verb-argument structure preferences early in acquisition, but that the ability to use adult-like verb bias in production gradually improves over development. We also argue that this pattern of results is best explained by a learning model that uses an error-based learning mechanism.
  • Petersson, K. M., Forkstam, C., & Ingvar, M. (2004). Artificial syntactic violations activate Broca’s region. Cognitive Science, 28(3), 383-407. doi:10.1207/s15516709cog2803_4.

    Abstract

    In the present study, using event-related functional magnetic resonance imaging, we investigated a group of participants on a grammaticality classification task after they had been exposed to well-formed consonant strings generated from an artificial regular grammar. We used an implicit acquisition paradigm in which the participants were exposed to positive examples. The objective of this study was to investigate whether brain regions related to language processing overlap with the brain regions activated by the grammaticality classification task used in the present study. Recent meta-analyses of functional neuroimaging studies indicate that syntactic processing is related to the left inferior frontal gyrus (Brodmann's areas 44 and 45) or Broca's region. In the present study, we observed that artificial grammaticality violations activated Broca's region in all participants. This observation lends some support to the suggestions that artificial grammar learning represents a model for investigating aspects of language learning in infants.
  • Petersson, K. M. (2004). The human brain, language, and implicit learning. Impuls, Tidsskrift for psykologi (Norwegian Journal of Psychology), 58(3), 62-72.
  • Petrovic, P., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Brainstem involvement in the initial response to pain. NeuroImage, 22, 995-1005. doi:10.1016/j.neuroimage.2004.01.046.

    Abstract

    The autonomic responses to acute pain exposure usually habituate rapidly while the subjective ratings of pain remain high for more extended periods of time. Thus, systems involved in the autonomic response to painful stimulation, for example the hypothalamus and the brainstem, would be expected to attenuate the response to pain during prolonged stimulation. This suggestion is in line with the hypothesis that the brainstem is specifically involved in the initial response to pain. To probe this hypothesis, we performed a positron emission tomography (PET) study where we scanned subjects during the first and second minute of a prolonged tonic painful cold stimulation (cold pressor test) and nonpainful cold stimulation. Galvanic skin response (GSR) was recorded during the PET scanning as an index of autonomic sympathetic response. In the main effect of pain, we observed increased activity in the thalamus bilaterally, in the contralateral insula and in the contralateral anterior cingulate cortex but no significant increases in activity in the primary or secondary somatosensory cortex. The autonomic response (GSR) decreased with stimulus duration. Concomitant with the autonomic response, increased activity was observed in brainstem and hypothalamus areas during the initial vs. the late stimulation. This effect was significantly stronger for the painful than for the cold stimulation. Activity in the brainstem showed pain-specific covariation with areas involved in pain processing, indicating an interaction between the brainstem and cortical pain networks. The findings indicate that areas in the brainstem are involved in the initial response to noxious stimulation, which is also characterized by an increased sympathetic response.
  • Petrovic, P., Carlsson, K., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Context-dependent deactivation of the amygdala during pain. Journal of Cognitive Neuroscience, 16, 1289-1301.

    Abstract

    The amygdala has been implicated in fundamental functions for the survival of the organism, such as fear and pain. In accord with this, several studies have shown increased amygdala activity during fear conditioning and the processing of fear-relevant material in human subjects. In contrast, functional neuroimaging studies of pain have shown a decreased amygdala activity. It has previously been proposed that the observed deactivations of the amygdala in these studies indicate a cognitive strategy to adapt to a distressful but in the experimental setting unavoidable painful event. In this positron emission tomography study, we show that a simple contextual manipulation, immediately preceding a painful stimulation, that increases the anticipated duration of the painful event leads to a decrease in amygdala activity and modulates the autonomic response during the noxious stimulation. On a behavioral level, 7 of the 10 subjects reported that they used coping strategies more intensely in this context. We suggest that the altered activity in the amygdala may be part of a mechanism to attenuate pain-related stress responses in a context that is perceived as being more aversive. The study also showed an increased activity in the rostral part of anterior cingulate cortex in the same context in which the amygdala activity decreased, further supporting the idea that this part of the cingulate cortex is involved in the modulation of emotional and pain networks.
  • Pettigrew, K. A., Fajutrao Valles, S. F., Moll, K., Northstone, K., Ring, S., Pennell, C., Wang, C., Leavett, R., Hayiou-Thomas, M. E., Thompson, P., Simpson, N. H., Fisher, S. E., The SLI Consortium, Whitehouse, A. J., Snowling, M. J., Newbury, D. F., & Paracchini, S. (2015). Lack of replication for the myosin-18B association with mathematical ability in independent cohorts. Genes, Brain and Behavior, 14(4), 369-376. doi:10.1111/gbb.12213.

    Abstract

    Twin studies indicate that dyscalculia (or mathematical disability) is caused partly by a genetic component, which is yet to be understood at the molecular level. Recently, a coding variant (rs133885) in the myosin-18B gene was shown to be associated with mathematical abilities with a specific effect among children with dyslexia. This association represents one of the most significant genetic associations reported to date for mathematical abilities and the only one reaching genome-wide statistical significance.

    We conducted a replication study in different cohorts to assess the effect of rs133885 on maths-related measures. The study was conducted primarily using the Avon Longitudinal Study of Parents and Children (ALSPAC; N = 3819). We tested additional cohorts including the York Cohort, the Specific Language Impairment Consortium (SLIC) cohort and the Raine Cohort, and stratified them for a definition of dyslexia whenever possible.

    We did not observe any associations between rs133885 in myosin-18B and mathematical abilities among individuals with dyslexia or in the general population. Our results suggest that the myosin-18B variant is unlikely to be a main factor contributing to mathematical abilities.
  • Piai, V., Roelofs, A., Rommers, J., & Maris, E. (2015). Beta oscillations reflect memory and motor aspects of spoken word production. Human brain mapping, 36(7), 2767-2780. doi:10.1002/hbm.22806.

    Abstract

    Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in alpha-beta desynchronization, the memory aspects have remained poorly understood. Using magnetoencephalography, we investigated the neurophysiological signature of not only motor but also memory aspects of spoken-word production. Participants named or judged pictures after reading sentences. To probe the involvement of the memory component, we manipulated sentence context. Sentence contexts were either constraining or nonconstraining toward the final word, presented as a picture. In the judgment task, participants indicated with a left-hand button press whether the picture was expected given the sentence. In the naming task, they named the picture. Naming and judgment were faster with constraining than nonconstraining contexts. Alpha-beta desynchronization was found for constraining relative to nonconstraining contexts pre-picture presentation. For the judgment task, beta desynchronization was observed in left posterior brain areas associated with conceptual processing and in right motor cortex. For the naming task, in addition to the same left posterior brain areas, beta desynchronization was found in left anterior and posterior temporal cortex (associated with memory aspects), left inferior frontal cortex, and bilateral ventral premotor cortex (associated with motor aspects). These results suggest that memory and motor components of spoken word production are reflected in overlapping brain oscillations in the beta band.

    Additional information

    hbm22806-sup-0001-suppinfo1.docx
  • Piai, V., Roelofs, A., & Roete, I. (2015). Semantic interference in picture naming during dual-task performance does not vary with reading ability. Quarterly Journal of Experimental Psychology, 68(9), 1758-68. doi:10.1080/17470218.2014.985689.

    Abstract

    Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may perhaps be due to better reading skill of participants in these than in the other studies. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill.
  • Plomp, G., Hervais-Adelman, A., Astolfi, L., & Michel, C. M. (2015). Early recurrence and ongoing parietal driving during elementary visual processing. Scientific Reports, 5: 18733. doi:10.1038/srep18733.

    Abstract

    Visual stimuli quickly activate a broad network of brain areas that often show reciprocal structural connections between them. Activity at short latencies (<100 ms) is thought to represent a feed-forward activation of widespread cortical areas, but fast activation combined with reciprocal connectivity between areas in principle allows for two-way, recurrent interactions to occur at short latencies after stimulus onset. Here we combined EEG source-imaging and Granger-causal modeling with high temporal resolution to investigate whether recurrent and top-down interactions between visual and attentional brain areas can be identified and distinguished at short latencies in humans. We investigated the directed interactions between widespread occipital, parietal and frontal areas that we localized within participants using fMRI. The connectivity results showed two-way interactions between area MT and V1 already at short latencies. In addition, the results suggested a large role for lateral parietal cortex in coordinating visual activity that may be understood as an ongoing top-down allocation of attentional resources. Our results support the notion that indirect pathways allow early, evoked driving from MT to V1 to highlight spatial locations of motion transients, while influence from parietal areas is continuously exerted around stimulus onset, presumably reflecting task-related attentional processes.
  • Poletiek, F. H., & Stolker, C. J. J. M. (2004). Who decides the worth of an arm and a leg? Assessing the monetary value of nonmonetary damage. In E. Kurz-Milcke, & G. Gigerenzer (Eds.), Experts in science and society (pp. 201-213). New York: Kluwer Academic/Plenum Publishers.
  • St Pourcain, B., Haworth, C. M. A., Davis, O. S. P., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ronald, A., Price, T., Meaburn, E., Ring, S. M., Golding, J., Hakonarson, H., Plomin, R., & Davey Smith, G. (2015). Heritability and genome-wide analyses of problematic peer relationships during childhood and adolescence. Human Genetics, 134(6), 539-551. doi:10.1007/s00439-014-1514-5.

    Abstract

    Peer behaviour plays an important role in the development of social adjustment, though little is known about its genetic architecture. We conducted a twin study combined with a genome-wide complex trait analysis (GCTA) and a genome-wide screen to characterise genetic influences on problematic peer behaviour during childhood and adolescence. This included a series of longitudinal measures (parent-reported Strengths-and-Difficulties Questionnaire) from a UK population-based birth-cohort (ALSPAC, 4–17 years), and a UK twin sample (TEDS, 4–11 years). Longitudinal twin analysis (TEDS; N ≤ 7,366 twin pairs) showed that peer problems in childhood are heritable (4–11 years, 0.60 < twin-h² ≤ 0.71) but genetically heterogeneous from age to age (4–11 years, twin-rg = 0.30). GCTA (ALSPAC: N ≤ 5,608, TEDS: N ≤ 2,691) provided furthermore little support for the contribution of measured common genetic variants during childhood (4–12 years, 0.02 < GCTA-h²(Meta) ≤ 0.11) though these influences become stronger in adolescence (13–17 years, 0.14 < GCTA-h²(ALSPAC) ≤ 0.27). A subsequent cross-sectional genome-wide screen in ALSPAC (N ≤ 6,000) focussed on peer problems with the highest GCTA-heritability (10, 13 and 17 years, 0.0002 < GCTA-P ≤ 0.03). Single variant signals (P ≤ 10⁻⁵) were followed up in TEDS (N ≤ 2,835, 9 and 11 years) and, in search for autism quantitative trait loci, explored within two autism samples (AGRE: N(pedigrees) = 793; ACC: N(cases) = 1,453 / N(controls) = 7,070). There was, however, no evidence for association in TEDS and little evidence for an overlap with the autistic continuum. In summary, our findings suggest that problematic peer relationships are heritable but genetically complex and heterogeneous from age to age, with an increase in common measurable genetic variation during adolescence.
  • Pouw, W., & Looren de Jong, H. (2015). Rethinking situated and embodied social psychology. Theory and Psychology, 25(4), 411-433. doi:10.1177/0959354315585661.

    Abstract

    This article aims to explore the scope of a Situated and Embodied Social Psychology (ESP). At first sight, social cognition seems embodied cognition par excellence. Social cognition is first and foremost a supra-individual, interactive, and dynamic process (Semin & Smith, 2013). Radical approaches in Situated/Embodied Cognitive Science (Enactivism) claim that social cognition consists in an emergent pattern of interaction between a continuously coupled organism and the (social) environment; it rejects representationalist accounts of cognition (Hutto & Myin, 2013). However, mainstream ESP (Barsalou, 1999, 2008) still takes a rather representation-friendly approach that construes embodiment in terms of specific bodily formatted representations used (activated) in social cognition. We argue that mainstream ESP suffers from vestiges of theoretical solipsism, which may be resolved by going beyond the internalistic spirit that haunts mainstream ESP today.
  • Randall, J., Van Hout, A., Weissenborn, J., & Baayen, R. H. (2004). Acquiring unaccusativity: A cross-linguistic look. In A. Alexiadou (Ed.), The unaccusativity puzzle (pp. 332-353). Oxford: Oxford University Press.
  • Ravignani, A., & Sonnweber, R. (2015). Measuring teaching through hormones and time series analysis: Towards a comparative framework. Behavioral and Brain Sciences, 38, 40-41. doi:10.1017/S0140525X14000806.

    Abstract

    In response to: How to learn about teaching: An evolutionary framework for the study of teaching behavior in humans and other animals

    Arguments about the nature of teaching have depended principally on naturalistic observation and some experimental work. Additional measurement tools, and physiological variations and manipulations can provide insights on the intrinsic structure and state of the participants better than verbal descriptions alone: namely, time-series analysis, and examination of the role of hormones and neuromodulators on the behaviors of teacher and pupil.
  • Ravignani, A., Westphal-Fitch, G., Aust, U., Schlumpp, M. M., & Fitch, W. T. (2015). More than one way to see it: Individual heuristics in avian visual computation. Cognition, 143, 13-24. doi:10.1016/j.cognition.2015.05.021.

    Abstract

    Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species’ ability to process pattern classes or different species’ performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds’ choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally.
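The model-selection idea described in this abstract, scoring candidate regular-expression heuristics against an individual's accept/reject choices, can be sketched in miniature. The strings, choices, and heuristic names below are hypothetical illustrations, not the study's stimuli or models:

```python
import re

# One bird's hypothetical accept (1) / reject (0) choices on test strings
trials = [("ABAB", 1), ("ABBA", 0), ("AABB", 1),
          ("BABA", 0), ("ABAA", 1), ("ABBB", 1)]

# Candidate decision heuristics written as regular expressions (illustrative)
heuristics = {
    "starts_with_AB": r"^AB",   # edge-anchored bigram, kea-style in spirit
    "ends_with_AB":   r"AB$",
    "contains_BB":    r"BB",
}

def score(pattern, trials):
    """Fraction of choices reproduced if the bird accepts a string
    exactly when the pattern matches it."""
    hits = sum((re.search(pattern, s) is not None) == bool(c) for s, c in trials)
    return hits / len(trials)

# Model selection: pick the heuristic that best reconstructs the choices
best = max(heuristics, key=lambda name: score(heuristics[name], trials))
```

With richer data one would also penalise heuristic complexity, as any serious model-selection procedure must.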

    Additional information

    Supplementary data
  • Ravignani, A. (2015). Evolving perceptual biases for antisynchrony: A form of temporal coordination beyond synchrony. Frontiers in Neuroscience, 9: 339. doi:10.3389/fnins.2015.00339.
  • Reesink, G. (2004). Interclausal relations. In G. Booij (Ed.), Morphologie / morphology (pp. 1202-1207). Berlin: Mouton de Gruyter.
  • Rietveld, T., Van Hout, R., & Ernestus, M. (2004). Pitfalls in corpus research. Computers and the Humanities, 38(4), 343-362. doi:10.1007/s10579-004-1919-1.

    Abstract

    This paper discusses some pitfalls in corpus research and suggests solutions on the basis of examples and computer simulations. We first address reliability problems in language transcriptions, agreement between transcribers, and how disagreements can be dealt with. We then show that the frequencies of occurrence obtained from a corpus cannot always be analyzed with the traditional χ² test, as corpus data are often not sequentially independent and unit independent. Next, we stress the relevance of the power of statistical tests, and the sizes of statistically significant effects. Finally, we point out that a t-test based on log odds often provides a better alternative to a χ² analysis based on frequency counts.
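The final point, computing log odds per sampling unit and t-testing those rather than pooling counts into one chi-square table, can be sketched as follows. The per-conversation counts are invented for illustration and the smoothing constant is one common choice, not the paper's prescription:

```python
import math

def log_odds(successes, failures, smoothing=0.5):
    """Haldane-Anscombe smoothed log odds for one sampling unit."""
    return math.log((successes + smoothing) / (failures + smoothing))

def welch_t(xs, ys):
    """Welch's two-sample t statistic (unequal variances)."""
    def mean(v): return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    se = math.sqrt(var(xs) / len(xs) + var(ys) / len(ys))
    return (mean(xs) - mean(ys)) / se

# Invented (occurrences, non-occurrences) counts, one pair per conversation:
# each conversation is a sampling unit, so counts are never pooled across units
group_a = [(12, 30), (9, 25), (15, 33), (11, 28)]
group_b = [(5, 40), (7, 36), (4, 45), (6, 38)]

t = welch_t([log_odds(s, f) for s, f in group_a],
            [log_odds(s, f) for s, f in group_b])
```

Because each conversation contributes one log-odds value, within-unit dependence between tokens no longer inflates the test statistic the way it would in a pooled χ² analysis.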
  • Rivero, O., Selten, M. M., Sich, S., Popp, S., Bacmeister, L., Amendola, E., Negwer, M., Schubert, D., Proft, F., Kiser, D., Schmitt, A. G., Gross, C., Kolk, S. M., Strekalova, T., van den Hove, D., Resink, T. J., Nadif Kasri, N., & Lesch, K. P. (2015). Cadherin-13, a risk gene for ADHD and comorbid disorders, impacts GABAergic function in hippocampus and cognition. Translational Psychiatry, 5: e655. doi:10.1038/tp.2015.152.

    Abstract

    Cadherin-13 (CDH13), a unique glycosylphosphatidylinositol-anchored member of the cadherin family of cell adhesion molecules, has been identified as a risk gene for attention-deficit/hyperactivity disorder (ADHD) and various comorbid neurodevelopmental and psychiatric conditions, including depression, substance abuse, autism spectrum disorder and violent behavior, while the mechanism whereby CDH13 dysfunction influences pathogenesis of neuropsychiatric disorders remains elusive. Here we explored the potential role of CDH13 in the inhibitory modulation of brain activity by investigating synaptic function of GABAergic interneurons. Cellular and subcellular distribution of CDH13 was analyzed in the murine hippocampus and a mouse model with a targeted inactivation of Cdh13 was generated to evaluate how CDH13 modulates synaptic activity of hippocampal interneurons and behavioral domains related to psychopathologic (endo)phenotypes. We show that CDH13 expression in the cornu ammonis (CA) region of the hippocampus is confined to distinct classes of interneurons. Specifically, CDH13 is expressed by numerous parvalbumin and somatostatin-expressing interneurons located in the stratum oriens, where it localizes to both the soma and the presynaptic compartment. Cdh13−/− mice show an increase in basal inhibitory, but not excitatory, synaptic transmission in CA1 pyramidal neurons. Associated with these alterations in hippocampal function, Cdh13−/− mice display deficits in learning and memory. Taken together, our results indicate that CDH13 is a negative regulator of inhibitory synapses in the hippocampus, and provide insights into how CDH13 dysfunction may contribute to the excitatory/inhibitory imbalance observed in neurodevelopmental disorders, such as ADHD and autism.
  • Roberts, S. G. (2015). Commentary: Large-scale psychological differences within China explained by rice vs. wheat agriculture. Frontiers in Psychology, 6: 950. doi:10.3389/fpsyg.2015.00950.

    Abstract

    Talhelm et al. (2014) test the hypothesis that activities which require more intensive collaboration foster more collectivist cultures. They demonstrate that a measure of collectivism correlates with the proportion of cultivated land devoted to rice paddies, which require more work to grow and maintain than other grains. The data come from individual measures of provinces in China. While the data is analyzed carefully, one aspect that is not directly controlled for is the historical relations between these provinces. Spurious correlations can occur between cultural traits that are inherited from ancestor cultures or borrowed through contact, what is commonly known as Galton's problem (Roberts and Winters, 2013). Effectively, Talhelm et al. treat the measures of each province as independent samples, while in reality both farming practices (e.g., Renfrew, 1997; Diamond and Bellwood, 2003; Lee and Hasegawa, 2011) and cultural values (e.g., Currie et al., 2010; Bulbulia et al., 2013) can be inherited or borrowed. This means that the data may be composed of non-independent points, inflating the apparent correlation between rice growing and collectivism. The correlation between farming practices and collectivism may be robust, but this cannot be known without an empirical control for the relatedness of the samples. Talhelm et al. do discuss this problem in the supplementary materials of their paper. They acknowledge that a phylogenetic analysis could be used to control for relatedness, but that at the time of publication there were no genetic or linguistic trees of descent which are detailed enough to suit this purpose. In this commentary I would like to make two points. First, since the original publication, researchers have created new linguistic trees that can provide the needed resolution. 
For example, the Glottolog phylogeny (Hammarström et al., 2015) has at least three levels of classification for the relevant varieties, though this does not have branch lengths (see also “reference” trees produced in List et al., 2014). Another recently published phylogeny uses lexical data to construct a phylogenetic tree for many language varieties within China (List et al., 2014). In this commentary I use these lexical data to estimate cultural contact between different provinces, and test whether these measures explain variation in rice farming practices. However, the second point is that Talhelm et al. focus on descent (vertical transmission), while it may be relevant to control for both descent and borrowing (horizontal transmission). In this case, all that is needed is some measure of cultural contact between groups, not necessarily a unified tree of descent. I use a second source of linguistic data to calculate simple distances between languages based directly on the lexicon. These distances reflect borrowing as well as descent.
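A lexicon-based distance of the kind described, a mean length-normalised edit distance over aligned word lists, might look like this in outline. The varieties and word forms below are invented; real analyses use long Swadesh-style concept lists:

```python
# Hypothetical aligned word lists (same concepts in the same order)
# for three invented varieties
wordlists = {
    "VarietyA": ["ngo", "ni", "ta"],
    "VarietyB": ["ngo", "nei", "ta"],
    "VarietyC": ["wo", "ni", "ta"],
}

def levenshtein(a, b):
    """Edit distance via the standard two-row dynamic programme."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lexical_distance(l1, l2):
    """Mean length-normalised edit distance across aligned concepts."""
    return sum(levenshtein(a, b) / max(len(a), len(b))
               for a, b in zip(l1, l2)) / len(l1)
```

Such a distance captures overall lexical similarity, and therefore reflects both shared descent and borrowing through contact.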
  • Roberts, S. G., Winters, J., & Chen, K. (2015). Future tense and economic decisions: Controlling for cultural evolution. PLoS One, 10(7): e0132145. doi:10.1371/journal.pone.0132145.

    Abstract

    A previous study by Chen demonstrates a correlation between languages that grammatically mark future events and their speakers' propensity to save, even after controlling for numerous economic and demographic factors. The implication is that languages which grammatically distinguish the present and the future may bias their speakers to distinguish them psychologically, leading to less future-oriented decision making. However, Chen's original analysis assumed languages are independent. This neglects the fact that languages are related, causing correlations to appear stronger than is warranted (Galton's problem). In this paper, we test the robustness of Chen's correlations to corrections for the geographic and historical relatedness of languages. While the question seems simple, the answer is complex. In general, the statistical correlation between the two variables is weaker when controlling for relatedness. When applying the strictest tests for relatedness, and when data is not aggregated across individuals, the correlation is not significant. However, the correlation did remain reasonably robust under a number of tests. We argue that any claims of synchronic patterns between cultural variables should be tested for spurious correlations, with the kinds of approaches used in this paper. However, experiments or case-studies would be more fruitful avenues for future research on this specific topic, rather than further large-scale cross-cultural correlational studies.
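Galton's problem, the core issue this paper tests for, can be illustrated with one crude control: collapsing related languages to family-level means so each family contributes a single data point. All values below are invented for the sketch and do not come from Chen's data:

```python
from statistics import mean

# Invented data: (language family, strong future-time marking 0/1, savings rate)
data = [
    ("Germanic", 1, 0.08), ("Germanic", 0, 0.14), ("Germanic", 0, 0.13),
    ("Romance", 1, 0.09), ("Romance", 1, 0.10),
    ("Sinitic", 0, 0.21), ("Sinitic", 0, 0.19),
    ("Uralic", 0, 0.16), ("Semitic", 1, 0.07),
]

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Naive analysis: every language treated as an independent sample
r_naive = pearson([ftr for _, ftr, _ in data], [sav for _, _, sav in data])

# Crude relatedness control: one averaged point per family, so that
# sister languages cannot inflate the effective sample size
fams = {}
for fam, ftr, sav in data:
    fams.setdefault(fam, []).append((ftr, sav))
fam_pts = [(mean(f for f, _ in v), mean(s for _, s in v)) for v in fams.values()]
r_family = pearson([f for f, _ in fam_pts], [s for _, s in fam_pts])
```

The paper's actual corrections (mixed models with family and area terms, strict within-family comparisons) are more sophisticated, but the logic is the same: the correlation must survive once relatedness is accounted for.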
  • Roberts, S. G., Everett, C., & Blasi, D. (2015). Exploring potential climate effects on the evolution of human sound systems. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences [ICPhS 2015] Satellite Event: The Evolution of Phonetic Capabilities: Causes constraints, consequences (pp. 14-19). Glasgow: ICPHS.

    Abstract

    We suggest that it is now possible to conduct research on a topic which might be called evolutionary geophonetics. The main question is how the climate influences the evolution of language. This involves biological adaptations to the climate that may affect biases in production and perception; cultural evolutionary adaptations of the sounds of a language to climatic conditions; and influences of the climate on language diversity and contact. We discuss these ideas with special reference to a recent hypothesis that lexical tone is not adaptive in dry climates (Everett, Blasi & Roberts, 2015).
  • Roberts, S. G., Torreira, F., & Levinson, S. C. (2015). The effects of processing and sequence organisation on the timing of turn taking: A corpus study. Frontiers in Psychology, 6: 509. doi:10.3389/fpsyg.2015.00509.

    Abstract

    The timing of turn taking in conversation is extremely rapid given the cognitive demands on speakers to comprehend, plan and execute turns in real time. Findings from psycholinguistics predict that the timing of turn taking is influenced by demands on processing, such as word frequency or syntactic complexity. An alternative view comes from the field of conversation analysis, which predicts that the rules of turn-taking and sequence organization may dictate the variation in gap durations (e.g. the functional role of each turn in communication). In this paper, we estimate the role of these two different kinds of factors in determining the speed of turn-taking in conversation. We use the Switchboard corpus of English telephone conversation, already richly annotated for syntactic structure, speech act sequences, and segmental alignment. To this we add further information including Floor Transfer Offset (the amount of time between the end of one turn and the beginning of the next), word frequency, concreteness, and surprisal values. We then apply a novel statistical framework ('random forests') to show that these two dimensions are interwoven together with indexical properties of the speakers as explanatory factors determining the speed of response. We conclude that an explanation of the timing of turn taking will require insights from both processing and sequence organisation.
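Floor Transfer Offset, as defined in this abstract, is simply the next turn's start time minus the previous turn's end time, with negative values indicating overlap. A minimal sketch (the turn records below are invented, not Switchboard data):

```python
# Hypothetical turn records: (speaker, start_s, end_s), in conversational order
turns = [("A", 0.0, 1.8), ("B", 2.0, 3.5), ("A", 3.4, 5.0)]

def floor_transfer_offsets(turns):
    """FTO = next turn's start minus previous turn's end (seconds).
    Positive = gap before the response; negative = overlapping talk."""
    return [nxt[1] - prev[2] for prev, nxt in zip(turns, turns[1:])]

offsets = floor_transfer_offsets(turns)  # one value per turn transition
```

These per-transition offsets then serve as the outcome variable that the processing and sequence-organisation predictors compete to explain.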
  • Rodenas-Cuadrado, P., Chen, X. S., Wiegrebe, L., Firzlaff, U., & Vernes, S. C. (2015). A novel approach identifies the first transcriptome networks in bats: A new genetic model for vocal communication. BMC Genomics, 16: 836. doi:10.1186/s12864-015-2068-1.

    Abstract

    Background Bats are able to employ an astonishingly complex vocal repertoire for navigating their environment and conveying social information. A handful of species also show evidence for vocal learning, an extremely rare ability shared only with humans and few other animals. However, despite their potential for the study of vocal communication, bats remain severely understudied at a molecular level. To address this fundamental gap we performed the first transcriptome profiling and genetic interrogation of molecular networks in the brain of a highly vocal bat species, Phyllostomus discolor. Results Gene network analysis typically needs large sample sizes for correct clustering; this can be prohibitive where samples are limited, such as in this study. To overcome this, we developed a novel bioinformatics methodology for identifying robust co-expression gene networks using few samples (N=6). Using this approach, we identified tissue-specific functional gene networks from the bat PAG, a brain region fundamental for mammalian vocalisation. The most highly connected network identified represented a cluster of genes involved in glutamatergic synaptic transmission. Glutamatergic receptors play a significant role in vocalisation from the PAG, suggesting that this gene network may be mechanistically important for vocal-motor control in mammals. Conclusion We have developed an innovative approach to cluster co-expressing gene networks and show that it is highly effective in detecting robust functional gene networks with limited sample sizes. Moreover, this work represents the first gene network analysis performed in a bat brain and establishes bats as a novel, tractable model system for understanding the genetics of vocal mammalian communication.
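The core of any co-expression analysis like the one described, correlating genes across samples and linking strongly correlated pairs, can be sketched with toy numbers. Gene names, expression values, and the 0.8 threshold below are all illustrative assumptions, not the paper's method or data:

```python
from itertools import combinations
from statistics import mean

# Toy expression matrix: gene -> expression across N = 6 samples (invented)
expr = {
    "GRIA1":  [5.1, 5.3, 4.9, 5.2, 5.0, 5.4],
    "GRIN2A": [5.0, 5.2, 4.8, 5.3, 4.9, 5.5],
    "ACTB":   [9.1, 9.0, 8.9, 9.2, 8.8, 9.0],
}

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Link gene pairs whose expression profiles correlate strongly across samples;
# connected components of this graph are candidate co-expression modules
edges = [(a, b) for a, b in combinations(expr, 2)
         if abs(pearson(expr[a], expr[b])) > 0.8]
```

With only six samples, pairwise correlations are noisy, which is exactly why the paper develops a methodology for extracting networks that are robust at small N rather than thresholding a single correlation matrix as done here.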
  • Roelofs, A. (2004). Seriality of phonological encoding in naming objects and reading their names. Memory & Cognition, 32(2), 212-222.

    Abstract

    There is a remarkable lack of research bringing together the literatures on oral reading and speaking. As concerns phonological encoding, both models of reading and speaking assume a process of segmental spellout for words, which is followed by serial prosodification in models of speaking (e.g., Levelt, Roelofs, & Meyer, 1999). Thus, a natural place to merge models of reading and speaking would be at the level of segmental spellout. This view predicts similar seriality effects in reading and object naming. Experiment 1 showed that the seriality of encoding inside a syllable revealed in previous studies of speaking is observed for both naming objects and reading their names. Experiment 2 showed that both object naming and reading exhibit the seriality of the encoding of successive syllables previously observed for speaking. Experiment 3 showed that the seriality is also observed when object naming and reading trials are mixed rather than tested separately, as in the first two experiments. These results suggest that a serial phonological encoding mechanism is shared between naming objects and reading their names.
  • Roelofs, A. (2004). The seduced speaker: Modeling of cognitive control. In A. Belz, R. Evans, & P. Piwek (Eds.), Natural language generation (pp. 1-10). Berlin: Springer.

    Abstract

    Although humans are the ultimate “natural language generators”, the area of psycholinguistic modeling has been somewhat underrepresented in recent approaches to Natural Language Generation in computer science. To draw attention to the area and illustrate its potential relevance to Natural Language Generation, I provide an overview of recent work on psycholinguistic modeling of language production together with some key empirical findings, state-of-the-art experimental techniques, and their historical roots. The techniques include analyses of speech-error corpora, chronometric analyses, eyetracking, and neuroimaging.
    The overview is built around the issue of cognitive control in natural language generation, concentrating on the production of single words, which is an essential ingredient of the generation of larger utterances. Most of the work exploited the fact that human speakers are good but not perfect at resisting temptation, which has provided some critical clues about the nature of the underlying system.
  • Roelofs, A. (2004). Error biases in spoken word planning and monitoring by aphasic and nonaphasic speakers: Comment on Rapp and Goldrick (2000). Psychological Review, 111(2), 561-572. doi:10.1037/0033-295X.111.2.561.

    Abstract

    B. Rapp and M. Goldrick (2000) claimed that the lexical and mixed error biases in picture naming by
    aphasic and nonaphasic speakers argue against models that assume a feedforward-only relationship
    between lexical items and their sounds in spoken word production. The author contests this claim by
    showing that a feedforward-only model like WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer,
    1999b) exhibits the error biases in word planning and self-monitoring. Furthermore, it is argued that
    extant feedback accounts of the error biases and relevant chronometric effects are incompatible.
    WEAVER++ simulations with self-monitoring revealed that this model accounts for the chronometric
    data, the error biases, and the influence of the impairment locus in aphasic speakers.
  • Roelofs, A. (2004). Comprehension-based versus production-internal feedback in planning spoken words: A rejoinder to Rapp and Goldrick (2004). Psychological Review, 111(2), 579-580. doi:10.1037/0033-295X.111.2.579.

    Abstract

    WEAVER++ has no backward links in its form-production network and yet is able to explain the lexical
    and mixed error biases and the mixed distractor latency effect. This refutes the claim of B. Rapp and M.
    Goldrick (2000) that these findings specifically support production-internal feedback. Whether their restricted interaction account model can also provide a unified account of the error biases and latency effect remains to be shown.
  • Roelofs, A., & Schiller, N. (2004). Produzieren von Ein- und Mehrwortäusserungen. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 655-658). Göttingen: Vandenhoeck & Ruprecht.
  • Rojas-Berscia, L. M. (2015). Mayna, the lost Kawapanan language. LIAMES, 15, 393-407. Retrieved from http://revistas.iel.unicamp.br/index.php/liames/article/view/4549.

    Abstract

    The origins of the Mayna language, formerly spoken in northwest Peruvian Amazonia, remain a mystery for most scholars. Several discussions of it took place at the end of the 19th century and the beginning of the 20th; however, none arrived at a consensus. Apart from an article by Taylor & Descola (1981) suggesting a relationship with the Jivaroan language family, little to nothing has been said about it during the last half of the 20th century and the last decades. In the present article, a summary of the principal accounts of the language and its people between the 19th and the 20th century will be given, followed by a corpus analysis in which the materials available in Mayna and Kawapanan, mainly prayers collected by Hervás (1787) and Teza (1868), will be analysed and compared for the first time in light of recent analyses in the new-born field called Kawapanan linguistics (Barraza de García 2005a,b; Valenzuela-Bismarck 2011a,b; Valenzuela 2013; Rojas-Berscia 2013, 2014; Madalengoitia-Barúa 2013; Farfán-Reto 2012), in order to test its affiliation to the Kawapanan language family, as claimed by Beuchat & Rivet (1909), and to account for its place in the dialectology of this language family.
  • Rojas-Berscia, L. M., & Ghavami Dicker, S. (2015). Teonimia en el Alto Amazonas, el caso de Kanpunama. Escritura y Pensamiento, 18(36), 117-146.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.

    Abstract

    During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing.
  • Romøren, A. S. H., & Chen, A. (2015). Quiet is the new loud: Pausing and focus in child and adult Dutch. Language and Speech, 58, 8-23. doi:10.1177/0023830914563589.

    Abstract

    In a number of languages, prosody is used to highlight new information (or focus). In Dutch, focus is marked by accentuation, whereby focal constituents are accented and post-focal constituents are de-accented. Even if pausing is not traditionally seen as a cue to focus in Dutch, several previous studies have pointed to a possible relationship between pausing and information structure. Considering that Dutch-speaking 4 to 5 year olds are not yet completely proficient in using accentuation for focus and that children generally pause more than adults, we asked whether pausing might be an available parameter for children to manipulate for focus. Sentences with varying focus structure were elicited from 10 Dutch-speaking 4 to 5 year olds and 9 Dutch-speaking adults by means of a picture-matching game. Comparing pause durations before focal and non-focal targets showed pre-target pauses to be significantly longer when the targets were focal than when they were not. Notably, the use of pausing was more robust in the children than in the adults, suggesting that children exploit pausing to mark focus more generally than adults do, at a stage where their mastery of the canonical cues to focus is still developing.

  • Rossano, F. (2004). Per una semiotica dell'interazione: Analisi del rapporto tra sguardo, corpo e parola in alcune interazione faccia a faccia. Master Thesis, Università di Bologna, Bologna, Italy.
  • Rossi, G. (2015). Other-initiated repair in Italian. Open Linguistics, 1(1), 256-282. doi:10.1515/opli-2015-0002.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video recorded conversation in the Italian language (Romance). The article reports findings specific to the Italian language from the comparative project that is the topic of this special issue. While giving an overview of all the major practices for other-initiation of repair found in this language, special attention is given to (i) the functional distinctions between different open strategies (interjection, question words, formulaic), and (ii) the role of intonation in discriminating alternative restricted strategies, with a focus on different contour types used to produce repetitions.
  • Rossi, G. (2015). Responding to pre-requests: The organization of hai x ‘do you have x’ sequences in Italian. Journal of Pragmatics, 82, 5-22. doi:10.1016/j.pragma.2015.03.008.

    Abstract

    Among the strategies used by people to request others to do things, there is a particular family defined as pre-requests. The typical function of a pre-request is to check whether some precondition obtains for a request to be successfully made. A form like the Italian interrogative hai x ‘do you have x’, for example, is used to ask if an object is available — a requirement for the object to be transferred or manipulated. But what does it mean exactly to make a pre-request? What difference does it make compared to issuing a request proper? In this article, I address these questions by examining the use of hai x ‘do you have x’ interrogatives in a corpus of informal Italian interaction. Drawing on methods from conversation analysis and linguistics, I show that the status of hai x as a pre-request is reflected in particular properties in the domains of preference and sequence organisation, specifically in the design of blocking responses to the pre-request, and in the use of go-ahead responses, which lead to the expansion of the request sequence. This study contributes to current research on requesting as well as on sequence organisation by demonstrating the response affordances of pre-requests and by furthering our understanding of the processes of sequence expansion.
  • Rossi, G. (2015). The request system in Italian interaction. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    People across the world make requests every day. We constantly rely on others to get by in the small and big practicalities of everyday life, be it getting the salt, moving a sofa, or cooking a meal. It has long been noticed that when we ask others for help we use a wide range of forms drawing on various resources afforded by our language and body. To get another to pass the salt, for example, we may say ‘Pass the salt’, or ask ‘Can you pass me the salt?’, or simply point to the salt. What do different forms of requesting give us? The short answer is that they allow us to manage different social relations. But what kind of relations? While prior research has mostly emphasised the role of long-term asymmetries like people’s social distance and relative power, this thesis puts at centre stage social relations and dimensions emerging in the moment-by-moment flow of everyday interaction. These include how easy or hard the action requested is to anticipate for the requestee, whether the action requested contributes to a joint project or serves an individual one, whether the requestee may be unwilling to do it, and how obvious or equivocal it is that a certain person or another should be involved in the action. The study focuses on requests made in everyday informal interactions among speakers of Italian. It involves over 500 instances of requests sampled from a diverse corpus of video recordings, and draws on methods from conversation analysis, linguistics and multimodal analysis. A qualitative analysis of the data is supported by quantitative measures of the distribution of linguistic and interactional features, and by the use of inferential statistics to test the generalizability of some of the patterns observed. The thesis aims to contribute to our understanding of both language and social interaction by showing that forms of requesting constitute a system, organised by a set of recurrent social-interactional concerns.

  • Rowbotham, S., Lloyd, D. M., Holler, J., & Wearden, A. (2015). Externalizing the private experience of pain: A role for co-speech gestures in pain communication? Health Communication, 30(1), 70-80. doi:10.1080/10410236.2013.836070.

    Abstract

    Despite the importance of effective pain communication, talking about pain represents a major challenge for patients and clinicians because pain is a private and subjective experience. Focusing primarily on acute pain, this article considers the limitations of current methods of obtaining information about the sensory characteristics of pain and suggests that spontaneously produced “co-speech hand gestures” may constitute an important source of information here. Although this is a relatively new area of research, we present recent empirical evidence that reveals that co-speech gestures contain important information about pain that can both add to and clarify speech. Following this, we discuss how these findings might eventually lead to a greater understanding of the sensory characteristics of pain, and to improvements in treatment and support for pain sufferers. We hope that this article will stimulate further research and discussion of this previously overlooked dimension of pain communication.
  • Rowland, C. F., & Peter, M. (2015). Up to speed? Nursery World Magazine, 15-28 June 2015, 18-20.
  • Rubio-Fernández, P., Wearing, C., & Carston, R. (2015). Metaphor and hyperbole: Testing the continuity hypothesis. Metaphor and Symbol, 30(1), 24-40. doi:10.1080/10926488.2015.980699.

    Abstract

    In standard Relevance Theory, hyperbole and metaphor are categorized together as loose uses of language, on a continuum with approximations, category extensions and other cases of loosening/broadening of meaning. Specifically, it is claimed that there are no interesting differences (in either interpretation or processing) between hyperbolic and metaphorical uses (Sperber & Wilson, 2008). In recent work, we have set out to provide a more fine-grained articulation of the similarities and differences between hyperbolic and metaphorical uses and their relation to literal uses (Carston & Wearing, 2011, 2014). We have defended the view that hyperbolic use involves a shift of magnitude along a dimension which is intrinsic to the encoded meaning of the hyperbole vehicle, while metaphor involves a multi-dimensional qualitative shift away from the encoded meaning of the metaphor vehicle. In this article, we present three experiments designed to test the predictions of this analysis, using a variety of tasks (paraphrase elicitation, self-paced reading and sentence verification). The results of the study support the view that hyperbolic and metaphorical interpretations, despite their commonalities as loose uses of language, are significantly different.
  • De Ruiter, J. P. (2004). On the primacy of language in multimodal communication. In Workshop Proceedings on Multimodal Corpora: Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces.(LREC2004) (pp. 38-41). Paris: ELRA - European Language Resources Association (CD-ROM).

    Abstract

    In this paper, I will argue that although the study of multimodal interaction offers exciting new prospects for Human Computer Interaction and human-human communication research, language is the primary form of communication, even in multimodal systems. I will support this claim with theoretical and empirical arguments, mainly drawn from human-human communication research, and will discuss the implications for multimodal communication research and Human-Computer Interaction.
  • De Ruiter, L. E. (2015). Information status marking in spontaneous vs. read speech in story-telling tasks – Evidence from intonation analysis using GToBI. Journal of Phonetics, 48, 29-44. doi:10.1016/j.wocn.2014.10.008.

    Abstract

    Two studies investigated whether speaking mode influences the way German speakers mark the information status of discourse referents in nuclear position. In Study 1, speakers produced narrations spontaneously on the basis of picture stories in which the information status of referents (new, accessible and given) was systematically varied. In Study 2, speakers saw the same pictures, but this time accompanied by text to be read out. Clear differences were found depending on speaking mode: In spontaneous speech, speakers always accented new referents. They did not use different pitch accent types to differentiate between new and accessible referents, nor did they always deaccent given referents. In addition, speakers often made use of low pitch accents in combination with high boundary tones to indicate continuity. In contrast to this, read speech was characterized by low boundary tones, consistent deaccentuation of given referents and the use of H+L* and H+!H* accents, for both new and accessible referents. The results are discussed in terms of the function of intonational features in communication. It is argued that reading intonation is not comparable to intonation in spontaneous speech, and that this has important consequences also for our choice of methodology in child language acquisition research.
  • De Ruiter, J. P. (2004). Response systems and signals of recipiency. In A. Majid (Ed.), Field Manual Volume 9 (pp. 53-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506961.

    Abstract

    Listeners’ signals of recipiency, such as “Mm-hm” or “uh-huh” in English, are the most elementary or minimal “conversational turns” possible. Minimal, because apart from acknowledging recipiency and inviting the speaker to continue with his/her next turn, they do not add any new information to the discourse of the conversation. The goal of this project is to gather cross cultural information on listeners’ feedback behaviour during conversation. Listeners in a conversation usually provide short signals that indicate to the speaker that they are still “with the speaker”. These signals could be verbal (like for instance “mm hm” in English or “hm hm” in Dutch) or nonverbal (visual), like nodding. Often, these signals are produced in overlap with the speaker’s vocalisation. If listeners do not produce these signals, speakers often invite them explicitly (e.g. “are you still there?” in a telephone conversation). Our goal is to investigate what kind of signals are used by listeners of different languages to signal “recipiency” to the speaker.
  • Russel, A., & Trilsbeek, P. (2004). ELAN Audio Playback. Language Archive Newsletter, 1(4), 12-13.
  • Russel, A., & Wittenburg, P. (2004). ELAN Native Media Handling. Language Archive Newsletter, 1(3), 12-12.
  • Sach, M., Seitz, R. J., & Indefrey, P. (2004). Unified inflectional processing of regular and irregular verbs: A PET study. NeuroReport, 15(3), 533-537. doi:10.1097/01.wnr.0000113529.32218.92.

    Abstract

    Psycholinguistic theories propose different models of inflectional processing of regular and irregular verbs: dual mechanism models assume separate modules with lexical frequency sensitivity for irregular verbs. In contradistinction, connectionist models propose a unified process in a single module. We conducted a PET study using a 2 x 2 design with verb regularity and frequency. We found significantly shorter voice onset times for regular verbs and high frequency verbs irrespective of regularity. The PET data showed activations in inferior frontal gyrus (BA 45), nucleus lentiformis, thalamus, and superior medial cerebellum for both regular and irregular verbs but no dissociation for verb regularity. Our results support common processing components for regular and irregular verb inflection.
  • Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2015). Emotional context modulates embodied metaphor comprehension. Neuropsychologia, 78, 108-114. doi:10.1016/j.neuropsychologia.2015.10.003.

    Abstract

    Emotions are often expressed metaphorically, and both emotion and metaphor are ways through which abstract meaning can be grounded in language. Here we investigate specifically whether motion-related verbs when used metaphorically are differentially sensitive to a preceding emotional context, as compared to when they are used in a literal manner. Participants read stories that ended with ambiguous action/motion sentences (e.g., he got it), in which the action/motion could be interpreted metaphorically (he understood the idea) or literally (he caught the ball) depending on the preceding story. Orthogonal to the metaphorical manipulation, the stories were high or low in emotional content. The results showed that emotional context modulated the neural response in visual motion areas to the metaphorical interpretation of the sentences, but not to their literal interpretations. In addition, literal interpretations of the target sentences led to stronger activation in the visual motion areas as compared to metaphorical readings of the sentences. We interpret our results as suggesting that emotional context specifically modulates mental simulation during metaphor processing.
  • San Roque, L., & Bergqvist, H. (Eds.). (2015). Epistemic marking in typological perspective [Special Issue]. STUF - Language Typology and Universals, 68(2).
  • San Roque, L. (2015). Using you to get to me: Addressee perspective and speaker stance in Duna evidential marking. STUF: Language typology and universals, 68(2), 187-210. doi:10.1515/stuf-2015-0010.

    Abstract

    Languages have complex and varied means for representing points of view, including constructions that can express multiple perspectives on the same event. This paper presents data on two evidential constructions in the language Duna (Papua New Guinea) that imply features of both speaker and addressee knowledge simultaneously. I discuss how talking about an addressee’s knowledge can occur in contexts of both coercion and co-operation, and, while apparently empathetic, can provide a covert way to both manipulate the addressee’s attention and express speaker stance. I speculate that ultimately, however, these multiple perspective constructions may play a pro-social role in building or repairing the interlocutors’ common ground.
  • San Roque, L., Kendrick, K. H., Norcliffe, E., Brown, P., Defina, R., Dingemanse, M., Dirksmeyer, T., Enfield, N. J., Floyd, S., Hammond, J., Rossi, G., Tufvesson, S., Van Putten, S., & Majid, A. (2015). Vision verbs dominate in conversation across cultures, but the ranking of non-visual verbs varies. Cognitive Linguistics, 26, 31-60. doi:10.1515/cog-2014-0089.

    Abstract

    To what extent does perceptual language reflect universals of experience and cognition, and to what extent is it shaped by particular cultural preoccupations? This paper investigates the universality~relativity of perceptual language by examining the use of basic perception terms in spontaneous conversation across 13 diverse languages and cultures. We analyze the frequency of perception words to test two universalist hypotheses: that sight is always a dominant sense, and that the relative ranking of the senses will be the same across different cultures. We find that references to sight outstrip references to the other senses, suggesting a pan-human preoccupation with visual phenomena. However, the relative frequency of the other senses was found to vary cross-linguistically. Cultural relativity was conspicuous as exemplified by the high ranking of smell in Semai, an Aslian language. Together these results suggest a place for both universal constraints and cultural shaping of the language of perception.
  • Sauter, D., Scott, S., & Calder, A. (2004). Categorisation of vocally expressed positive emotion: A first step towards basic positive emotions? [Abstract]. Proceedings of the British Psychological Society, 12, 111.

    Abstract

    Most of the study of basic emotion expressions has focused on facial expressions, and little work has been done to specifically investigate happiness, the only positive one of the basic emotions (Ekman & Friesen, 1971). However, a theoretical suggestion has been made that happiness could be broken down into discrete positive emotions, which each fulfil the criteria of basic emotions, and that these would be expressed vocally (Ekman, 1992). To empirically test this hypothesis, 20 participants categorised 80 paralinguistic sounds using the labels achievement, amusement, contentment, pleasure and relief. The results suggest that achievement, amusement and relief are perceived as distinct categories, which subjects accurately identify. In contrast, the categories of contentment and pleasure were systematically confused with other responses, although performance was still well above chance levels. These findings are initial evidence that the positive emotions engage distinct vocal expressions and may be considered to be distinct emotion categories.
  • Scerri, T. S., Fisher, S. E., Francks, C., MacPhie, I. L., Paracchini, S., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2004). Putative functional alleles of DYX1C1 are not associated with dyslexia susceptibility in a large sample of sibling pairs from the UK [Letter to JMG]. Journal of Medical Genetics, 41(11), 853-857. doi:10.1136/jmg.2004.018341.
  • Schaefer, M., Haun, D. B., & Tomasello, M. (2015). Fair is not fair everywhere. Psychological Science, 26(8), 1252-1260. doi:10.1177/0956797615586188.

    Abstract

    Distributing the spoils of a joint enterprise on the basis of work contribution or relative productivity seems natural to the modern Western mind. But such notions of merit-based distributive justice may be culturally constructed norms that vary with the social and economic structure of a group. In the present research, we showed that children from three different cultures have very different ideas about distributive justice. Whereas children from a modern Western society distributed the spoils of a joint enterprise precisely in proportion to productivity, children from a gerontocratic pastoralist society in Africa did not take merit into account at all. Children from a partially hunter-gatherer, egalitarian African culture distributed the spoils more equally than did the other two cultures, with merit playing only a limited role. This pattern of results suggests that some basic notions of distributive justice are not universal intuitions of the human species but rather culturally constructed behavioral norms.