-
Onnink, A. M. H., Zwiers, M. P., Hoogman, M., Mostert, J. C., Kan, C. C., Buitelaar, J., & Franke, B. (2014). Brain alterations in adult ADHD: Effects of gender, treatment and comorbid depression. European Neuropsychopharmacology, 24(3), 397-409. doi:10.1016/j.euroneuro.2013.11.011.
Abstract
Children with attention-deficit/hyperactivity disorder (ADHD) have smaller volumes of total brain matter and subcortical regions, but it is unclear whether these represent delayed maturation or persist into adulthood. We performed a structural MRI study in 119 adult ADHD patients and 107 controls and investigated total gray and white matter and volumes of accumbens, caudate, globus pallidus, putamen, thalamus, amygdala and hippocampus. Additionally, we investigated effects of gender, stimulant treatment and history of major depression (MDD). There was no main effect of ADHD on the volumetric measures, nor was any effect observed in a secondary voxel-based morphometry (VBM) analysis of the entire brain. However, in the volumetric analysis a significant gender by diagnosis interaction was found for caudate volume. Male patients showed reduced right caudate volume compared to male controls, and caudate volume correlated with hyperactive/impulsive symptoms. Furthermore, patients using stimulant treatment had a smaller right hippocampus volume compared to medication-naïve patients and controls. ADHD patients with previous MDD showed smaller hippocampus volume compared to ADHD patients with no MDD. While these data were obtained in a cross-sectional sample and need to be replicated in a longitudinal study, the findings suggest that developmental brain differences in ADHD largely normalize in adulthood. Reduced caudate volume in male patients may point to distinct neurobiological deficits underlying ADHD in the two genders. Smaller hippocampus volume in ADHD patients with previous MDD is consistent with neurobiological alterations observed in MDD.
-
Ortega, G. (2013). Acquisition of a signed phonological system by hearing adults: The role of sign structure and iconicity. PhD Thesis, University College London, London.
Abstract
The phonological system of a sign language comprises meaningless sub-lexical units that define the structure of a sign. A number of studies have examined how learners of a sign language as a first language (L1) acquire these components. However, little is understood about the mechanism by which hearing adults develop visual phonological categories when learning a sign language as a second language (L2). Developmental studies have shown that sign complexity and iconicity, the clear mapping between the form of a sign and its referent, shape in different ways the order of emergence of a visual phonology. The aim of the present dissertation was to investigate how these two factors affect the development of a visual phonology in hearing adults learning a sign language as L2. The empirical data gathered in this dissertation confirm that sign structure and iconicity are important factors that determine L2 phonological development. Non-signers perform better at discriminating the contrastive features of phonologically simple signs than signs with multiple elements. Handshape was the parameter most difficult to learn, followed by movement, then orientation and finally location, which is the same order of acquisition reported in L1 sign acquisition. In addition, the ability to access the iconic properties of signs had a detrimental effect on phonological development because iconic signs were consistently articulated less accurately than arbitrary signs. Participants tended to retain the iconic elements of signs but disregarded their exact phonetic structure. Further, non-signers appeared to process iconic signs as iconic gestures at least at the early stages of sign language acquisition. The empirical data presented in this dissertation suggest that non-signers exploit their gestural system as scaffolding of the new manual linguistic system and that sign L2 phonological development is strongly influenced by the structural complexity of a sign and its degree of iconicity. -
Ortega, G. (2014). Acquisition of a signed phonological system by hearing adults: The role of sign structure and iconicity. Sign Language and Linguistics, 17, 267-275. doi:10.1075/sll.17.2.09ort.
-
Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].
Abstract
Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, although non-signers find iconic signs more memorable, they have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately. -
Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, TX: Cognitive Science Society.
Abstract
Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children’s lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate learning sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects’ perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input. -
Osswald, R., & Van Valin Jr., R. D. (2013). FrameNet, frame structure and the syntax-semantics interface. In T. Gamerschlag, D. Gerland, R. Osswald, & W. Petersen (Eds.), Frames and concept types: Applications in language and philosophy. Heidelberg: Springer. -
Otake, T., & Cutler, A. (2013). Lexical selection in action: Evidence from spontaneous punning. Language and Speech, 56(4), 555-573. doi:10.1177/0023830913478933.
Abstract
Analysis of a corpus of spontaneously produced Japanese puns from a single speaker over a two-year period provides a view of how a punster selects a source word for a pun and transforms it into another word for humorous effect. The pun-making process is driven by a principle of similarity: the source word should as far as possible be preserved (in terms of segmental sequence) in the pun. This renders homophones (English example: band–banned) the pun type of choice, with part–whole relationships of embedding (cap–capture), and mutations of the source word (peas–bees) rather less favored. Similarity also governs mutations in that single-phoneme substitutions outnumber larger changes, and in phoneme substitutions, subphonemic features tend to be preserved. The process of spontaneous punning thus applies, on line, the same similarity criteria as govern explicit similarity judgments and offline decisions about pun success (e.g., for inclusion in published collections). Finally, the process of spoken-word recognition is word-play-friendly in that it involves multiple word-form activation and competition, which, coupled with known techniques in use in difficult listening conditions, enables listeners to generate most pun types as offshoots of normal listening procedures. -
Ozturk, O., Shayan, S., Liszkowski, U., & Majid, A. (2013). Language is not necessary for color categories. Developmental Science, 16, 111-115. doi:10.1111/desc.12008.
Abstract
The origin of color categories is under debate. Some researchers argue that color categories are linguistically constructed, while others claim they have a pre-linguistic, and possibly even innate, basis. Although there is some evidence that 4–6-month-old infants respond categorically to color, these empirical results have been challenged in recent years. First, it has been claimed that previous demonstrations of color categories in infants may reflect color preferences instead. Second, and more seriously, other labs have reported failing to replicate the basic findings at all. In the current study we used eye-tracking to test 8-month-old infants’ categorical perception of a previously attested color boundary (green–blue) and an additional color boundary (blue–purple). Our results show that infants are faster and more accurate at fixating targets when they come from a different color category than when from the same category (even though the chromatic separation sizes were equated). This is the case for both blue–green and blue–purple. Our findings provide independent evidence for the existence of color categories in pre-linguistic infants, and suggest that categorical perception of color can occur without color language. -
Ozyurek, A. (2014). Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130296. doi:10.1098/rstb.2013.0296.
Abstract
As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures are discussed, as well as the implications for a multimodal view of language. -
Pacheco, A., Araújo, S., Faísca, L., de Castro, S. L., Petersson, K. M., & Reis, A. (2014). Dyslexia's heterogeneity: Cognitive profiling of Portuguese children with dyslexia. Reading and Writing, 27(9), 1529-1545. doi:10.1007/s11145-014-9504-5.
Abstract
Recent studies have emphasized that developmental dyslexia is a multiple-deficit disorder, in contrast to the traditional single-deficit view. In this context, cognitive profiling of children with dyslexia may be a relevant contribution to this unresolved discussion. The aim of this study was to profile 36 Portuguese children with dyslexia from the 2nd to 5th grade. Hierarchical cluster analysis was used to group participants according to their phonological awareness, rapid automatized naming, verbal short-term memory, vocabulary, and nonverbal intelligence abilities. The results suggested a two-cluster solution: a group with poorer performance on phoneme deletion and rapid automatized naming compared with the remaining variables (Cluster 1) and a group characterized by underperforming on the variables most related to phonological processing (phoneme deletion and digit span), but not on rapid automatized naming (Cluster 2). Overall, the results seem more consistent with a hybrid perspective, such as that proposed by Pennington and colleagues (2012), for understanding the heterogeneity of dyslexia. The importance of characterizing the profiles of individuals with dyslexia becomes clear within the context of constructing remediation programs that are specifically targeted and are more effective in terms of intervention outcome.
Additional information
11145_2014_9504_MOESM1_ESM.doc -
Payne, B. R., Grison, S., Gao, X., Christianson, K., Morrow, D. G., & Stine-Morrow, E. A. L. (2014). Aging and individual differences in binding during sentence understanding: Evidence from temporary and global syntactic attachment ambiguities. Cognition, 130(2), 157-173. doi:10.1016/j.cognition.2013.10.005.
Abstract
We report an investigation of aging and individual differences in binding information during sentence understanding. An age-continuous sample of adults (N=91), ranging from 18 to 81 years of age, read sentences in which a relative clause could be attached high to a head noun NP1, attached low to its modifying prepositional phrase NP2 (e.g., The son of the princess who scratched himself/herself in public was humiliated), or in which the attachment site of the relative clause was ultimately indeterminate (e.g., The maid of the princess who scratched herself in public was humiliated). Word-by-word reading times and comprehension (e.g., who scratched?) were measured. A series of mixed-effects models were fit to the data, revealing: (1) that, on average, NP1-attached sentences were harder to process and comprehend than NP2-attached sentences; (2) that these average effects were independently moderated by verbal working memory capacity and reading experience, with effects that were most pronounced in the oldest participants; and (3) that readers on average did not allocate extra time to resolve global ambiguities, though older adults with higher working memory span did. Findings are discussed in relation to current models of lifespan cognitive development, working memory, language experience, and the role of prosodic segmentation strategies in reading. Collectively, these data suggest that aging brings differences in sentence understanding, and these differences may depend on independent influences of verbal working memory capacity and reading experience.
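The mixed-effects analyses mentioned in this abstract can be sketched roughly as follows. This is an illustrative, hypothetical example only, not the authors' code or data: the file reading_times.csv and the columns subject, rt, attachment, age and wm_span are invented, and statsmodels is used simply as one package that fits such models.

# Hypothetical sketch of a mixed-effects model of word-by-word reading times,
# with a random intercept per participant and fixed effects for attachment
# condition, age, working-memory span, and their interactions.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("reading_times.csv")   # invented file: one row per trial/region

model = smf.mixedlm(
    "rt ~ attachment * age + attachment * wm_span",   # fixed effects
    data=data,
    groups=data["subject"],                           # grouping factor: participant
    re_formula="~1",                                  # random intercept per participant
)
result = model.fit()
print(result.summary())

Studies of this kind typically also include crossed random effects for items, which needs a dedicated mixed-model package; the sketch keeps only the by-participant structure for brevity.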
-
Peeters, D., Runnqvist, E., Bertrand, D., & Grainger, J. (2014). Asymmetrical switch costs in bilingual language production induced by reading words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 284-292. doi:10.1037/a0034060.
Abstract
We examined language-switching effects in French–English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language decision on the word or categorize it as an animal name or not. Picture-naming latencies in French (Language 1 [L1]) were slower when pictures were preceded by an English word than by a French word, independently of the task performed on the word. There were no language-switching effects when pictures were named in English (L2). This pattern replicates asymmetrical switch costs found with the cued picture-naming paradigm and shows that the asymmetrical pattern can be obtained (a) in the absence of artificial (nonlinguistic) language cues, (b) when the switch involves a shift from comprehension in 1 language to production in another, and (c) when the naming language is blocked (univalent response). We concluded that language switch costs in bilinguals cannot be reduced to effects driven by task control or response-selection mechanisms. -
Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.
Abstract
In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
Additional information
http://mindmodeling.org/cogsci2013/papers/0219/index.html -
Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.
Abstract
Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed. -
Peeters, D., & Dresler, M. (2014). The scientific significance of sleep-talking. Frontiers for Young Minds, 2(9). Retrieved from http://kids.frontiersin.org/articles/24/the_scientific_significance_of_sleep_talking/.
Abstract
Did one of your parents, siblings, or friends ever tell you that you were talking in your sleep? Nothing to be ashamed of! A recent study found that more than half of all people have had the experience of speaking out loud while being asleep [1]. This might even be underestimated, because often people do not notice that they are sleep-talking, unless somebody wakes them up or tells them the next day. Most neuroscientists, linguists, and psychologists studying language are interested in our language production and language comprehension skills during the day. In the present article, we will explore what is known about the production of overt speech during the night. We suggest that the study of sleep-talking may be just as interesting and informative as the study of wakeful speech. -
Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, TX: Cognitive Science Society. -
Perlman, M., Clark, N., & Tanner, J. (2014). Iconicity and ape gesture. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 236-243). New Jersey: World Scientific.
Abstract
Iconic gestures are hypothesized to be crucial to the evolution of language. Yet the important question of whether apes produce iconic gestures is the subject of considerable debate. This paper presents the current state of research on iconicity in ape gesture. In particular, it describes some of the empirical evidence suggesting that apes produce three different kinds of iconic gestures; it compares the iconicity hypothesis to other major hypotheses of ape gesture; and finally, it offers some directions for future ape gesture research. -
Perlman, M., & Cain, A. A. (2014). Iconicity in vocalization, comparisons with gesture, and implications for theories on the evolution of language. Gesture, 14(3), 320-350. doi:10.1075/gest.14.3.03per.
Abstract
Scholars have often reasoned that vocalizations are extremely limited in their potential for iconic expression, especially in comparison to manual gestures (e.g., Armstrong & Wilcox, 2007; Tomasello, 2008). As evidence for an alternative view, we first review the growing body of research related to iconicity in vocalizations, including experimental work on sound symbolism, cross-linguistic studies documenting iconicity in the grammars and lexicons of languages, and experimental studies that examine iconicity in the production of speech and vocalizations. We then report an experiment in which participants created vocalizations to communicate 60 different meanings, including 30 antonymic pairs. The vocalizations were measured along several acoustic properties, and these properties were compared between antonyms. Participants were highly consistent in the kinds of sounds they produced for the majority of meanings, supporting the hypothesis that vocalization has considerable potential for iconicity. In light of these findings, we present a comparison between vocalization and manual gesture, and examine the detailed ways in which each modality can function in the iconic expression of particular kinds of meanings. We further discuss the role of iconic vocalizations and gesture in the evolution of language since our divergence from the great apes. In conclusion, we suggest that human communication is best understood as an ensemble of kinesis and vocalization, not just speech, in which expression in both modalities spans the range from arbitrary to iconic. -
Perlman, M., & Gibbs, R. W. (2013). Pantomimic gestures reveal the sensorimotor imagery of a human-fostered gorilla. Journal of Mental Imagery, 37(3/4), 73-96.
Abstract
This article describes the use of pantomimic gestures by the human-fostered gorilla, Koko, as evidence of her sensorimotor imagery. We present five video recorded instances of Koko's spontaneously created pantomimes during her interactions with human caregivers. The precise movements and context of each gesture are described in detail to examine how it functions to communicate Koko's requests for various objects and actions to be performed. The analysis assesses the active "iconicity" of each targeted gesture and examines the underlying elements of sensorimotor imagery that are incorporated by the gesture. We suggest that Koko's pantomimes reflect an imaginative understanding of different actions, objects, and events that is similar in important respects to humans' embodied imagery capabilities. -
Petzell, M., & Hammarström, H. (2013). Grammatical and lexical subclassification of the Morogoro region, Tanzania. Nordic Journal of African Studies, 22(3), 129-157.
Abstract
This article discusses lexical and grammatical comparison and sub-grouping in a set of closely related Bantu language varieties in the Morogoro region, Tanzania. The Greater Ruvu Bantu language varieties include Kagulu [G12], Zigua [G31], Kwere [G32], Zalamo [G33], Nguu [G34], Luguru [G35], Kami [G36] and Kutu [G37]. The comparison is based on 27 morphophonological and morphosyntactic parameters, supplemented by a lexicon of 500 items. In order to determine the relationships and boundaries between the varieties, grammatical phenomena constitute a valuable complement to counting the number of identical words or cognates. We have used automated cognate judgment methods, as well as manual cognate judgments based on older sources, in order to compare lexical data. Finally, we have included speaker attitudes (i.e. self-assessment of linguistic similarity) in an attempt to map whether the languages that are perceived by speakers as being linguistically similar really are closely related. -
Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.
Abstract
Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control. -
Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2014). Distinct patterns of brain activity characterise lexical activation and competition in spoken word production. PLoS One, 9(2): e88674. doi:10.1371/journal.pone.0088674.
Abstract
According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350–650 ms (4–10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
Additional information
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0088674#s5 -
Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.
Abstract
A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400-650 ms in left superior frontal gyrus was larger on semantic than unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition, respectively, and their temporal relation, supporting the theory that words are selected by competition. -
Piai, V. (2014). Choosing our words: Lexical competition and the involvement of attention in spoken word production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Piai, V., Roelofs, A., & Schriefers, H. (2014). Locus of semantic interference in picture naming: Evidence from dual-task performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 147-165. doi:10.1037/a0033745.
Abstract
Disagreement exists regarding the functional locus of semantic interference of distractor words in picture naming. This effect is a cornerstone of modern psycholinguistic models of word production, which assume that it arises in lexical response-selection. However, recent evidence from studies of dual-task performance suggests a locus in perceptual or conceptual processing, prior to lexical response-selection. In these studies, participants manually responded to a tone and named a picture while ignoring a written distractor word. The stimulus onset asynchrony (SOA) between tone and picture–word stimulus was manipulated. Semantic interference in naming latencies was present at long tone pre-exposure SOAs, but reduced or absent at short SOAs. Under the prevailing structural or strategic response-selection bottleneck and central capacity sharing models of dual-task performance, the underadditivity of the effects of SOA and stimulus type suggests that semantic interference emerges before lexical response-selection. However, in more recent studies, additive effects of SOA and stimulus type were obtained. Here, we examined the discrepancy in results between these studies in 6 experiments in which we systematically manipulated various dimensions on which these earlier studies differed, including tasks, materials, stimulus types, and SOAs. In all our experiments, additive effects of SOA and stimulus type on naming latencies were obtained. These results strongly suggest that the semantic interference effect arises after perceptual and conceptual processing, during lexical response-selection or later. We discuss several theoretical alternatives with respect to their potential to account for the discrepancy between the present results and other studies showing underadditivity. -
Piai, V., Roelofs, A., & Maris, E. (2014). Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint. Neuropsychologia, 53, 146-156. doi:10.1016/j.neuropsychologia.2013.11.014.
Abstract
Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production. -
Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.
Abstract
Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
Additional information
http://www.sciencedirect.com/science/article/pii/S0093934X13001727#m0005 -
Piai, V., & Roelofs, A. (2013). Working memory capacity and dual-task interference in picture naming. Acta Psychologica, 142, 332-342. doi:10.1016/j.actpsy.2013.01.006.
-
Pinget, A.-F., Bosker, H. R., Quené, H., & de Jong, N. H. (2014). Native speakers' perceptions of fluency and accent in L2 speech. Language Testing, 31, 349-365. doi:10.1177/0265532214526177.
Abstract
Oral fluency and foreign accent distinguish L2 from L1 speech production. In language testing practices, both fluency and accent are usually assessed by raters. This study investigates what exactly native raters of fluency and accent take into account when judging L2. Our aim is to explore the relationship between objectively measured temporal, segmental and suprasegmental properties of speech on the one hand, and fluency and accent as rated by native raters on the other hand. For 90 speech fragments from Turkish and English L2 learners of Dutch, several acoustic measures of fluency and accent were calculated. In Experiment 1, 20 native speakers of Dutch rated the L2 Dutch samples on fluency. In Experiment 2, 20 different untrained native speakers of Dutch judged the L2 Dutch samples on accentedness. Regression analyses revealed that acoustic measures of fluency were good predictors of fluency ratings. Secondly, segmental and suprasegmental measures of accent could predict some variance of accent ratings. Thirdly, perceived fluency and perceived accent were only weakly related. In conclusion, this study shows that fluency and perceived foreign accent can be judged as separate constructs. -
Pippucci, T., Magi, A., Gialluisi, A., & Romeo, G. (2014). Detection of runs of homozygosity from whole exome sequencing data: State of the art and perspectives for clinical, population and epidemiological studies. Human Heredity, 77, 63-72. doi:10.1159/000362412.
Abstract
Runs of homozygosity (ROH) are sizeable stretches of homozygous genotypes at consecutive polymorphic DNA marker positions, traditionally captured by means of genome-wide single nucleotide polymorphism (SNP) genotyping. With the advent of next-generation sequencing (NGS) technologies, a number of methods initially devised for the analysis of SNP array data (those based on sliding-window algorithms such as PLINK or GERMLINE and graphical tools like HomozygosityMapper) or specifically conceived for NGS data have been adopted for the detection of ROH from whole exome sequencing (WES) data. In the latter group, algorithms for both graphical representation (AgileVariantMapper, HomSI) and computational detection (H3M2) of WES-derived ROH have been proposed. Here we examine these different approaches and discuss available strategies to implement ROH detection in WES analysis. Among sliding-window algorithms, PLINK appears to be well-suited for the detection of ROH, especially of the long ones. As a method specifically tailored for WES data, H3M2 outperforms existing algorithms especially on short and medium ROH. We conclude that, notwithstanding the irregular distribution of exons, WES data can be used with some approximation for unbiased genome-wide analysis of ROH features, with promising applications to homozygosity mapping of disease genes, comparative analysis of populations and epidemiological studies based on consanguinity. -
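The sliding-window logic that this review attributes to tools such as PLINK can be illustrated with a toy scan over one chromosome. This is a simplified sketch, not the PLINK algorithm; the thresholds and the two-letter genotype coding are arbitrary choices for illustration.

# Toy ROH scan: report stretches of consecutive markers that stay homozygous,
# allowing at most `max_het` heterozygous calls before a run is broken.
from typing import List, Tuple

def find_roh(genotypes: List[str], positions: List[int],
             min_snps: int = 50, max_het: int = 1,
             min_length_bp: int = 1_000_000) -> List[Tuple[int, int]]:
    """Return (start_bp, end_bp) pairs for runs of homozygosity on one chromosome.

    genotypes: per-marker calls such as "AA", "AG", "GG", sorted by position.
    """
    runs = []
    start, het_count = 0, 0
    for i, gt in enumerate(genotypes + ["END"]):        # sentinel closes the final run
        is_het = gt != "END" and gt[0] != gt[1]
        if is_het:
            het_count += 1
        if gt == "END" or het_count > max_het:          # the run is broken here
            n_snps = i - start
            if n_snps >= min_snps and positions[i - 1] - positions[start] >= min_length_bp:
                runs.append((positions[start], positions[i - 1]))
            start, het_count = i + 1, 0                 # restart after the breaking marker
    return runs

Real callers additionally handle missing genotypes and score overlapping windows per SNP rather than hard-breaking at the first disallowed heterozygote, and exome-tailored methods such as H3M2 go further to cope with the irregular spacing of exonic markers noted in the abstract.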
Poellmann, K., Bosker, H. R., McQueen, J. M., & Mitterer, H. (2014). Perceptual adaptation to segmental and syllabic reductions in continuous spoken Dutch. Journal of Phonetics, 46, 101-127. doi:10.1016/j.wocn.2014.06.004.
Abstract
This study investigates if and how listeners adapt to reductions in casual continuous speech. In a perceptual-learning variant of the visual-world paradigm, two groups of Dutch participants were exposed to either segmental (/b/ → [ʋ]) or syllabic (ver- → [fː]) reductions in spoken Dutch sentences. In the test phase, both groups heard both kinds of reductions, but now applied to different words. In one of two experiments, the segmental reduction exposure group was better than the syllabic reduction exposure group in recognizing new reduced /b/-words. In both experiments, the syllabic reduction group showed a greater target preference for new reduced ver-words. Learning about reductions was thus applied to previously unheard words. This lexical generalization suggests that mechanisms compensating for segmental and syllabic reductions take place at a prelexical level, and hence that lexical access involves an abstractionist mode of processing. Existing abstractionist models need to be revised, however, as they do not include representations of sequences of segments (corresponding e.g. to ver-) at the prelexical level.
Additional information
http://www.sciencedirect.com/science/article/pii/S0095447014000588#appd005 -
Poellmann, K. (2013). The many ways listeners adapt to reductions in casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Poellmann, K., Mitterer, H., & McQueen, J. M. (2014). Use what you can: Storage, abstraction processes and perceptual adjustments help listeners recognize reduced forms. Frontiers in Psychology, 5: 437. doi:10.3389/fpsyg.2014.00437.
Abstract
Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., "minderij" instead of "binderij", 'book binder') and a syllabic reduction group was exposed to full-vowel deletions (e.g., "p'raat" instead of "paraat", 'ready'), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 & 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations. -
St Pourcain, B., Whitehouse, A. J. O., Ang, W. Q., Warrington, N. M., Glessner, J. T., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., Hakonarson, H., Pennell, C. E., & Smith, G. (2013). Common variation contributes to the genetic architecture of social communication traits. Molecular Autism, 4: 34. doi:10.1186/2040-2392-4-34.
Abstract
Background: Social communication difficulties represent an autistic trait that is highly heritable and persistent during the course of development. However, little is known about the underlying genetic architecture of this phenotype. Methods: We performed a genome-wide association study on parent-reported social communication problems using items of the children’s communication checklist (age 10 to 11 years) studying single and/or joint marker effects. Analyses were conducted in a large UK population-based birth cohort (Avon Longitudinal Study of Parents and their Children, ALSPAC, N = 5,584) and followed up within a sample of children with comparable measures from Western Australia (RAINE, N = 1,364). Results: Two of our seven independent top signals (P-discovery < 1.0E-05) were replicated (0.009 < P-replication ≤ 0.02) within RAINE and suggested evidence for association at 6p22.1 (rs9257616, meta-P = 2.5E-07) and 14q22.1 (rs2352908, meta-P = 1.1E-06). The signal at 6p22.1 was identified within the olfactory receptor gene cluster within the broader major histocompatibility complex (MHC) region. The strongest candidate locus within this genomic area was TRIM27. This gene encodes a ubiquitin E3 ligase, which is an interaction partner of methyl-CpG-binding domain (MBD) proteins, such as MBD3 and MBD4, and rare protein-coding mutations within MBD3 and MBD4 have been linked to autism. The signal at 14q22.1 was found within a gene-poor region. Single-variant findings were complemented by estimations of the narrow-sense heritability in ALSPAC suggesting that approximately a fifth of the phenotypic variance in social communication traits is accounted for by joint additive effects of genotyped single nucleotide polymorphisms throughout the genome (h2(SE) = 0.18(0.066), P = 0.0027). Conclusion: Overall, our study provides both joint and single-SNP-based evidence for the contribution of common polymorphisms to variation in social communication phenotypes.
Additional information
http://static-content.springer.com/esm/art%3A10.1186%2F2040-2392-4-34/MediaObje… -
St Pourcain, B., Cents, R. A., Whitehouse, A. J., Haworth, C. M., Davis, O. S., O’Reilly, P. F., Roulstone, S., Wren, Y., Ang, Q. W., Velders, F. P., Evans, D. M., Kemp, J. P., Warrington, N. M., Miller, L., Timpson, N. J., Ring, S. M., Verhulst, F. C., Hofman, A., Rivadeneira, F., Meaburn, E. L., Price, T. S., Dale, P. S., Pillas, D., Yliherva, A., Rodriguez, A., Golding, J., Jaddoe, V. W., Jarvelin, M.-R., Plomin, R., Pennell, C. E., Tiemeier, H., & Davey Smith, G. (2014). Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 5: 4831. doi:10.1038/ncomms5831.
-
St Pourcain, B., Skuse, D. H., Mandy, W. P., Wang, K., Hakonarson, H., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., & Smith, G. D. (2014). Variability in the common genetic architecture of social-communication spectrum phenotypes during childhood and adolescence. Molecular Autism, 5: 18. doi:10.1186/2040-2392-5-18.
Abstract
Background: Social-communication abilities are heritable traits, and their impairments overlap with the autism continuum. To characterise the genetic architecture of social-communication difficulties developmentally and identify genetic links with the autistic dimension, we conducted a genome-wide screen of social-communication problems at multiple time-points during childhood and adolescence. Methods: Social-communication difficulties were ascertained at ages 8, 11, 14 and 17 years in a UK population-based birth cohort (Avon Longitudinal Study of Parents and Children; N ≤ 5,628) using mother-reported Social Communication Disorder Checklist scores. Genome-wide Complex Trait Analysis (GCTA) was conducted for all phenotypes. The time-points with the highest GCTA heritability were subsequently analysed for single SNP association genome-wide. Type I error in the presence of measurement relatedness and the likelihood of observing SNP signals near known autism susceptibility loci (co-location) were assessed via large-scale, genome-wide permutations. Association signals (P ≤ 10^−5) were also followed up in Autism Genetic Resource Exchange pedigrees (N = 793) and the Autism Case Control cohort (Ncases/Ncontrols = 1,204/6,491). Results: GCTA heritability was strongest in childhood (h2(8 years) = 0.24) and especially in later adolescence (h2(17 years) = 0.45), with a marked drop during early to middle adolescence (h2(11 years) = 0.16 and h2(14 years) = 0.08). Genome-wide screens at ages 8 and 17 years identified for the latter time-point evidence for association at 3p22.2 near SCN11A (rs4453791, P = 9.3 × 10^−9; genome-wide empirical P = 0.011) and suggestive evidence at 20p12.3 at PLCB1 (rs3761168, P = 7.9 × 10^−8; genome-wide empirical P = 0.085). None of these signals contributed to risk for autism. However, the co-location of population-based signals and autism susceptibility loci harbouring rare mutations, such as PLCB1, is unlikely to be due to chance (genome-wide empirical Pco-location = 0.007). Conclusions: Our findings suggest that measurable common genetic effects for social-communication difficulties vary developmentally and that these changes may affect detectable overlaps with the autism spectrum.
Additional information
13229_2013_113_MOESM1_ESM.docx -
Pouw, W., Van Gog, T., & Paas, F. (2014). An embedded and embodied cognition review of instructional manipulatives. Educational Psychology Review, 26, 51-72. doi:10.1007/s10648-014-9255-5.
Abstract
Recent literature on learning with instructional manipulatives seems to call for a moderate view on the effects of perceptual and interactive richness of instructional manipulatives on learning. This “moderate view” holds that manipulatives’ perceptual and interactive richness may compromise learning in two ways: (1) by imposing a very high cognitive load on the learner, and (2) by hindering drawing of symbolic inferences that are supposed to play a key role in transfer (i.e., application of knowledge to new situations in the absence of instructional manipulatives). This paper presents a contrasting view. Drawing on recent insights from Embedded Embodied perspectives on cognition, it is argued that (1) perceptual and interactive richness may provide opportunities for alleviating cognitive load (Embedded Cognition), and (2) transfer of learning is not reliant on decontextualized knowledge but may draw on previous sensorimotor experiences of the kind afforded by perceptual and interactive richness of manipulatives (Embodied Cognition). By negotiating the Embedded Embodied Cognition view with the moderate view, implications for research are derived. -
Pouw, W., De Nooijer, J. A., Van Gog, T., Zwaan, R. A., & Paas, F. (2014). Toward a more embedded/extended perspective on the cognitive function of gestures. Frontiers in Psychology, 5: 359. doi:10.3389/fpsyg.2014.00359.
Abstract
Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask the question: how can gestures support internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures are lacking in explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures provide the cognitive system with a stable external physical and visual presence that can provide means to think with. We show that there is a considerable amount of overlap between the way the human cognitive system has been found to use its environment, and how gestures are used during cognitive processes. Lastly, we provide several suggestions of how to investigate the embedded/extended perspective of the cognitive function of gestures. -
Presciuttini, S., Gialluisi, A., Barbuti, S., Curcio, M., Scatena, F., Carli, G., & Santarcangelo, E. L. (2014). Hypnotizability and Catechol-O-Methyltransferase (COMT) polymorphisms in Italians. Frontiers in Human Neuroscience, 7: 929. doi:10.3389/fnhum.2013.00929.
Abstract
Higher brain dopamine content depending on lower activity of Catechol-O-Methyltransferase (COMT) in subjects with high hypnotizability scores (highs) has been considered responsible for their attentional characteristics. However, the results of the previous genetic studies on association between hypnotizability and the COMT single nucleotide polymorphism (SNP) rs4680 (Val158Met) were inconsistent. Here, we used a selective genotyping approach to re-evaluate the association between hypnotizability and COMT in the context of a two-SNP haplotype analysis, considering not only the Val158Met polymorphism, but also the closely located rs4818 SNP. An Italian sample of 53 highs, 49 low hypnotizable subjects (lows), and 57 controls, were genotyped for a segment of 805 bp of the COMT gene, including Val158Met and the closely located rs4818 SNP. Our selective genotyping approach had 97.1% power to detect the previously reported strongest association at the significance level of 5%. We found no evidence of association at the SNP, haplotype, and diplotype levels. Thus, our results challenge the dopamine-based theory of hypnosis and indirectly support recent neuropsychological and neurophysiological findings reporting the lack of any association between hypnotizability and focused attention abilities. -
Puccini, D. (2013). The use of deictic versus representational gestures in infancy. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Rahmany, R., Marefat, H., & Kidd, E. (2014). Resumptive elements aid comprehension of object relative clauses: Evidence from Persian. Journal of Child Language, 41(4), 937-948. doi:10.1017/S0305000913000147.
-
Ravignani, A., Sonnweber, R.-S., Stobbe, N., & Fitch, W. T. (2013). Action at a distance: Dependency sensitivity in a New World primate. Biology Letters, 9(6): 20130852. doi:10.1098/rsbl.2013.0852.
Abstract
Sensitivity to dependencies (correspondences between distant items) in sensory stimuli plays a crucial role in human music and language. Here, we show that squirrel monkeys (Saimiri sciureus) can detect abstract, non-adjacent dependencies in auditory stimuli. Monkeys discriminated between tone sequences containing a dependency and those lacking it, and generalized to previously unheard pitch classes and novel dependency distances. This constitutes the first pattern learning study where artificial stimuli were designed with the species' communication system in mind. These results suggest that the ability to recognize dependencies represents a capability that had already evolved in humans’ last common ancestor with squirrel monkeys, and perhaps before. -
Ravignani, A., Bowling, D. L., & Fitch, W. T. (2014). Chorusing, synchrony, and the evolutionary functions of rhythm. Frontiers in Psychology, 5: 1118. doi:10.3389/fpsyg.2014.01118.
Abstract
A central goal of biomusicology is to understand the biological basis of human musicality. One approach to this problem has been to compare core components of human musicality (relative pitch perception, entrainment, etc.) with similar capacities in other animal species. Here we extend and clarify this comparative approach with respect to rhythm. First, whereas most comparisons between human music and animal acoustic behavior have focused on spectral properties (melody and harmony), we argue for the central importance of temporal properties, and propose that this domain is ripe for further comparative research. Second, whereas most rhythm research in non-human animals has examined animal timing in isolation, we consider how chorusing dynamics can shape individual timing, as in human music and dance, arguing that group behavior is key to understanding the adaptive functions of rhythm. To illustrate the interdependence between individual and chorusing dynamics, we present a computational model of chorusing agents relating individual call timing with synchronous group behavior. Third, we distinguish and clarify mechanistic and functional explanations of rhythmic phenomena, often conflated in the literature, arguing that this distinction is key for understanding the evolution of musicality. Fourth, we expand biomusicological discussions beyond the species typically considered, providing an overview of chorusing and rhythmic behavior across a broad range of taxa (orthopterans, fireflies, frogs, birds, and primates). Finally, we propose an “Evolving Signal Timing” hypothesis, suggesting that similarities between timing abilities in biological species will be based on comparable chorusing behaviors. We conclude that the comparative study of chorusing species can provide important insights into the adaptive function(s) of rhythmic behavior in our “proto-musical” primate ancestors, and thus inform our understanding of the biology and evolution of rhythm in human music and language. -
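The computational model of chorusing agents mentioned above is not specified in the abstract. A standard way to relate individual call timing to synchronous group behaviour is a phase-coupled oscillator model; the Python sketch below is such a generic model, with all parameter names and values chosen for illustration rather than taken from the paper.
import math
import random

def simulate_chorus(n_agents=10, coupling=0.15, steps=200, freq=1.0, dt=0.05):
    """Toy phase-coupling model: each agent nudges its call phase toward the
    mean phase of the group; returns a synchrony index (0-1) per time step."""
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n_agents)]
    synchrony = []
    for _ in range(steps):
        # Kuramoto-style order parameter: 1 = perfectly synchronous calling
        re = sum(math.cos(p) for p in phases) / n_agents
        im = sum(math.sin(p) for p in phases) / n_agents
        synchrony.append(math.hypot(re, im))
        mean_phase = math.atan2(im, re)
        phases = [
            (p + dt * (2 * math.pi * freq + coupling * math.sin(mean_phase - p))) % (2 * math.pi)
            for p in phases
        ]
    return synchrony

if __name__ == "__main__":
    sync = simulate_chorus()
    print(f"initial synchrony: {sync[0]:.2f}, final synchrony: {sync[-1]:.2f}")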
Ravignani, A. (2014). Chronometry for the chorusing herd: Hamilton's legacy on context-dependent acoustic signalling—a comment on Herbers (2013). Biology Letters, 10(1): 20131018. doi:10.1098/rsbl.2013.1018.
-
Ravignani, A., Olivera, M. V., Gingras, B., Hofer, R., Hernandez, R. C., Sonnweber, R. S., & Fitch, W. T. (2013). Primate drum kit: A system for studying acoustic pattern production by non-human primates using acceleration and strain sensors. Sensors, 13(8), 9790-9820. doi:10.3390/s130809790.
Abstract
The possibility of achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step to enable the testing of a number of hypotheses on primate behavior and cognition. However, no device or solution is currently available, with the use of sensors in non-human animals being almost exclusively devoted to applications in the food industry and animal surveillance. Specifically, no device exists which simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation, (ii) systematic recording of data sensed from these movements, (iii) the possibility to alter the acoustic feedback properties of the object using remote control. We present two prototypes we developed for application with chimpanzees (Pan troglodytes) which, while fulfilling the aforementioned requirements, allow sounds to be arbitrarily associated with physical object movements. The prototypes differ in sensing technology, costs, intended use and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam. Strain data is sent to a computer running Python through an Arduino board. A second prototype consists of a modified Wii Remote contained in a gum toy. Acceleration data is sent via Bluetooth to a computer running Max/MSP. We successfully pilot-tested the first device with a group of chimpanzees. We foresee using these devices for a range of cognitive experiments. -
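The abstract describes strain data travelling from an Arduino board to a computer running Python. The snippet below sketches that data path in a generic way using the pyserial package; the port name, baud rate, message format, and hit threshold are assumptions for illustration, not details of the authors' prototypes.
import serial  # pyserial; assumed to match the Arduino's serial settings

PORT = "/dev/ttyACM0"   # hypothetical serial port
BAUD = 9600             # hypothetical baud rate
THRESHOLD = 200         # hypothetical strain value that counts as a "hit"

def log_hits(n_samples=1000):
    """Read comma-separated strain values from four piezo sensors and
    report which sensor was struck; purely illustrative."""
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        for _ in range(n_samples):
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                values = [int(v) for v in line.split(",")]
            except ValueError:
                continue  # skip malformed lines
            for sensor, value in enumerate(values):
                if value > THRESHOLD:
                    print(f"hit on sensor {sensor}: strain {value}")

if __name__ == "__main__":
    log_hits()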
Ravignani, A., Gingras, B., Asano, R., Sonnweber, R., Matellan, V., & Fitch, W. T. (2013). The evolution of rhythmic cognition: New perspectives and technologies in comparative research. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 1199-1204). Austin, TX: Cognitive Science Society.
Abstract
Music is a pervasive phenomenon in human culture, and musical rhythm is virtually present in all musical traditions. Research on the evolution and cognitive underpinnings of rhythm can benefit from a number of approaches. We outline key concepts and definitions, allowing fine-grained analysis of rhythmic cognition in experimental studies. We advocate comparative animal research as a useful approach to answer questions about human music cognition and review experimental evidence from different species. Finally, we suggest future directions for research on the cognitive basis of rhythm. Apart from research in semi-natural setups, possibly allowed by “drum set for chimpanzees” prototypes presented here for the first time, mathematical modeling and systematic use of circular statistics may allow promising advances. -
Ravignani, A., Bowling, D., & Kirby, S. (2014). The psychology of biological clocks: A new framework for the evolution of rhythm. In E. A. Cartmill, S. G. Roberts, & H. Lyn (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 262-269). Singapore: World Scientific. -
Ravignani, A., Martins, M., & Fitch, W. T. (2014). Vocal learning, prosody, and basal ganglia: Don't underestimate their complexity. Behavioral and Brain Sciences, 37(6), 570-571. doi:10.1017/S0140525X13004184.
Abstract
In response to: Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective
Ackermann et al.'s arguments in the target article need sharpening and rethinking at both mechanistic and evolutionary levels. First, the authors' evolutionary arguments are inconsistent with recent evidence concerning nonhuman animal rhythmic abilities. Second, prosodic intonation conveys much more complex linguistic information than mere emotional expression. Finally, human adults' basal ganglia have a considerably wider role in speech modulation than Ackermann et al. surmise.
-
Redmann, A., FitzPatrick, I., Hellwig, F. M., & Indefrey, P. (2014). The use of conceptual components in language production: an ERP study. Frontiers in Psychology, 5: 363. doi:10.3389/fpsyg.2014.00363.
Abstract
According to frame-theory, concepts can be represented as structured frames that contain conceptual attributes (e.g., "color") and their values (e.g., "red"). A particular color value can be seen as a core conceptual component for (high color-diagnostic; HCD) objects (e.g., bananas) which are strongly associated with a typical color, but less so for (low color-diagnostic; LCD) objects (e.g., bicycles) that exist in many different colors. To investigate whether the availability of a core conceptual component (color) affects lexical access in language production, we conducted two experiments on the naming of visually presented HCD and LCD objects. Experiment 1 showed that, when naming latencies were matched for colored HCD and LCD objects, achromatic HCD objects were named more slowly than achromatic LCD objects. In Experiment 2 we recorded ERPs while participants performed a picture-naming task, in which achromatic target pictures were either preceded by an appropriately colored box (primed condition) or a black and white checkerboard (unprimed condition). We focused on the P2 component, which has been shown to reflect difficulty of lexical access in language production. Results showed that HCD resulted in slower object-naming and a more pronounced P2. Priming also yielded a more positive P2 but did not result in an RT difference. ERP waveforms on the P1, P2 and N300 components showed a priming by color-diagnosticity interaction, the effect of color priming being stronger for HCD objects than for LCD objects. The effect of color-diagnosticity on the P2 component suggests that the slower naming of achromatic HCD objects is (at least in part) due to more difficult lexical retrieval. Hence, the color attribute seems to affect lexical retrieval in HCD words. The interaction between priming and color-diagnosticity indicates that priming with a feature hinders lexical access, especially if the feature is a core feature of the target object. -
Reesink, G. (2013). Expressing the GIVE event in Papuan languages: A preliminary survey. Linguistic Typology, 17(2), 217-266. doi:10.1515/lity-2013-0010.
Abstract
The linguistic expression of the GIVE event is investigated in a sample of 72 Papuan languages, 33 belonging to the Trans New Guinea family, 39 of various non-TNG lineages. Irrespective of the verbal template (prefix, suffix, or no indexation of undergoer), in the majority of languages the recipient is marked as the direct object of a monotransitive verb, which sometimes involves stem suppletion for the recipient. While a few languages allow verbal affixation for all three arguments, a number of languages challenge the universal claim that the 'give' verb always has three arguments. -
Reesink, G. (2014). Topic management and clause combination in the Papuan language Usan. In R. Van Gijn, J. Hammond, D. Matic, S. van Putten, & A.-V. Galucio (Eds.), Information Structure and Reference Tracking in Complex Sentences (pp. 231-262). Amsterdam: John Benjamins.
Abstract
This chapter describes topic management in the Papuan language Usan. The notion of ‘topic’ is defined by its pre-theoretical meaning ‘what someone’s speech is about’. This notion cannot be restricted to simple clausal or sentential constructions, but requires the wider context of long stretches of natural text. The tracking of a topic is examined in its relationship to clause combining mechanisms. Coordinating clause chaining with its switch reference mechanism is contrasted with subordinating strategies called ‘domain-creating’ constructions. These different strategies are identified by language-specific signals, such as intonation and morphosyntactic cues like nominalizations and scope of negation and other modalities. -
Regier, T., Khetarpal, N., & Majid, A. (2013). Inferring semantic maps. Linguistic Typology, 17, 89-105. doi:10.1515/lity-2013-0003.
Abstract
Semantic maps are a means of representing universal structure underlying cross-language semantic variation. However, no algorithm has existed for inferring a graph-based semantic map from data. Here, we note that this open problem is formally identical to the known problem of inferring a social network from disease outbreaks. From this identity it follows that semantic map inference is computationally intractable, but that an efficient approximation algorithm for it exists. We demonstrate that this algorithm produces sensible semantic maps from two existing bodies of data. We conclude that universal semantic graph structure can be automatically approximated from cross-language semantic data. -
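The approximation algorithm itself is not described in the abstract. The Python sketch below illustrates the general flavour of graph-based semantic map inference: edges between meanings are added greedily until every term's set of meanings forms a connected subgraph. The terms, meanings, and the particular greedy rule are invented for illustration and are not a reconstruction of the authors' procedure.
from itertools import combinations

# Hypothetical cross-language data: each term covers a set of meanings.
TERMS = {
    "term_A": {"on", "over"},
    "term_B": {"on", "attached"},
    "term_C": {"over", "above", "on"},
}

def components(nodes, edges):
    """Number of connected components among `nodes`, given undirected `edges`."""
    remaining, count = set(nodes), 0
    while remaining:
        frontier = [remaining.pop()]
        count += 1
        while frontier:
            current = frontier.pop()
            for a, b in edges:
                nxt = b if a == current else a if b == current else None
                if nxt in remaining:
                    remaining.remove(nxt)
                    frontier.append(nxt)
    return count

def infer_map(terms):
    """Greedily add the edge that best reduces fragmentation of the terms'
    meaning sets, until every term induces a connected subgraph."""
    meanings = sorted(set().union(*terms.values()))

    def fragmentation(edges):
        return sum(components(m, edges) for m in terms.values())

    edges = set()
    while fragmentation(edges) > len(terms):
        candidates = [e for e in combinations(meanings, 2) if e not in edges]
        edges.add(min(candidates, key=lambda e: fragmentation(edges | {e})))
    return edges

if __name__ == "__main__":
    print(sorted(infer_map(TERMS)))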
Reifegerste, J. (2014). Morphological processing in younger and older people: Evidence for flexible dual-route access. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Reinisch, E., Weber, A., & Mitterer, H. (2013). Listeners retune phoneme categories across languages. Journal of Experimental Psychology: Human Perception and Performance, 39, 75-86. doi:10.1037/a0027979.
Abstract
Native listeners adapt to noncanonically produced speech by retuning phoneme boundaries by means of lexical knowledge. We asked whether a second language lexicon can also guide category retuning and whether perceptual learning transfers from a second language (L2) to the native language (L1). During a Dutch lexical-decision task, German and Dutch listeners were exposed to unusual pronunciation variants in which word-final /f/ or /s/ was replaced by an ambiguous sound. At test, listeners categorized Dutch minimal word pairs ending in sounds along an /f/–/s/ continuum. Dutch L1 and German L2 listeners showed boundary shifts of a similar magnitude. Moreover, following exposure to Dutch-accented English, Dutch listeners also showed comparable effects of category retuning when they heard the same speaker speak her native language (Dutch) during the test. The former result suggests that lexical representations in a second language are specific enough to support lexically guided retuning, and the latter implies that production patterns in a second language are deemed a stable speaker characteristic likely to transfer to the native language; thus retuning of phoneme categories applies across languages. -
Reinisch, E., & Sjerps, M. J. (2013). The uptake of spectral and temporal cues in vowel perception is rapidly influenced by context. Journal of Phonetics, 41, 101-116. doi:10.1016/j.wocn.2013.01.002.
Abstract
Speech perception is dependent on auditory information within phonemes such as spectral or temporal cues. The perception of those cues, however, is affected by auditory information in surrounding context (e.g., a fast context sentence can make a target vowel sound subjectively longer). In a two-by-two design the current experiments investigated when these different factors influence vowel perception. Dutch listeners categorized minimal word pairs such as /tɑk/–/taːk/ (“branch”–“task”) embedded in a context sentence. Critically, the Dutch /ɑ/–/aː/ contrast is cued by spectral and temporal information. We varied the second formant (F2) frequencies and durations of the target vowels. Independently, we also varied the F2 and duration of all segments in the context sentence. The timecourse of cue uptake on the targets was measured in a printed-word eye-tracking paradigm. Results show that the uptake of spectral cues slightly precedes the uptake of temporal cues. Furthermore, acoustic manipulations of the context sentences influenced the uptake of cues in the target vowel immediately. That is, listeners did not need additional time to integrate spectral or temporal cues of a target sound with auditory information in the context. These findings argue for an early locus of contextual influences in speech perception. -
Reinisch, E., Jesse, A., & Nygaard, L. C. (2013). Tone of voice guides word learning in informative referential contexts. Quarterly Journal of Experimental Psychology, 66, 1227-1240. doi:10.1080/17470218.2012.736525.
Abstract
Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalise them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning. -
Riedel, M., Wittenburg, P., Reetz, J., van de Sanden, M., Rybicki, J., von Vieth, B. S., Fiameni, G., Mariani, G., Michelini, A., Cacciari, C., Elbers, W., Broeder, D., Verkerk, R., Erastova, E., Lautenschlaeger, M., Budich, R. G., Thielmann, H., Coveney, P., Zasada, S., Haidar, A., Buechner, O., Manzano, C., Memon, S., Memon, S., Helin, H., Suhonen, J., Lecarpentier, D., Koski, K., & Lippert, T. (2013). A data infrastructure reference model with applications: Towards realization of a ScienceTube vision with a data replication service. Journal of Internet Services and Applications, 4, 1-17. doi:10.1186/1869-0238-4-1.
Abstract
Scientific user communities have worked with data for many years and thus already have a wide variety of data infrastructures in production today. The aim of this paper is therefore not to create one new general data architecture that would fail to be adopted by each individual user community. Instead, this contribution aims to design a reference model with abstract entities that is able to federate existing concrete infrastructures under one umbrella. A reference model is an abstract framework for understanding significant entities and the relationships between them, and thus helps in understanding existing data infrastructures when comparing them in terms of functionality, services, and boundary conditions. An architecture derived from such a reference model can then be used to create a federated architecture that builds on the existing infrastructures and aligns them to a major common vision. This common vision is named 'ScienceTube' as part of this contribution and determines the high-level goal that the reference model aims to support. This paper describes how a well-focused use case around data replication and its related activities in the EUDAT project provides a first step towards this vision. Concrete stakeholder requirements arising from scientific end users, such as those of the European Strategy Forum on Research Infrastructures (ESFRI) projects, underpin this contribution with clear evidence that the EUDAT activities are bottom-up, thus providing real solutions towards the often only vaguely described 'high-level big data challenges'. The federated approach, which takes advantage of community and data centers (with large computational resources), further describes how data replication services enable data-intensive computing on terabytes or even petabytes of data emerging from ESFRI projects. -
Rietveld, C. A., Medland, S. E., Derringer, J., Yang, J., Esko, T., Martin, N. W., Westra, H.-J., Shakhbazov, K., Abdellaoui, A., Agrawal, A., Albrecht, E., Alizadeh, B. Z., Amin, N., Barnard, J., Baumeister, S. E., Benke, K. S., Bielak, L. F., Boatman, J. A., Boyle, P. A., Davies, G., de Leeuw, C., Eklund, N., Evans, D. S., Ferhmann, R., Fischer, K., Gieger, C., Gjessing, H. K., Hägg, S., Harris, J. R., Hayward, C., Holzapfel, C., Ibrahim-Verbaas, C. A., Ingelsson, E., Jacobsson, B., Joshi, P. K., Jugessur, A., Kaakinen, M., Kanoni, S., Karjalainen, J., Kolcic, I., Kristiansson, K., Kutalik, Z., Lahti, J., Lee, S. H., Lin, P., Lind, P. A., Liu, Y., Lohman, K., Loitfelder, M., McMahon, G., Vidal, P. M., Meirelles, O., Milani, L., Myhre, R., Nuotio, M.-L., Oldmeadow, C. J., Petrovic, K. E., Peyrot, W. J., Polasek, O., Quaye, L., Reinmaa, E., Rice, J. P., Rizzi, T. S., Schmidt, H., Schmidt, R., Smith, A. V., Smith, J. A., Tanaka, T., Terracciano, A., van der Loos, M. J. H. M., Vitart, V., Völzke, H., Wellmann, J., Yu, L., Zhao, W., Allik, J., Attia, J. R., Bandinelli, S., Bastardot, F., Beauchamp, J., Bennett, D. A., Berger, K., Bierut, L. J., Boomsma, D. I., Bültmann, U., Campbell, H., Chabris, C. F., Cherkas, L., Chung, M. K., Cucca, F., de Andrade, M., De Jager, P. L., De Neve, J.-E., Deary, I. J., Dedoussis, G. V., Deloukas, P., Dimitriou, M., Eiríksdóttir, G., Elderson, M. F., Eriksson, J. G., Evans, D. M., Faul, J. D., Ferrucci, L., Garcia, M. E., Grönberg, H., Guðnason, V., Hall, P., Harris, J. M., Harris, T. B., Hastie, N. D., Heath, A. C., Hernandez, D. G., Hoffmann, W., Hofman, A., Holle, R., Holliday, E. G., Hottenga, J.-J., Iacono, W. G., Illig, T., Järvelin, M.-R., Kähönen, M., Kaprio, J., Kirkpatrick, R. M., Kowgier, M., Latvala, A., Launer, L. J., Lawlor, D. A., Lehtimäki, T., Li, J., Lichtenstein, P., Lichtner, P., Liewald, D. C., Madden, P. A., Magnusson, P. K. E., Mäkinen, T. E., Masala, M., McGue, M., Metspalu, A., Mielck, A., Miller, M. B., Montgomery, G. W., Mukherjee, S., Nyholt, D. R., Oostra, B. A., Palmer, L. J., Palotie, A., Penninx, B. W. J. H., Perola, M., Peyser, P. A., Preisig, M., Räikkönen, K., Raitakari, O. T., Realo, A., Ring, S. M., Ripatti, S., Rivadeneira, F., Rudan, I., Rustichini, A., Salomaa, V., Sarin, A.-P., Schlessinger, D., Scott, R. J., Snieder, H., St Pourcain, B., Starr, J. M., Sul, J. H., Surakka, I., Svento, R., Teumer, A., Tiemeier, H., van Rooij, F. J. A., Van Wagoner, D. R., Vartiainen, E., Viikari, J., Vollenweider, P., Vonk, J. M., Waeber, G., Weir, D. R., Wichmann, H.-E., Widen, E., Willemsen, G., Wilson, J. F., Wright, A. F., Conley, D., Davey-Smith, G., Franke, L., Groenen, P. J. F., Hofman, A., Johannesson, M., Kardia, S. L. R., Krueger, R. F., Laibson, D., Martin, N. G., Meyer, M. N., Posthuma, D., Thurik, A. R., Timpson, N. J., Uitterlinden, A. G., van Duijn, C. M., Visscher, P. M., Benjamin, D. J., Cesarini, D., Koellinger, P. D., & Study LifeLines Cohort (2013). GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science, 340(6139), 1467-1471. doi:10.1126/science.1235488.
Abstract
A genome-wide association study (GWAS) of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent single-nucleotide polymorphisms (SNPs) are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (coefficient of determination R(2) ≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics.
Additional information
Rietveld.SM.revision.2.pdf -
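As a small illustration of the linear polygenic score mentioned in the abstract above: such a score is a weighted sum of an individual's effect-allele counts, with GWAS effect-size estimates as weights. The SNP identifiers below are the three reported in the abstract, but the weights (chosen to be roughly one month of schooling per allele) and the example genotype are invented for illustration.
# Hypothetical per-SNP effect sizes (years of schooling per allele) and a
# hypothetical individual's allele counts (0, 1, or 2 effect alleles).
effect_sizes = {"rs9320913": 0.08, "rs11584700": 0.08, "rs4851266": 0.08}
genotype = {"rs9320913": 2, "rs11584700": 0, "rs4851266": 1}

def polygenic_score(effects, alleles):
    """Linear polygenic score: sum over SNPs of allele count x effect size."""
    return sum(effects[snp] * alleles.get(snp, 0) for snp in effects)

print(f"score: {polygenic_score(effect_sizes, genotype):.2f} years of schooling")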
Roberts, S. G., Dediu, D., & Levinson, S. C. (2014). Detecting differences between the languages of Neandertals and modern humans. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 501-502). Singapore: World Scientific.
Abstract
Dediu and Levinson (2013) argue that Neandertals had essentially modern language and speech, and that they were in genetic contact with the ancestors of modern humans during our dispersal out of Africa. This raises the possibility of cultural and linguistic contact between the two human lineages. If such contact did occur, then it might have influenced the cultural evolution of the languages. Since the genetic traces of contact with Neandertals are limited to the populations outside of Africa, Dediu & Levinson predict that there may be structural differences between the present-day languages derived from languages in contact with Neanderthals, and those derived from languages that were not influenced by such contact. Since the signature of such deep contact might reside in patterns of features, they suggested that machine learning methods may be able to detect these differences. This paper attempts to test this hypothesis and to estimate particular linguistic features that are potential candidates for carrying a signature of Neandertal languages. -
Roberts, L. (2013). Discourse processing. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 190-194). New York: Routledge. -
Roberts, S. G. (2013). [Review of the book The Language of Gaming by A. Ensslin]. Discourse & Society, 24(5), 651-653. doi:10.1177/0957926513487819a.
-
Roberts, S. G. (2013). A Bottom-up approach to the cultural evolution of bilingualism. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1229-1234). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0236/index.html.
Abstract
The relationship between individual cognition and cultural phenomena at the society level can be transformed by cultural transmission (Kirby, Dowman, & Griffiths, 2007). Top-down models of this process have typically assumed that individuals only adopt a single linguistic trait. Recent extensions include ‘bilingual’ agents, able to adopt multiple linguistic traits (Burkett & Griffiths, 2010). However, bilingualism is more than variation within an individual: it involves the conditional use of variation with different interlocutors. That is, bilingualism is a property of a population that emerges from use. A bottom-up simulation is presented where learners are sensitive to the identity of other speakers. The simulation reveals that dynamic social structures are a key factor for the evolution of bilingualism in a population, a feature that was abstracted away in the top-down models. Top-down and bottom-up approaches may lead to different answers, but can work together to reveal and explore important features of the cultural transmission process. -
Roberts, S. G., & De Vos, C. (2014). Gene-culture coevolution of a linguistic system in two modalities. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 23-27).
Abstract
Complex communication can take place in a range of modalities such as auditory, visual, and tactile modalities. In a very general way, the modality that individuals use is constrained by their biological biases (humans cannot use magnetic fields directly to communicate to each other). The majority of natural languages have a large audible component. However, since humans can learn sign languages just as easily, it’s not clear to what extent the prevalence of spoken languages is due to biological biases, the social environment or cultural inheritance. This paper suggests that we can explore the relative contribution of these factors by modelling the spontaneous emergence of sign languages that are shared by the deaf and hearing members of relatively isolated communities. Such shared signing communities have arisen in enclaves around the world and may provide useful insights by demonstrating how languages evolve as the deaf proportion of its members has strong biases towards the visual language modality. In this paper we describe a model of cultural evolution in two modalities, combining aspects that are thought to impact the emergence of sign languages in a more general evolutionary framework. The model can be used to explore hypotheses about how sign languages emerge. -
Roberts, S. G., Dediu, D., & Moisik, S. R. (2014). How to speak Neanderthal. New Scientist, 222(2969), 40-41. doi:10.1016/S0262-4079(14)60970-2.
-
Roberts, S. G. (2014). Monolingual Biases in Simulations of Cultural Transmission. In V. Dignum, & F. Dignum (Eds.), Perspectives on Culture and Agent-based Simulations (pp. 111-125). Cham: Springer. doi:10.1007/978-3-319-01952-9_7.
Abstract
Recent research suggests that the evolution of language is affected by the inductive biases of its learners. I suggest that there is an implicit assumption that one of these biases is to expect a single linguistic system in the input. Given the prevalence of bilingual cultures, this may not be a valid abstraction. This is illustrated by demonstrating that the ‘minimal naming game’ model, in which a shared lexicon evolves in a population of agents, includes an implicit mutual exclusivity bias. Since recent research suggests that children raised in bilingual cultures do not exhibit mutual exclusivity, the individual learning algorithm of the agents is not as abstract as it appears to be. A modification of this model demonstrates that communicative success can be achieved without mutual exclusivity. It is concluded that complex cultural phenomena, such as bilingualism, do not necessarily result from complex individual learning mechanisms. Rather, the cultural process itself can bring about this complexity. -
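For readers unfamiliar with the 'minimal naming game' referred to above, the Python sketch below shows its core interaction rule for a single object: on a successful interaction both agents delete all competing words, and it is this wholesale deletion that amounts to an implicit mutual exclusivity bias. This is a generic textbook version of the game, not the paper's model or its proposed modification.
import random

def naming_game(n_agents=20, n_rounds=5000):
    """Minimal naming game for one object: agents converge on a shared word.
    The 'drop all other words on success' step is the implicit mutual
    exclusivity bias discussed in the abstract."""
    inventories = [set() for _ in range(n_agents)]
    for _ in range(n_rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(f"word{random.randrange(10**6)}")  # invent a word
        word = random.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            inventories[speaker] = {word}   # success: both drop competitors
            inventories[hearer] = {word}
        else:
            inventories[hearer].add(word)   # failure: hearer adopts the word
    return inventories

if __name__ == "__main__":
    final = naming_game()
    print(f"distinct words remaining: {len(set().union(*final))}")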
Roberts, S. G., & Winters, J. (2013). Linguistic diversity and traffic accidents: Lessons from statistical studies of cultural traits. PLoS One, 8(8): e70902. doi:10.1371/journal.pone.0070902.
Abstract
The recent proliferation of digital databases of cultural and linguistic data, together with new statistical techniques becoming available, has led to a rise in so-called nomothetic studies [1]–[8]. These seek relationships between demographic variables and cultural traits from large, cross-cultural datasets. The insights from these studies are important for understanding how cultural traits evolve. While these studies are fascinating and are good at generating testable hypotheses, they may underestimate the probability of finding spurious correlations between cultural traits. Here we show that this kind of approach can find links between such unlikely cultural traits as traffic accidents, levels of extra-marital sex, political collectivism and linguistic diversity. This suggests that spurious correlations, due to historical descent, geographic diffusion or increased noise-to-signal ratios in large datasets, are much more likely than some studies admit. We suggest some criteria for the evaluation of nomothetic studies and some practical solutions to the problems. Since some of these studies are receiving media attention without a widespread understanding of the complexities of the issue, there is a risk that poorly controlled studies could affect policy. We hope to contribute towards a general skepticism for correlational studies by demonstrating the ease of finding apparently rigorous correlations between cultural traits. Despite this, we see well-controlled nomothetic studies as useful tools for the development of theories.
Additional information
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0070902#s10 -
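The non-independence problem the abstract describes can be illustrated with a toy simulation: two traits that are each inherited along the same lineages, but never influence one another, can still correlate across societies, because the effective sample size is the number of lineages rather than the number of societies. All numbers in the Python sketch below are invented for illustration.
import random

def evolve_trait(n_lineages=5, societies_per_lineage=20, drift=0.1):
    """Each lineage starts from its own ancestral value; descendant societies
    drift around it, so societies within a lineage resemble each other."""
    values = []
    for _ in range(n_lineages):
        ancestor = random.gauss(0, 1)
        values += [ancestor + random.gauss(0, drift) for _ in range(societies_per_lineage)]
    return values

def correlation(xs, ys):
    """Pearson correlation, computed from scratch to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

if __name__ == "__main__":
    random.seed(1)
    trait_a = evolve_trait()   # e.g. a stand-in for "linguistic diversity"
    trait_b = evolve_trait()   # e.g. a stand-in for "traffic accidents", causally unrelated
    print(f"correlation across societies despite no causal link: {correlation(trait_a, trait_b):.2f}")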
Roberts, L. (2013). Processing of gender and number agreement in late Spanish bilinguals: A commentary on Sagarra and Herschensohn. International Journal of Bilingualism, 17(5), 628-633. doi:10.1177/1367006911435693.
Abstract
Sagarra and Herschensohn’s article examines English L2 learners’ knowledge of Spanish gender and number agreement and their sensitivity to gender and number agreement violations (e.g. *El ingeniero presenta el prototipo *famosa/*famosos en la conferencia) during real-time sentence processing. It raises some interesting questions that are central to both acquisition and processing research. In the following paper, I discuss a selection of these topics, for instance, what types of knowledge may or may not be available/accessible during real-time L2 processing at different proficiency levels, what the differences may be between the processing of number versus gender concord, and perhaps most importantly, the problem of how to characterize the relationship between the grammar and the parser, both in general terms and in the context of language acquisition. -
Roberts, L., Matsuo, A., & Duffield, N. (2013). Processing VP-ellipsis and VP-anaphora with structurally parallel and nonparallel antecedents: An eyetracking study. Language and Cognitive Processes, 28, 29-47. doi:10.1080/01690965.2012.676190.
Abstract
In this paper, we report on an eye-tracking study investigating the processing of English VP-ellipsis (John took the rubbish out. Fred did [] too) (VPE) and VP-anaphora (John took the rubbish out. Fred did it too) (VPA) constructions, with syntactically parallel versus nonparallel antecedent clauses (e.g., The rubbish was taken out by John. Fred did [] too/Fred did it too). The results show first that VPE involves greater processing costs than VPA overall. Second, although the structural nonparallelism of the antecedent clause elicited a processing cost for both anaphor types, there was a difference in the timing and the strength of this parallelism effect: it was earlier and more fleeting for VPA, as evidenced by regression path times, whereas the effect occurred later with VPE completions, showing up in second and total fixation times measures, and continuing on into the reading of the adjacent text. Taking the observed differences between the processing of the two anaphor types together with other research findings in the literature, we argue that our data support the idea that in the case of VPE, the VP from the antecedent clause necessitates more computation at the elision site before it is linked to its antecedent than is the case for VPA. -
Roberts, S. G., & Quillinan, J. (2014). The Chimp Challenge: Working memory in chimps and humans. In L. McCrohon, B. Thompson, T. Verhoef, & H. Yamauchi (
Eds. ), The Past, Present and Future of Language Evolution Research: Student volume of the 9th International Conference on the Evolution of Language (pp. 31-39). Tokyo: EvoLang9 Organising Committee.Abstract
Matsuzawa (2012) presented work at Evolang demonstrating the working memory abilities of chimpanzees. Inoue and Matsuzawa (2007) found that chimpanzees can correctly remember the location of 9 randomly arranged numerals displayed for 210 ms - shorter than an average human eye saccade. Humans, however, perform poorly at this task. Matsuzawa suggests a semantic link hypothesis: while chimps have good visual, eidetic memory, humans are good at symbolic associations. The extra information in the semantic, linguistic links that humans possess increases the load on working memory and makes this task difficult for them. We were interested to see if a wider search could find humans that matched the performance of the chimpanzees. We created an online version of the experiment and challenged people to play. We also attempted to run a non-semantic version of the task to see if this made the task easier. We found that, while humans can perform better than Inoue and Matsuzawa (2007) suggest, chimpanzees can perform better still. We also found no evidence to support the semantic link hypothesis. -
Roberts, L. (2013). Sentence processing in bilinguals. In R. Van Gompel (Ed.), Sentence processing. London: Psychology Press. -
Roberts, S. G., Thompson, B., & Smith, K. (2014). Social interaction influences the evolution of cognitive biases for language. In E. A. Cartmill, S. G. Roberts, & H. Lyn (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 278-285). Singapore: World Scientific. doi:10.1142/9789814603638_0036.
Abstract
Models of cultural evolution demonstrate that the link between individual biases and population-level phenomena can be obscured by the process of cultural transmission (Kirby, Dowman, & Griffiths, 2007). However, recent extensions to these models predict that linguistic diversity will not emerge and that learners should evolve to expect little linguistic variation in their input (Smith & Thompson, 2012). We demonstrate that this result derives from assumptions that privilege certain kinds of social interaction by exploring a range of alternative social models. We find several evolutionary routes to linguistic diversity, and show that social interaction not only influences the kinds of biases which could evolve to support language, but also the effects those biases have on a linguistic system. Given the same starting situation, the evolution of biases for language learning and the distribution of linguistic variation are affected by the kinds of social interaction that a population privileges. -
Rodenas-Cuadrado, P., Ho, J., & Vernes, S. C. (2014). Shining a light on CNTNAP2: Complex functions to complex disorders. European Journal of Human Genetics, 22(2), 171-178. doi:10.1038/ejhg.2013.100.
Abstract
The genetic basis of complex neurological disorders involving language is poorly understood, partly due to the multiple additive genetic risk factors that are thought to be responsible. Furthermore, these conditions are often syndromic in that they have a range of endophenotypes that may be associated with the disorder and that may be present in different combinations in patients. However, the emergence of individual genes implicated across multiple disorders has suggested that they might share similar underlying genetic mechanisms. The CNTNAP2 gene is an excellent example of this, as it has recently been implicated in a broad range of phenotypes including autism spectrum disorder (ASD), schizophrenia, intellectual disability, dyslexia and language impairment. This review considers the evidence implicating CNTNAP2 in these conditions, the genetic risk factors and mutations that have been identified in patient and population studies and how these relate to patient phenotypes. The role of CNTNAP2 is examined in the context of larger neurogenetic networks during development and disorder, given what is known regarding the regulation and function of this gene. Understanding the role of CNTNAP2 in diverse neurological disorders will further our understanding of how combinations of individual genetic risk factors can contribute to complex conditions. -
Roelofs, A., & Piai, V. (2013). Associative facilitation in the Stroop task: Comment on Mahon et al. Cortex, 49, 1767-1769. doi:10.1016/j.cortex.2013.03.001.
Abstract
First paragraph: A fundamental issue in psycholinguistics concerns how speakers retrieve intended words from long-term memory. According to a selection by competition account (e.g., Levelt et al., 1999), conceptually driven word retrieval involves the activation of a set of candidate words and a competitive selection of the intended word from this set. -
Roelofs, A., Piai, V., & Schriefers, H. (2013). Context effects and selective attention in picture naming and word reading: Competition versus response exclusion. Language and Cognitive Processes, 28, 655-671. doi:10.1080/01690965.2011.615663.
Abstract
For several decades, context effects in picture naming and word reading have been extensively investigated. However, researchers have found no agreement on the explanation of the effects. Whereas it has long been assumed that several types of effect reflect competition in word selection, recently it has been argued that these effects reflect the exclusion of articulatory responses from an output buffer. Here, we first critically evaluate the findings on context effects in picture naming that have been taken as evidence against the competition account, and we argue that the findings are, in fact, compatible with the competition account. Moreover, some of the findings appear to challenge rather than support the response exclusion account. Next, we compare the response exclusion and competition accounts with respect to their ability to explain data on word reading. It appears that response exclusion does not account well for context effects on word reading times, whereas computer simulations reveal that a competition model like WEAVER++ accounts for the findings. -
Roelofs, A., Dijkstra, T., & Gerakaki, S. (2013). Modeling of word translation: Activation flow from concepts to lexical items. Bilingualism: Language and Cognition, 16, 343-353. doi:10.1017/S1366728912000612.
Abstract
Whereas most theoretical and computational models assume a continuous flow of activation from concepts to lexical items in spoken word production, one prominent model assumes that the mapping of concepts onto words happens in a discrete fashion (Bloem & La Heij, 2003). Semantic facilitation of context pictures on word translation has been taken to support the discrete-flow model. Here, we report results of computer simulations with the continuous-flow WEAVER++ model (Roelofs, 1992, 2006) demonstrating that the empirical observation taken to be in favor of discrete models is, in fact, only consistent with those models and equally compatible with more continuous models of word production by monolingual and bilingual speakers. Continuous models are specifically and independently supported by other empirical evidence on the effect of context pictures on native word production. -
Roelofs, A., Piai, V., & Schriefers, H. (2013). Selection by competition in word production: Rejoinder to Janssen (2012). Language and Cognitive Processes, 28, 679-683. doi:10.1080/01690965.2013.770890.
Abstract
Roelofs, Piai, and Schriefers argue that several findings on the effect of distractor words and pictures in producing words support a selection-by-competition account and challenge a non-competitive response-exclusion account. Janssen argues that the findings do not challenge response exclusion, and he conjectures that both competitive and non-competitive mechanisms underlie word selection. Here, we maintain that the findings do challenge the response-exclusion account and support the assumption of a single competitive mechanism underlying word selection. -
Rojas-Berscia, L. M. (2014). A Heritage Reference Grammar of Selk’nam. Master Thesis, Radboud University, Nijmegen.
-
Rojas-Berscia, L. M. (2014). Towards an ontological theory of language: Radical minimalism, memetic linguistics and linguistic engineering, prolegomena. Ianua: Revista Philologica Romanica, 14(2), 69-81.
Abstract
In contrast to what has happened in other sciences, the question of what constitutes the object of study of linguistics as an autonomous discipline has not yet been resolved. Ranging from external explanations of language as a system (Saussure 1916), to the existence of an innate mental language capacity or UG (Chomsky 1965, 1981, 1995), to the cognitive complexity of the mental language capacity and the acquisition of languages in use (Langacker 1987, 1991, 2008; Croft & Cruse 2004; Evans & Levinson 2009), most, if not all, theoretical approaches have provided explanations that have somehow isolated our discipline from developments in other major sciences, such as physics and evolutionary biology. In the present article I present some of the basic issues in the current debate in the discipline, in order to identify some problems regarding modern assumptions about language. Furthermore, a new proposal on how to approach linguistic phenomena is given, regarding what I call «the main three» basic problems our discipline will have to face. Finally, some preliminary ideas on a new paradigm of linguistics which tries to answer these three basic problems are presented, mainly based on the recently developed formal theory called Radical Minimalism (Krivochen 2011a, 2011b) and what I dub Memetic Linguistics and Linguistic Engineering. -
Rommers, J., Meyer, A. S., & Huettig, F. (2013). Object shape and orientation do not routinely influence performance during language processing. Psychological Science, 24, 2218-2225. doi:10.1177/0956797613490746.
Abstract
The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.
Additional information
DS_10.1177_0956797613490746.pdf -
Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). The contents of predictions in sentence comprehension: Activation of the shape of objects before they are referred to. Neuropsychologia, 51(3), 437-447. doi:10.1016/j.neuropsychologia.2012.12.002.
Abstract
When comprehending concrete words, listeners and readers can activate specific visual information such as the shape of the words’ referents. In two experiments we examined whether such information can be activated in an anticipatory fashion. In Experiment 1, listeners’ eye movements were tracked while they were listening to sentences that were predictive of a specific critical word (e.g., “moon” in “In 1969 Neil Armstrong was the first man to set foot on the moon”). 500 ms before the acoustic onset of the critical word, participants were shown four-object displays featuring three unrelated distractor objects and a critical object, which was either the target object (e.g., moon), an object with a similar shape (e.g., tomato), or an unrelated control object (e.g., rice). In a time window before shape information from the spoken target word could be retrieved, participants already tended to fixate both the target and the shape competitors more often than they fixated the control objects, indicating that they had anticipatorily activated the shape of the upcoming word's referent. This was confirmed in Experiment 2, which was an ERP experiment without picture displays. Participants listened to the same lead-in sentences as in Experiment 1. The sentence-final words corresponded to the predictable target, the shape competitor, or the unrelated control object (yielding, for instance, “In 1969 Neil Armstrong was the first man to set foot on the moon/tomato/rice”). N400 amplitude in response to the final words was significantly attenuated in the shape-related compared to the unrelated condition. Taken together, these results suggest that listeners can activate perceptual attributes of objects before they are referred to in an utterance. -
Rommers, J., Dijkstra, T., & Bastiaansen, M. C. M. (2013). Context-dependent semantic processing in the human brain: Evidence from idiom comprehension. Journal of Cognitive Neuroscience, 25(5), 762-776. doi:10.1162/jocn_a_00337.
Abstract
Language comprehension involves activating word meanings and integrating them with the sentence context. This study examined whether these routines are carried out even when they are theoretically unnecessary, namely in the case of opaque idiomatic expressions, for which the literal word meanings are unrelated to the overall meaning of the expression. Predictable words in sentences were replaced by a semantically related or unrelated word. In literal sentences, this yielded previously established behavioral and electrophysiological signatures of semantic processing: semantic facilitation in lexical decision, a reduced N400 for semantically related relative to unrelated words, and a power increase in the gamma frequency band that was disrupted by semantic violations. However, the same manipulations in idioms yielded none of these effects. Instead, semantic violations elicited a late positivity in idioms. Moreover, gamma band power was lower in correct idioms than in correct literal sentences. It is argued that the brain's semantic expectancy and literal word meaning integration operations can, to some extent, be “switched off” when the context renders them unnecessary. Furthermore, the results lend support to models of idiom comprehension that involve unitary idiom representations. -
Rommers, J. (2013). Seeing what's next: Processing and anticipating language referring to objects. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Roorda, D., Kalkman, G., Naaijer, M., & Van Cranenburgh, A. (2014). LAF-Fabric: A data analysis tool for linguistic annotation framework with an application to the Hebrew Bible. Computational Linguistics in the Netherlands, 4, 105-120.
Abstract
The Linguistic Annotation Framework (LAF) provides a general, extensible stand-off markup system for corpora. This paper discusses LAF-Fabric, a new tool to analyse LAF resources in general with an extension to process the Hebrew Bible in particular. We first walk through the history of the Hebrew Bible as text database in decennium-wide steps. Then we describe how LAF-Fabric may serve as an analysis tool for this corpus. Finally, we describe three analytic projects/workflows that benefit from the new LAF representation: 1) the study of linguistic variation: extract cooccurrence data of common nouns between the books of the Bible (Martijn Naaijer); 2) the study of the grammar of Hebrew poetry in the Psalms: extract clause typology (Gino Kalkman); 3) construction of a parser of classical Hebrew by Data Oriented Parsing: generate tree structures from the database (Andreas van Cranenburgh). -
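The first workflow listed above (co-occurrence of common nouns across books) can be illustrated with plain Python over a hypothetical stream of (book, noun) annotations; the real tool operates on LAF stand-off annotations, and none of the names below are LAF-Fabric API calls.
from collections import Counter
from itertools import combinations

# Hypothetical annotation stream: (book, common_noun) pairs in text order.
annotations = [
    ("Genesis", "earth"), ("Genesis", "heaven"), ("Genesis", "earth"),
    ("Psalms", "heart"), ("Psalms", "earth"), ("Exodus", "heaven"),
]

def noun_counts_per_book(pairs):
    """Count how often each common noun occurs in each book."""
    counts = {}
    for book, noun in pairs:
        counts.setdefault(book, Counter())[noun] += 1
    return counts

def shared_nouns(counts):
    """For every pair of books, list the nouns they have in common."""
    return {
        (a, b): sorted(set(counts[a]) & set(counts[b]))
        for a, b in combinations(sorted(counts), 2)
    }

if __name__ == "__main__":
    counts = noun_counts_per_book(annotations)
    for pair, nouns in shared_nouns(counts).items():
        print(pair, nouns)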
Rossano, F. (2013). Gaze in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 308-329). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch15.
Abstract
This chapter contains sections titled: Introduction; Background: The Gaze “Machinery”; Gaze “Machinery” in Social Interaction; Future Directions. -
Rossano, F., Carpenter, M., & Tomasello, M. (2013). One-year-old infants follow others’ voice direction. Psychological Science, 23, 1298-1302. doi:10.1177/0956797612450032.
Abstract
We investigated 1-year-old infants’ ability to infer an adult’s focus of attention solely on the basis of her voice direction. In Studies 1 and 2, 12- and 16-month-olds watched an adult go behind a barrier and then heard her verbally express excitement about a toy hidden in one of two boxes at either end of the barrier. Even though they could not see the adult, infants of both ages followed her voice direction to the box containing the toy. Study 2 showed that infants could do this even when the adult was positioned closer to the incorrect box while she vocalized toward the correct one (and thus ruled out the possibility that infants were merely approaching the source of the sound). In Study 3, using the same methods as in Study 2, we found that chimpanzees performed the task at chance level. Our results show that infants can determine the focus of another person’s attention through auditory information alone—a useful skill for establishing joint attention.
Additional information
Rossano_Suppl_Mat.pdf -
Rossi, G. (2014). When do people not use language to make requests? In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 301-332). Amsterdam: John Benjamins.
Abstract
In everyday joint activities (e.g. playing cards, preparing potatoes, collecting empty plates), participants often request others to pass, move or otherwise deploy objects. In order to get these objects to or from the requestee, requesters need to manipulate them, for example by holding them out, reaching for them, or placing them somewhere. As they perform these manual actions, requesters may or may not accompany them with language (e.g. Take this potato and cut it or Pass me your plate). This study shows that adding or omitting language in the design of a request is influenced in the first place by a criterion of recognition. When the requested action is projectable from the advancement of an activity, presenting a relevant object to the requestee is enough for them to understand what to do; when, on the other hand, the requested action is occasioned by a contingent development of the activity, requesters use language to specify what the requestee should do. This criterion operates alongside a perceptual criterion, to do with the affordances of the visual and auditory modality. When the requested action is projectable but the requestee is not visually attending to the requester’s manual behaviour, the requester can use just enough language to attract the requestee’s attention and secure immediate recipiency. This study contributes to a line of research concerned with the organisation of verbal and nonverbal resources for requesting. Focussing on situations in which language is not – or only minimally – used, it demonstrates the role played by visible bodily behaviour and by the structure of everyday activities in the formation and understanding of requests. -
Roswandowitz, C., Mathias, S. R., Hintz, F., Kreitewolf, J., Schelinski, S., & von Kriegstein, K. (2014). Two cases of selective developmental voice-recognition impairments. Current Biology, 24(19), 2348-2353. doi:10.1016/j.cub.2014.08.048.
Abstract
Recognizing other individuals is an essential skill in humans and in other species [1, 2 and 3]. Over the last decade, it has become increasingly clear that person-identity recognition abilities are highly variable. Roughly 2% of the population has developmental prosopagnosia, a congenital deficit in recognizing others by their faces [4]. It is currently unclear whether developmental phonagnosia, a deficit in recognizing others by their voices [5], is equally prevalent, or even whether it actually exists. Here, we aimed to identify cases of developmental phonagnosia. We collected more than 1,000 data sets from self-selected German individuals by using a web-based screening test that was designed to assess their voice-recognition abilities. We then examined potentially phonagnosic individuals by using a comprehensive laboratory test battery. We found two novel cases of phonagnosia: AS, a 32-year-old female, and SP, a 32-year-old male; both are otherwise healthy academics, have normal hearing, and show no pathological abnormalities in brain structure. The two cases have comparable patterns of impairments: both performed at least 2 SDs below the level of matched controls on tests that required learning new voices, judging the familiarity of famous voices, and discriminating pitch differences between voices. In both cases, only voice-identity processing per se was affected: face recognition, speech intelligibility, emotion recognition, and musical ability were all comparable to controls. The findings confirm the existence of developmental phonagnosia as a modality-specific impairment and allow a first rough prevalence estimate.
Additional information
http://www.sciencedirect.com/science/article/pii/S0960982214010616#appd002 -
Rowbotham, S., Wardy, A. J., Lloyd, D. M., Wearden, A., & Holler, J. (2014). Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures. PLoS One, 9(10): e110779. doi:10.1371/journal.pone.0110779.
Abstract
Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain. -
Rowbotham, S., Holler, J., Lloyd, D., & Wearden, A. (2014). Handling pain: The semantic interplay of speech and co-speech hand gestures in the description of pain sensations. Speech Communication, 57, 244-256. doi:10.1016/j.specom.2013.04.002.
Abstract
Pain is a private and subjective experience about which effective communication is vital, particularly in medical settings. Speakers often represent information about pain sensation in both speech and co-speech hand gestures simultaneously, but it is not known whether gestures merely replicate spoken information or complement it in some way. We examined the representational contribution of gestures in a range of consecutive analyses. Firstly, we found that 78% of speech units containing pain sensation were accompanied by gestures, with 53% of these gestures representing pain sensation. Secondly, in 43% of these instances, gestures represented pain sensation information that was not contained in speech, contributing additional, complementary information to the pain sensation message. Finally, when applying a specificity analysis, we found that in contrast with research in different domains of talk, gestures did not make the pain sensation information in speech more specific. Rather, they complemented the verbal pain message by representing different aspects of pain sensation, contributing to a fuller representation of pain sensation than speech alone. These findings highlight the importance of gestures in communicating about pain sensation and suggest that this modality provides additional information to supplement and clarify the often ambiguous verbal pain message.
-
Rowland, C. F., Noble, C. H., & Chan, A. (2014). Competition all the way down: How children learn word order cues to sentence meaning. In B. MacWhinney, A. Malchukov, & E. Moravcsik (Eds.), Competing motivations in grammar and usage (pp. 125-143). Oxford: Oxford University Press.
Abstract
Most work on competing cues in language acquisition has focussed on what happens when cues compete within a certain construction. There has been far less work on what happens when constructions themselves compete. The aim of the present chapter was to explore how the acquisition mechanism copes when constructions compete in a language. We present three experimental studies, all of which focus on the acquisition of the syntactic function of word order as a marker of the Theme-Recipient relation in ditransitives (form-meaning mapping). In Study 1 we investigated how quickly English children acquire form-meaning mappings when there are two competing structures in the language. We demonstrated that English-speaking 4-year-olds, but not 3-year-olds, correctly interpreted both prepositional and double object datives, assigning Theme and Recipient participant roles on the basis of word order cues. There was no advantage for the double object dative despite its greater frequency in child-directed speech. In Study 2 we looked at acquisition in a language which has no dative alternation, Welsh, to investigate how quickly children acquire form-meaning mapping when there is no competing structure. We demonstrated that Welsh children acquired the prepositional dative at age 3 years, much earlier than the English children did. Finally, in Study 3 we examined bei2 (give) ditransitives in Cantonese, to investigate what happens when there is no dative alternation (as in Welsh), but when the child hears alternative, and possibly competing, word orders in the input. Like the English 3-year-olds, the Cantonese 3-year-olds had not yet acquired the word order marking constraints of bei2 ditransitives. We conclude that there is not only competition between cues but competition between constructions in language acquisition. We suggest an extension to the competition model (Bates & MacWhinney, 1982) whereby generalisations take place across constructions as easily as they take place within constructions, whenever there are salient similarities to form the basis of the generalisation. -
Rowland, C. F. (2014). Understanding Child Language Acquisition. Abingdon: Routledge.
Abstract
Taking an accessible and cross-linguistic approach, Understanding Child Language Acquisition introduces readers to the most important research on child language acquisition over the last fifty years, as well as to some of the most influential theories in the field. Rather than just describing what children can do at different ages, Rowland explains why these research findings are important and what they tell us about how children acquire language. Key features include: cross-linguistic analysis of how language acquisition differs between languages; a chapter on how multilingual children acquire several languages at once; exercises to test comprehension; chapters organised around key questions that discuss the critical issues posed by researchers in the field, with summaries at the end; and further reading suggestions to broaden understanding of the subject. With its particular focus on outlining key similarities and differences across languages and what this cross-linguistic variation means for our ideas about language acquisition, Understanding Child Language Acquisition forms a comprehensive introduction to the subject for students of linguistics, psychology, and speech and language pathology. Students and instructors will benefit from the comprehensive companion website (www.routledge.com/cw/rowland), which includes a students’ section featuring interactive comprehension exercises, extension activities, chapter recaps, and answers to the exercises within the book. Material for instructors includes sample essay questions, answers to the extension activities for students, and PowerPoint slides of all the figures from the book. -
Rubio-Fernández, P. (2013). Associative and inferential processes in pragmatic enrichment: The case of emergent properties. Language and Cognitive Processes, 28(6), 723-745. doi:10.1080/01690965.2012.659264.
Abstract
Experimental research on word processing has generally focused on properties that are associated with a concept in long-term memory (e.g., basketball—round). The present study addresses a related issue: the accessibility of “emergent properties”, or conceptual properties that have to be inferred in a given context (e.g., basketball—floats). This investigation sheds light on a current debate in cognitive pragmatics about how many pragmatic systems there are (Carston, 2002a, 2007; Recanati, 2004, 2007). Two experiments using a self-paced reading task suggest that inferential processes are fully integrated into the processing system. Emergent properties are accessed early on in processing, without delaying later discourse integration processes. I conclude that the theoretical distinction between explicit and implicit meaning is not paralleled by that between associative and inferential processes. -
Rubio-Fernández, P. (2013). Perspective tracking in progress: Do not disturb. Cognition, 129(2), 264-272. doi:10.1016/j.cognition.2013.07.005.
Abstract
Two experiments tested the hypothesis that indirect false-belief tests allow participants to track a protagonist’s perspective uninterruptedly, whereas direct false-belief tests disrupt the process of perspective tracking in various ways. For this purpose, adults’ performance was compared on indirect and direct false-belief tests by means of continuous eye-tracking. Experiment 1 confirmed that the false-belief question used in direct tests disrupts perspective tracking relative to what is observed in an indirect test. Experiment 2 confirmed that perspective tracking is a continuous process that can be easily disrupted in adults by a subtle visual manipulation in both indirect and direct tests. These results call for a closer analysis of the demands of the false-belief tasks that have been used in developmental research. -
Rubio-Fernández, P., & Geurts, B. (2013). How to pass the false-belief task before your fourth birthday. Psychological Science, 24(1), 27-33. doi:10.1177/0956797612447819.
Abstract
The experimental record of the last three decades shows that children under 4 years old fail all sorts of variations on the standard false-belief task, whereas more recent studies have revealed that infants are able to pass nonverbal versions of the task. We argue that these paradoxical results are an artifact of the type of false-belief tasks that have been used to test infants and children: Nonverbal designs allow infants to keep track of a protagonist’s perspective over a course of events, whereas verbal designs tend to disrupt the perspective-tracking process in various ways, which makes it too hard for younger children to demonstrate their capacity for perspective tracking. We report three experiments that confirm this hypothesis by showing that 3-year-olds can pass a suitably streamlined version of the verbal false-belief task. We conclude that young children can pass the verbal false-belief task provided that they are allowed to keep track of the protagonist’s perspective without too much disruption. -
Rumsey, A., San Roque, L., & Schieffelin, B. (2013). The acquisition of ergative marking in Kaluli, Ku Waru and Duna (Trans New Guinea). In E. L. Bavin & S. Stoll (Eds.), The acquisition of ergativity (pp. 133-182). Amsterdam: Benjamins.
Abstract
In this chapter we present material on the acquisition of ergative marking on noun phrases in three languages of Papua New Guinea: Kaluli, Ku Waru, and Duna. The expression of ergativity in all the languages is broadly similar, but sensitive to language-specific features, and this pattern of similarity and difference is reflected in the available acquisition data. Children acquire adult-like ergative marking at about the same pace, reaching similar levels of mastery by 3;00 despite considerable differences in morphological complexity of ergative marking among the languages. What may be more important – as a factor in accounting for the relative uniformity of acquisition in this respect – are the similarities in patterns of interactional scaffolding that emerge from a comparison of the three cases. -
Sadakata, M., & McQueen, J. M. (2014). Individual aptitude in Mandarin lexical tone perception predicts effectiveness of high-variability training. Frontiers in Psychology, 5: 1318. doi:10.3389/fpsyg.2014.01318.
Abstract
Although the high-variability training method can enhance learning of non-native speech categories, this can depend on individuals’ aptitude. The current study asked how general the effects of perceptual aptitude are by testing whether they occur with training materials spoken by native speakers and whether they depend on the nature of the to-be-learned material. Forty-five native Dutch listeners took part in a five-day training procedure in which they identified bisyllabic Mandarin pseudowords (e.g., asa) pronounced with different lexical tone combinations. The training materials were presented to different groups of listeners at three levels of variability: low (many repetitions of a limited set of words recorded by a single speaker), medium (fewer repetitions of a more variable set of words recorded by 3 speakers) and high (similar to medium but with 5 speakers). Overall, variability did not influence learning performance, but this was due to an interaction with individuals’ perceptual aptitude: increasing variability hindered improvements in performance for low-aptitude perceivers while it helped improvements in performance for high-aptitude perceivers. These results show that the previously observed interaction between individuals’ aptitude and effects of degree of variability extends to natural tokens of Mandarin speech. This interaction was not found, however, in a closely-matched study in which native Dutch listeners were trained on the Japanese geminate/singleton consonant contrast. This may indicate that the effectiveness of high-variability training depends not only on individuals’ aptitude in speech perception but also on the nature of the categories being acquired. -
Sadakata, M., & McQueen, J. M. (2013). High stimulus variability in nonnative speech learning supports formation of abstract categories: Evidence from Japanese geminates. Journal of the Acoustical Society of America, 134(2), 1324-1335. doi:10.1121/1.4812767.
Abstract
This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis. -
Sakkalou, E., Ellis-Davies, K., Fowler, N., Hilbrink, E., & Gattis, M. (2013). Infants show stability of goal-directed imitation. Journal of Experimental Child Psychology, 114, 1-9. doi:10.1016/j.jecp.2012.09.005.
Abstract
Previous studies have reported that infants selectively reproduce observed actions and have argued that this selectivity reflects understanding of intentions and goals, or goal-directed imitation. We reasoned that if selective imitation of goal-directed actions reflects understanding of intentions, infants should demonstrate stability across perceptually and causally dissimilar imitation tasks. To this end, we employed a longitudinal within-participants design to compare the performance of 37 infants on two imitation tasks, with one administered at 13 months and one administered at 14 months. Infants who selectively imitated goal-directed actions in an object-cued task at 13 months also selectively imitated goal-directed actions in a vocal-cued task at 14 months. We conclude that goal-directed imitation reflects a general ability to interpret behavior in terms of mental states. -
Salomo, D., & Liszkowski, U. (2013). Sociocultural settings influence the emergence of prelinguistic deictic gestures. Child Development, 84(4), 1296-1307. doi:10.1111/cdev.12026.
Abstract
Daily activities of forty-eight 8- to 15-month-olds and their interlocutors were observed to test for the presence and frequency of triadic joint actions and deictic gestures across three different cultures: Yucatec-Mayans (Mexico), Dutch (Netherlands), and Shanghai-Chinese (China). The amount of joint action and deictic gestures to which infants were exposed differed systematically across settings, allowing testing for the role of social–interactional input in the ontogeny of prelinguistic gestures. Infants gestured more and at an earlier age depending on the amount of joint action and gestures infants were exposed to, revealing early prelinguistic sociocultural differences. The study shows that the emergence of basic prelinguistic gestures is socially mediated, suggesting that others' actions structure the ontogeny of human communication from early on. -
Sampaio, C., & Konopka, A. E. (2013). Memory for non-native language: The role of lexical processing in the retention of surface form. Memory, 21, 537-544. doi:10.1080/09658211.2012.746371.
Abstract
Research on memory for native language (L1) has consistently shown that retention of surface form is inferior to that of gist (e.g., Sachs, 1967). This paper investigates whether the same pattern is found in memory for non-native language (L2). We apply a model of bilingual word processing to more complex linguistic structures and predict that memory for L2 sentences ought to contain more surface information than L1 sentences. Native and non-native speakers of English were tested on a set of sentence pairs with different surface forms but the same meaning (e.g., “The bullet hit/struck the bull's eye”). Memory for these sentences was assessed with a cued recall procedure. Responses showed that native and non-native speakers did not differ in the accuracy of gist-based recall but that non-native speakers outperformed native speakers in the retention of surface form. The results suggest that L2 processing involves more intensive encoding of lexical level information than L1 processing.