Medland, S. E., Zayats, T., Glaser, B., Nyholt, D. R., Gordon, S. D., Wright, M. J., Montgomery, G. W., Campbell, M. J., Henders, A. K., Timpson, N. J., Peltonen, L., Wolke, D., Ring, S. M., Deloukas, P., Martin, N. G., Smith, G. D., & Evans, D. M. (2010). A variant in LIN28B is associated with 2D:4D finger-length ratio, a putative retrospective biomarker of prenatal testosterone exposure. American Journal of Human Genetics, 86(4), 519-525. doi:10.1016/j.ajhg.2010.02.017.
Abstract
The ratio of the lengths of an individual's second to fourth digit (2D:4D) is commonly used as a noninvasive retrospective biomarker for prenatal androgen exposure. In order to identify the genetic determinants of 2D:4D, we applied a genome-wide association approach to 1507 11-year-old children from the Avon Longitudinal Study of Parents and Children (ALSPAC) in whom 2D:4D ratio had been measured, as well as a sample of 1382 12- to 16-year-olds from the Brisbane Adolescent Twin Study. A meta-analysis of the two scans identified a single variant in the LIN28B gene that was strongly associated with 2D:4D (rs314277: p = 4.1 × 10⁻⁸) and was subsequently independently replicated in an additional 3659 children from the ALSPAC cohort (p = 1.53 × 10⁻⁶). The minor allele of the rs314277 variant has previously been linked to increased height and delayed age at menarche, but in our study it was associated with increased 2D:4D, in the direction opposite to that of previous reports on the correlation between 2D:4D and age at menarche. Our findings call into question the validity of 2D:4D as a simplistic retrospective biomarker for prenatal testosterone exposure.
Additional information
http://www.sciencedirect.com/science/article/pii/S0002929710000996#appd002 -
Mellem, M. S., Bastiaansen, M. C. M., Pilgrim, L. K., Medvedev, A. V., & Friedman, R. B. (2012). Word class and context affect alpha-band oscillatory dynamics in an older population. Frontiers in Psychology, 3, 97. doi:10.3389/fpsyg.2012.00097.
Abstract
Differences in the oscillatory EEG dynamics of reading open class (OC) and closed class (CC) words have previously been found (Bastiaansen et al., 2005) and are thought to reflect differences in lexical-semantic content between these word classes. In particular, the theta-band (4–7 Hz) seems to play a prominent role in lexical-semantic retrieval. We tested whether this theta effect is robust in an older population of subjects. Additionally, we examined how the context of a word can modulate the oscillatory dynamics underlying retrieval for the two different classes of words. Older participants (mean age 55) read words presented in either syntactically correct sentences or in a scrambled order (“scrambled sentence”) while their EEG was recorded. We performed time–frequency analysis to examine how power varied based on the context or class of the word. We observed larger power decreases in the alpha (8–12 Hz) band between 200–700 ms for the OC compared to CC words, but this was true only for the scrambled sentence context. We did not observe differences in theta power between these conditions. Context exerted an effect on the alpha and low beta (13–18 Hz) bands between 0 and 700 ms. These results suggest that the previously observed word class effects on theta power changes in a younger participant sample do not seem to be a robust effect in this older population. Though this is an indirect comparison between studies, it may suggest the existence of aging effects on word retrieval dynamics for different populations. Additionally, the interaction between word class and context suggests that word retrieval mechanisms interact with sentence-level comprehension mechanisms in the alpha-band. -
Menenti, L., Petersson, K. M., & Hagoort, P. (2012). From reference to sense: How the brain encodes meaning for speaking. Frontiers in Psychology, 2, 384. doi:10.3389/fpsyg.2011.00384.
Abstract
In speaking, semantic encoding is the conversion of a non-verbal mental representation (the reference) into a semantic structure suitable for expression (the sense). In this fMRI study on sentence production we investigate how the speaking brain accomplishes this transition from non-verbal to verbal representations. In an overt picture description task, we manipulated repetition of sense (the semantic structure of the sentence) and reference (the described situation) separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these two components of semantic encoding. We also performed a control experiment with the same stimuli and design but without any linguistic task to identify areas involved in perception of the stimuli per se. The bilateral inferior parietal lobes were selectively sensitive to repetition of reference, while left inferior frontal gyrus showed selective suppression to repetition of sense. Strikingly, a widespread network of areas associated with language processing (left middle frontal gyrus, bilateral superior parietal lobes and bilateral posterior temporal gyri) all showed repetition suppression to both sense and reference processing. These areas are probably involved in mapping reference onto sense, the crucial step in semantic encoding. These results enable us to track the transition from non-verbal to verbal representations in our brains. -
Menenti, L., Segaert, K., & Hagoort, P. (2012). The neuronal infrastructure of speaking. Brain and Language, 122, 71-80. doi:10.1016/j.bandl.2012.04.012.
Abstract
Models of speaking distinguish producing meaning, words and syntax as three different linguistic components of speaking. Nevertheless, little is known about the brain’s integrated neuronal infrastructure for speech production. We investigated semantic, lexical and syntactic aspects of speaking using fMRI. In a picture description task, we manipulated repetition of sentence meaning, words, and syntax separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these processes. We demonstrate that semantic, lexical and syntactic processes are carried out in partly overlapping and partly distinct brain networks and show that the classic left-hemispheric dominance for language is present for syntax but not semantics. -
Menenti, L. (2010). The right language: Differential hemispheric contributions to language production and comprehension in context. PhD Thesis, Radboud University Nijmegen, Nijmegen.
-
Menenti, L., Pickering, M. J., & Garrod, S. C. (2012). Towards a neural basis of interactive alignment in conversation. Frontiers in Human Neuroscience, 6, 185. doi:10.3389/fnhum.2012.00185.
Abstract
The interactive-alignment account of dialogue proposes that interlocutors achieve conversational success by aligning their understanding of the situation under discussion. Such alignment occurs because they prime each other at different levels of representation (e.g., phonology, syntax, semantics), and this is possible because these representations are shared across production and comprehension. In this paper, we briefly review the behavioral evidence, and then consider how findings from cognitive neuroscience might lend support to this account, on the assumption that alignment of neural activity corresponds to alignment of mental states. We first review work supporting representational parity between production and comprehension, and suggest that neural activity associated with phonological, lexical, and syntactic aspects of production and comprehension are closely related. We next consider evidence for the neural bases of the activation and use of situation models during production and comprehension, and how these demonstrate the activation of non-linguistic conceptual representations associated with language use. We then review evidence for alignment of neural mechanisms that are specific to the act of communication. Finally, we suggest some avenues of further research that need to be explored to test crucial predictions of the interactive alignment account. -
Merolla, D., & Ameka, F. K. (2010). Hogbetsotso: Celebration and songs of the Ewe migration story. Interview with Dr. Datey-Kumodzie. Verba Africana series - Video documentation and Digital Materials, 4.
-
Merolla, D., & Ameka, F. K. (2012). Reflections on video fieldwork: The making of Verba Africana IV on the Ewe Hogbetsotso Festival. In D. Merolla, J. Jansen, & K. Nait-Zerrad (Eds.), Multimedia research and documentation of oral genres in Africa - The step forward (pp. 123-132). Münster: Lit. -
Merritt, D. J., Casasanto, D., & Brannon, E. M. (2010). Do monkeys think in metaphors? Representations of space and time in monkeys and humans. Cognition, 117, 191-202. doi:10.1016/j.cognition.2010.08.011.
Abstract
Research on the relationship between the representation of space and time has produced two contrasting proposals. ATOM posits that space and time are represented via a common magnitude system, suggesting a symmetrical relationship between space and time. According to metaphor theory, however, representations of time depend on representations of space asymmetrically. Previous findings in humans have supported metaphor theory. Here, we investigate the relationship between time and space in a nonverbal species, by testing whether non-human primates show space–time interactions consistent with metaphor theory or with ATOM. We tested two rhesus monkeys and 16 adult humans in a nonverbal task that assessed the influence of an irrelevant dimension (time or space) on a relevant dimension (space or time). In humans, spatial extent had a large effect on time judgments whereas time had a small effect on spatial judgments. In monkeys, both spatial and temporal manipulations showed large bi-directional effects on judgments. In contrast to humans, spatial manipulations in monkeys did not produce a larger effect on temporal judgments than the reverse. Thus, consistent with previous findings, human adults showed asymmetrical space–time interactions that were predicted by metaphor theory. In contrast, monkeys showed patterns that were more consistent with ATOM. -
Meulenbroek, O., Kessels, R. P. C., De Rover, M., Petersson, K. M., Olde Rikkert, M. G. M., Rijpkema, M., & Fernández, G. (2010). Age-effects on associative object-location memory. Brain Research, 1315, 100-110. doi:10.1016/j.brainres.2009.12.011.
Abstract
Aging is accompanied by an impairment of associative memory. The medial temporal lobe and fronto-striatal network, both involved in associative memory, are known to decline functionally and structurally with age, leading to the so-called associative binding deficit and the resource deficit. Because the MTL and fronto-striatal network interact, they might also be able to support each other. We therefore employed an episodic memory task probing memory for sequences of object–location associations, where the demand on self-initiated processing was manipulated during encoding: either all the objects were visible simultaneously (rich environmental support) or every object became visible transiently (poor environmental support). Following the concept of resource deficit, we hypothesised that the elderly probably have difficulty using their declarative memory system when demands on self-initiated processing are high (poor environmental support). Our behavioural study showed that only the young use the rich environmental support in a systematic way, by placing the objects next to each other. With the task adapted for fMRI, we found that elderly showed stronger activity than young subjects during retrieval of environmentally richly encoded information in the basal ganglia, thalamus, left middle temporal/fusiform gyrus and right medial temporal lobe (MTL). These results indicate that rich environmental support leads to recruitment of the declarative memory system in addition to the fronto-striatal network in elderly, while the young use more posterior brain regions likely related to imagery. We propose that elderly try to solve the task by additional recruitment of stimulus-response associations, which might partly compensate their limited attentional resources. -
Meyer, A. S., Wheeldon, L. R., Van der Meulen, F., & Konopka, A. E. (2012). Effects of speech rate and practice on the allocation of visual attention in multiple object naming. Frontiers in Psychology, 3, 39. doi:10.3389/fpsyg.2012.00039.
Abstract
Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye-speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast. -
Minagawa-Kawai, Y., Cristià, A., & Dupoux, E. (2012). Erratum to “Cerebral lateralization and early speech acquisition: A developmental scenario” [Dev. Cogn. Neurosci. 1 (2011) 217–232]. Developmental Cognitive Neuroscience, 2(1), 194-195. doi:10.1016/j.dcn.2011.07.011.
Abstract
Refers to Yasuyo Minagawa-Kawai, Alejandrina Cristià, Emmanuel Dupoux "Cerebral lateralization and early speech acquisition: A developmental scenario" Developmental Cognitive Neuroscience, Volume 1, Issue 3, July 2011, Pages 217-232 -
Mishra, R. K., Singh, N., Pandey, A., & Huettig, F. (2012). Spoken language-mediated anticipatory eye movements are modulated by reading ability: Evidence from Indian low and high literates. Journal of Eye Movement Research, 5(1): 3, pp. 1-10. doi:10.16910/jemr.5.1.3.
Abstract
We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low literacy group this shift of eye gaze occurred only when the target noun (i.e. "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms of how reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates. -
Mitterer, H., & Jesse, A. (2010). Correlation versus causation in multisensory perception. Psychonomic Bulletin & Review, 17, 329-334. doi:10.3758/PBR.17.3.329.
Abstract
Events are often perceived in multiple modalities. The co-occurring proximal visual and auditory stimuli are mostly also causally linked to the distal event. This makes it difficult to evaluate whether learned correlation or perceived causation guides binding in multisensory perception. Piano tones are an interesting exception: Piano tones are associated with seeing key strokes but are directly caused by hammers that hit strings hidden from observation. We examined the influence of seeing the hammer or the key stroke on auditory temporal order judgments (TOJ). Participants judged the temporal order of a dog bark and a piano tone, while seeing the piano stroke shifted temporally relative to its audio signal. Visual lead increased "piano-first" responses in auditory TOJ, but more so when only the associated key stroke was visible than when the sound-producing hammer was, though both were equally visually salient. This provides evidence for a learning account of audiovisual perception. -
Mitterer, H. (Ed.). (2012). Ecological aspects of speech perception [Research topic] [Special Issue]. Frontiers in Cognition.
Abstract
Our knowledge of speech perception is largely based on experiments conducted with carefully recorded clear speech presented under good listening conditions to undistracted listeners - a near-ideal situation, in other words. But reality poses a different set of challenges. First of all, listeners may need to divide their attention between speech comprehension and another task (e.g., driving). Outside the laboratory, the speech signal is often slurred by less than careful pronunciation and the listener has to deal with background noise. Moreover, in a globalized world, listeners need to understand speech in more than their native language. Relatedly, the speakers we listen to often have a different language background, so we have to deal with a foreign or regional accent we are not familiar with. Finally, outside the laboratory, speech perception is not an end in itself, but rather a means to contribute to a conversation. Listeners not only need to understand the speech they are hearing, they also need to use this information to plan and time their own responses. For this special topic, we invite papers that address any of these ecological aspects of speech perception. -
Mitterer, H., & Tuinman, A. (2012). The role of native-language knowledge in the perception of casual speech in a second language. Frontiers in Psychology, 3, 249. doi:10.3389/fpsyg.2012.00249.
Abstract
Casual speech processes, such as /t/-reduction, make word recognition harder. Additionally, word recognition is also harder in a second language (L2). Combining these challenges, we investigated whether L2 learners have recourse to knowledge from their native language (L1) when dealing with casual-speech processes in their L2. In three experiments, production and perception of /t/-reduction was investigated. An initial production experiment showed that /t/-reduction occurred in both languages and patterned similarly in proper nouns but differed when /t/ was a verbal inflection. Two perception experiments compared the performance of German learners of Dutch with that of native speakers for nouns and verbs. Mirroring the production patterns, German learners' performance strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual speech process in a second language is problematic for learners when the process is not known from the learner's native language, similar to what has been observed for phoneme contrasts. -
Moisik, S. R., Esling, J. H., & Crevier-Buchman, L. (2010). A high-speed laryngoscopic investigation of aryepiglottic trilling. The Journal of the Acoustical Society of America, 127(3), 1548-1558. doi:10.1121/1.3299203.
Abstract
Six aryepiglottic trills with varied laryngeal parameters were recorded using high-speed laryngoscopy to investigate the nature of the oscillatory behavior of the upper margin of the epilaryngeal tube. Image analysis techniques were applied to extract data about the patterns of aryepiglottic fold oscillation, with a focus on the oscillatory frequencies of the folds. The acoustic impact of aryepiglottic trilling is also considered, along with possible interactions between the aryepiglottic vibration and vocal fold vibration during the voiced trill. Overall, aryepiglottic trilling is deemed to be correctly labeled as a trill in phonetic terms, while also acting as a means to alter the quality of voicing to be auditorily harsh. In terms of its characterization, aryepiglottic vibration is considerably irregular, but it shows indications of contributing quasi-harmonic excitation of the vocal tract, particularly noticeable under conditions of glottal voicelessness. Aryepiglottic vibrations appear to be largely independent of glottal vibration in terms of oscillatory frequency but can be increased in frequency by increasing overall laryngeal constriction. There is evidence that aryepiglottic vibration induces an alternating vocal fold vibration pattern. It is concluded that aryepiglottic trilling, like ventricular phonation, should be regarded as a complex, if highly irregular, sound source. -
Moseley, R., Carota, F., Hauk, O., Mohr, B., & Pulvermüller, F. (2012). A role for the motor system in binding abstract emotional meaning. Cerebral Cortex, 22(7), 1634-1647. doi:10.1093/cercor/bhr238.
Abstract
Sensorimotor areas activate to action- and object-related words, but their role in abstract meaning processing is still debated. Abstract emotion words denoting body internal states are a critical test case because they lack referential links to objects. If actions expressing emotion are crucial for learning correspondences between word forms and emotions, emotion word–evoked activity should emerge in motor brain systems controlling the face and arms, which typically express emotions. To test this hypothesis, we recruited 18 native speakers and used event-related functional magnetic resonance imaging to compare brain activation evoked by abstract emotion words to that by face- and arm-related action words. In addition to limbic regions, emotion words indeed sparked precentral cortex, including body-part–specific areas activated somatotopically by face words or arm words. Control items, including hash mark strings and animal words, failed to activate precentral areas. We conclude that, similar to their role in action word processing, activation of frontocentral motor systems in the dorsal stream reflects the semantic binding of sign and meaning of abstract words denoting emotions and possibly other body internal states. -
Muglia, P., Tozzi, F., Galwey, N. W., Francks, C., Upmanyu, R., Kong, X., Antoniades, A., Domenici, E., Perry, J., Rothen, S., Vandeleur, C. L., Mooser, V., Waeber, G., Vollenweider, P., Preisig, M., Lucae, S., Muller-Myhsok, B., Holsboer, F., Middleton, L. T., & Roses, A. D. (2010). Genome-wide association study of recurrent major depressive disorder in two European case-control cohorts. Molecular Psychiatry, 15(6), 589-601. doi:10.1038/mp.2008.131.
Abstract
Major depressive disorder (MDD) is a highly prevalent disorder with substantial heritability, which is higher in the variant of MDD characterized by recurrent episodes of depression. Genetic studies have thus far failed to identify clear and consistent evidence of genetic risk factors for MDD. We conducted a genome-wide association study (GWAS) in two independent datasets. The first GWAS was performed on 1022 recurrent MDD patients and 1000 controls genotyped on the Illumina 550 platform. The second was conducted on 492 recurrent MDD patients and 1052 controls selected from a population-based collection, genotyped on the Affymetrix 5.0 platform. Neither GWAS identified any SNP that achieved genome-wide significance. We obtained imputed genotypes at the Illumina loci for the individuals genotyped on the Affymetrix platform, and performed a meta-analysis of the two GWASs for this common set of approximately half a million SNPs. The meta-analysis did not yield genome-wide significant results either. The results from our study suggest that SNPs with substantial odds ratios are unlikely to exist for MDD, at least in our datasets and among the relatively common SNPs genotyped or tagged by the half-million-loci arrays. Meta-analysis of larger datasets is warranted to identify SNPs with smaller effects or with rarer allele frequencies that contribute to the risk of MDD.
Additional information
http://www.nature.com/mp/journal/v15/n6/suppinfo/mp2008131s1.html?url=/mp/journ… -
Munro, R., Bethard, S., Kuperman, V., Lai, V. T., Melnick, R., Potts, C., Schnoebelen, T., & Tily, H. (2010). Crowdsourcing and language studies: The new generation of linguistic data. In Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Proceedings of the Workshop (pp. 122-130). Stroudsburg, PA: Association for Computational Linguistics.
-
Namjoshi, J., Tremblay, A., Broersma, M., Kim, S., & Cho, T. (2012). Influence of recent linguistic exposure on the segmentation of an unfamiliar language [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 1968.
Abstract
Studies have shown that listeners segmenting unfamiliar languages transfer native-language (L1) segmentation cues. These studies, however, conflated L1 and recent linguistic exposure. The present study investigates the relative influences of L1 and recent linguistic exposure on the use of prosodic cues for segmenting an artificial language (AL). Participants were L1-French listeners, high-proficiency L2-French L1-English listeners, and L1-English listeners without functional knowledge of French. The prosodic cue assessed was F0 rise, which is word-final in French, but in English tends to be word-initial. 30 participants heard a 20-minute AL speech stream with word-final boundaries marked by F0 rise, and decided in a subsequent listening task which of two words (without word-final F0 rise) had been heard in the speech stream. The analyses revealed a marginally significant effect of L1 (all listeners) and, importantly, a significant effect of recent linguistic exposure (L1-French and L2-French listeners): accuracy increased with decreasing time in the US since the listeners’ last significant (3+ months) stay in a French-speaking environment. Interestingly, no effect of L2 proficiency was found (L2-French listeners). -
Narasimhan, B., Kopecka, A., Bowerman, M., Gullberg, M., & Majid, A. (2012). Putting and taking events: A crosslinguistic perspective. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 1-18). Amsterdam: Benjamins. -
Narasimhan, B. (2012). Putting and Taking in Tamil and Hindi. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 201-230). Amsterdam: Benjamins.
Abstract
Many languages have general or “light” verbs used by speakers to describe a wide range of situations owing to their relatively schematic meanings, e.g., the English verb do that can be used to describe many different kinds of actions, or the verb put that labels a range of types of placement of objects at locations. Such semantically bleached verbs often become grammaticalized and used to encode an extended (set of) meaning(s), e.g., Tamil veyyii ‘put/place’ is used to encode causative meaning in periphrastic causatives (e.g., okkara veyyii ‘make sit’, nikka veyyii ‘make stand’). But do general verbs in different languages have the same kinds of (schematic) meanings and extensional ranges? Or do they reveal different, perhaps even cross-cutting, ways of structuring the same semantic domain in different languages? These questions require detailed crosslinguistic investigation using comparable methods of eliciting data. The present study is a first step in this direction, and focuses on the use of general verbs to describe events of placement and removal in two South Asian languages, Hindi and Tamil. -
Newbury, D. F., Fisher, S. E., & Monaco, A. P. (2010). Recent advances in the genetics of language impairment. Genome Medicine, 2, 6. doi:10.1186/gm127.
Abstract
Specific language impairment (SLI) is defined as an unexpected and persistent impairment in language ability despite adequate opportunity and intelligence and in the absence of any explanatory medical conditions. This condition is highly heritable and affects between 5% and 8% of pre-school children. Over the past few years, investigations have begun to uncover genetic factors that may contribute to susceptibility to language impairment. So far, variants in four specific genes have been associated with spoken language disorders - forkhead box P2 (FOXP2) and contactin-associated protein-like 2 (CNTNAP2) on chromosome 7, and calcium-transporting ATPase 2C2 (ATP2C2) and c-MAF inducing protein (CMIP) on chromosome 16. Here, we describe the different ways in which these genes were identified as candidates for language impairment. We discuss how characterization of these genes, and the pathways in which they are involved, may enhance our understanding of language disorders and of the biological foundations of language acquisition. -
Nieuwland, M. S., Martin, A. E., & Carreiras, M. (2012). Brain regions that process case: Evidence from Basque. Human Brain Mapping, 33(11), 2509-2520. doi:10.1002/hbm.21377.
Abstract
The aim of this event-related fMRI study was to investigate the cortical networks involved in case processing, an operation that is crucial to language comprehension yet whose neural underpinnings are not well understood. What is the relationship of these networks to those that serve other aspects of syntactic and semantic processing? Participants read Basque sentences that contained case violations, number agreement violations or semantic anomalies, or that were both syntactically and semantically correct. Case violations elicited activity increases, compared to correct control sentences, in a set of parietal regions including the posterior cingulate, the precuneus, and the left and right inferior parietal lobules. Number agreement violations also elicited activity increases in left and right inferior parietal regions, and additional activations in the left and right middle frontal gyrus. Regions-of-interest analyses showed that almost all of the clusters that were responsive to case or number agreement violations did not differentiate between these two. In contrast, the left and right anterior inferior frontal gyrus and the dorsomedial prefrontal cortex were only sensitive to semantic violations. Our results suggest that whereas syntactic and semantic anomalies clearly recruit distinct neural circuits, case and number violations recruit largely overlapping neural circuits, and that the distinction between the two rests on the relative contributions of parietal and prefrontal regions, respectively. Furthermore, our results are consistent with recently reported contributions of bilateral parietal and dorsolateral brain regions to syntactic processing, pointing towards potential extensions of current neurocognitive theories of language. -
Nieuwland, M. S. (2012). Establishing propositional truth-value in counterfactual and real-world contexts during sentence comprehension: Differential sensitivity of the left and right inferior frontal gyri. NeuroImage, 59(4), 3433-3440. doi:10.1016/j.neuroimage.2011.11.018.
Abstract
What makes a proposition true or false has traditionally played an essential role in philosophical and linguistic theories of meaning. A comprehensive neurobiological theory of language must ultimately be able to explain the combined contributions of real-world truth-value and discourse context to sentence meaning. This fMRI study investigated the neural circuits that are sensitive to the propositional truth-value of sentences about counterfactual worlds, aiming to reveal differential hemispheric sensitivity of the inferior prefrontal gyri to counterfactual truth-value and real-world truth-value. Participants read true or false counterfactual conditional sentences (“If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would be Russia/America”) and real-world sentences (“Because N.A.S.A. developed its Apollo Project, the first country to land on the moon has been America/Russia”) that were matched on contextual constraint and truth-value. ROI analyses showed that whereas the left BA 47 showed similar activity increases to counterfactual false sentences and to real-world false sentences (compared to true sentences), the right BA 47 showed a larger increase for counterfactual false sentences. Moreover, whole-brain analyses revealed a distributed neural circuit for dealing with propositional truth-value. These results constitute the first evidence for hemispheric differences in processing counterfactual truth-value and real-world truth-value, and point toward additional right hemisphere involvement in counterfactual comprehension. -
Nieuwland, M. S., & Martin, A. E. (2012). If the real world were irrelevant, so to speak: The role of propositional truth-value in counterfactual sentence comprehension. Cognition, 122(1), 102-109. doi:10.1016/j.cognition.2011.09.001.
Abstract
Propositional truth-value can be a defining feature of a sentence’s relevance to the unfolding discourse, and establishing propositional truth-value in context can be key to successful interpretation. In the current study, we investigate its role in the comprehension of counterfactual conditionals, which describe imaginary consequences of hypothetical events, and are thought to require keeping in mind both what is true and what is false. Pre-stored real-world knowledge may therefore intrude upon and delay counterfactual comprehension, which is predicted by some accounts of discourse comprehension, and has been observed during online comprehension. The impact of propositional truth-value may thus be delayed in counterfactual conditionals, as also claimed for sentences containing other types of logical operators (e.g., negation, scalar quantifiers). In an event-related potential (ERP) experiment, we investigated the impact of propositional truth-value when described consequences are both true and predictable given the counterfactual premise. False words elicited larger N400 ERPs than true words, in negated counterfactual sentences (e.g., “If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would have been Russia/America”) and real-world sentences (e.g., “Because N.A.S.A. developed its Apollo Project, the first country to land on the moon was America/Russia”) alike. These indistinguishable N400 effects of propositional truth-value, elicited by opposite word pairs, argue against disruptions by real-world knowledge during counterfactual comprehension, and suggest that incoming words are mapped onto the counterfactual context without any delay. Thus, provided a sufficiently constraining context, propositional truth-value rapidly impacts ongoing semantic processing, be the proposition factual or counterfactual. -
Nieuwland, M. S., Ditman, T., & Kuperberg, G. R. (2010). On the incrementality of pragmatic processing: An ERP investigation of informativeness and pragmatic abilities. Journal of Memory and Language, 63(3), 324-346. doi:10.1016/j.jml.2010.06.005.
Abstract
In two event-related potential (ERP) experiments, we determined to what extent Grice’s maxim of informativeness as well as pragmatic ability contributes to the incremental build-up of sentence meaning, by examining the impact of underinformative versus informative scalar statements (e.g. “Some people have lungs/pets, and…”) on the N400 event-related potential (ERP), an electrophysiological index of semantic processing. In Experiment 1, only pragmatically skilled participants (as indexed by the Autism Quotient Communication subscale) showed a larger N400 to underinformative statements. In Experiment 2, this effect disappeared when the critical words were unfocused so that the local underinformativeness went unnoticed (e.g., “Some people have lungs that…”). Our results suggest that, while pragmatic scalar meaning can incrementally contribute to sentence comprehension, this contribution is dependent on contextual factors, whether these are derived from individual pragmatic abilities or the overall experimental context. -
Nitschke, S., Kidd, E., & Serratrice, L. (2010). First language transfer and long-term structural priming in comprehension. Language and Cognitive Processes, 25(1), 94-114. doi:10.1080/01690960902872793.
Abstract
The present study investigated L1 transfer effects in L2 sentence processing and syntactic priming through comprehension in speakers of German and Italian. L1 and L2 speakers of both languages participated in a syntactic priming experiment that aimed to shift their preferred interpretation of ambiguous relative clause constructions. The results suggested that L1 transfer affects L2 processing but not the strength of structural priming, and therefore does not hinder the acquisition of L2 parsing strategies. We also report evidence that structural priming through comprehension can persist in L1 and L2 speakers over an experimental phase without further exposure to primes. Finally, we observed that priming can occur for what are essentially novel form-meaning pairings for L2 learners, suggesting that adult learners can rapidly associate existing forms with new meanings. -
Noble, J., De Ruiter, J. P., & Arnold, K. (2010). From monkey alarm calls to human language: How simulations can fill the gap. Adaptive Behavior, 18, 66-82. doi:10.1177/1059712309350974.
Abstract
Observations of alarm calling behavior in putty-nosed monkeys are suggestive of a link with human language evolution. However, as is often the case in studies of animal behavior and cognition, competing theories are underdetermined by the available data. We argue that computational modeling, and in particular the use of individual-based simulations, is an effective way to reduce the size of the pool of candidate explanations. Simulation achieves this both through the classification of evolutionary trajectories as either plausible or implausible, and by putting lower bounds on the cognitive complexity required to perform particular behaviors. A case is made for using both of these strategies to understand the extent to which the alarm calls of putty-nosed monkeys are likely to be a good model for human language evolution. -
Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Allophonic mode of speech perception in Dutch children at risk for dyslexia: A longitudinal study. Research in developmental disabilities, 33, 1469-1483. doi:10.1016/j.ridd.2012.03.021.
Abstract
There is ample evidence that individuals with dyslexia have a phonological deficit. A growing body of research also suggests that individuals with dyslexia have problems with categorical perception, as evidenced by weaker discrimination of between-category differences and better discrimination of within-category differences compared to average readers. Whether the categorical perception problems of individuals with dyslexia are a result of their reading problems or a cause has yet to be determined. Whether the observed perception deficit relates to a more general auditory deficit or is specific to speech also has yet to be determined. To shed more light on these issues, the categorical perception abilities of children at risk for dyslexia and chronological age controls were investigated before and after the onset of formal reading instruction in a longitudinal study. Both identification and discrimination data were collected using identical paradigms for speech and non-speech stimuli. Results showed the children at risk for dyslexia to shift from an allophonic mode of perception in kindergarten to a phonemic mode of perception in first grade, while the control group showed a phonemic mode already in kindergarten. The children at risk for dyslexia thus showed an allophonic perception deficit in kindergarten, which was later suppressed by phonemic perception as a result of formal reading instruction in first grade; allophonic perception in kindergarten can thus be treated as a clinical marker for the possibility of later reading problems. -
Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Neural evidence of allophonic perception in children at risk for dyslexia. Neuropsychologia, 50, 2010-2017. doi:10.1016/j.neuropsychologia.2012.04.026.
Abstract
Learning to read is a complex process that develops normally in the majority of children and requires the mapping of graphemes to their corresponding phonemes. Problems with the mapping process nevertheless occur in about 5% of the population and are typically attributed to poor phonological representations, which are — in turn — attributed to underlying speech processing difficulties. We examined auditory discrimination of speech sounds in 6-year-old beginning readers with a familial risk of dyslexia (n=31) and no such risk (n=30) using the mismatch negativity (MMN). MMNs were recorded for stimuli belonging to either the same phoneme category (acoustic variants of/bə/) or different phoneme categories (/bə/vs./də/). Stimuli from different phoneme categories elicited MMNs in both the control and at-risk children, but the MMN amplitude was clearly lower in the at-risk children. In contrast, the stimuli from the same phoneme category elicited an MMN in only the children at risk for dyslexia. These results show children at risk for dyslexia to be sensitive to acoustic properties that are irrelevant in their language. Our findings thus suggest a possible cause of dyslexia in that they show 6-year-old beginning readers with at least one parent diagnosed with dyslexia to have a neural sensitivity to speech contrasts that are irrelevant in the ambient language. This sensitivity clearly hampers the development of stable phonological representations and thus leads to significant reading impairment later in life. -
Noordzij, M. L., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Neural correlates of intentional communication. Frontiers in Neuroscience, 4, E188. doi:10.3389/fnins.2010.00188.
Abstract
We know a great deal about the neurophysiological mechanisms supporting instrumental actions, i.e. actions designed to alter the physical state of the environment. In contrast, little is known about our ability to select communicative actions, i.e. actions directly designed to modify the mental state of another agent. We have recently provided novel empirical evidence for a mechanism in which a communicator selects his actions on the basis of a prediction of the communicative intentions that an addressee is most likely to attribute to those actions. The main novelty of those findings was that this prediction of intention recognition is cerebrally implemented within the intention recognition system of the communicator, is modulated by the ambiguity in meaning of the communicative acts, and not by their sensorimotor complexity. The characteristics of this predictive mechanism support the notion that human communicative abilities are distinct from both sensorimotor and linguistic processes. -
Nora, A., Hultén, A., Karvonen, L., Kim, J.-Y., Lehtonen, M., Yli-Kaitala, H., Service, E., & Salmelin, R. (2012). Long-term phonological learning begins at the level of word form. NeuroImage, 63, 789-799. doi:10.1016/j.neuroimage.2012.07.026.
Abstract
Incidental learning of phonological structures through repeated exposure is an important component of native and foreign-language vocabulary acquisition that is not well understood at the neurophysiological level. It is also not settled when this type of learning occurs at the level of word forms as opposed to phoneme sequences. Here, participants listened to and repeated back foreign phonological forms (Korean words) and new native-language word forms (Finnish pseudowords) on two days. Recognition performance was improved, repetition latency became shorter and repetition accuracy increased when phonological forms were encountered multiple times. Cortical magnetoencephalography responses occurred bilaterally but the experimental effects only in the left hemisphere. Superior temporal activity at 300–600 ms, probably reflecting acoustic-phonetic processing, lasted longer for foreign phonology than for native phonology. Formation of longer-term auditory-motor representations was evidenced by a decrease of a spatiotemporally separate left temporal response and correlated increase of left frontal activity at 600–1200 ms on both days. The results point to item-level learning of novel whole-word representations. -
Norcliffe, E., & Enfield, N. J. (Eds.). (2010). Field Manual Volume 13. Nijmegen: Max Planck Institute for Psycholinguistics. -
Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2010). The grammar of perception. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 7-16). Nijmegen: Max Planck Institute for Psycholinguistics. -
Nordhoff, S., & Hammarström, H. (2012). Glottolog/Langdoc: Increasing the visibility of grey literature for low-density languages. In N. Calzolari (Ed.), Proceedings of the 8th International Conference on Language Resources and Evaluation [LREC 2012], May 23-25, 2012 (pp. 3289-3294). [Paris]: ELRA.
Abstract
Language resources can be divided into structural resources treating phonology, morphosyntax, semantics etc. and resources treating the social, demographic, ethnic, political context. A third type are meta-resources, like bibliographies, which provide access to the resources of the first two kinds. This poster will present the Glottolog/Langdoc project, a comprehensive bibliography providing web access to 180k bibliographical records to (mainly) low visibility resources from low-density languages. The resources are annotated for macro-area, content language, and document type and are available in XHTML and RDF. -
Nouaouri, N. (2012). The semantics of placement and removal predicates in Moroccan Arabic. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 99-122). Amsterdam: Benjamins.
Abstract
This article explores the expression of placement and removal events in Moroccan Arabic, particularly the semantic features of ‘putting’ and ‘taking’ verbs, classified in accordance with their combination with Goal and/or Source NPs. Moroccan Arabic verbs encode a variety of components of placement and removal events, including containment, attachment, features of the figure, and trajectory. Furthermore, accidental events are distinguished from deliberate events either by the inherent semantics of predicates or denoted syntactically. The postures of the Figures, in spite of some predicates distinguishing them, are typically not specified as they are in other languages, such as Dutch. Although Ground locations are frequently mentioned in both source-oriented and goal-oriented clauses, they are used more often in goal-oriented clauses. -
O’Connor, L. (2012). Take it up, down, and away: Encoding placement and removal in Lowland Chontal. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 297-326). Amsterdam: Benjamins.
Abstract
This paper offers a structural and semantic analysis of expressions of caused motion in Lowland Chontal of Oaxaca, an indigenous language of southern Mexico. The data were collected using a video stimulus designed to elicit a wide range of caused motion event descriptions. The most frequent event types in the corpus depict caused motion to and from relations of support and containment, fundamental notions in the description of spatial relations between two entities and critical semantic components of the linguistic encoding of caused motion in this language. Formal features of verbal construction type and argument realization are examined by sorting event descriptions into semantic types of placement and removal, to and from support and to and from containment. Together with typological factors that shape the distribution of spatial semantics and referent expression, separate treatments of support and containment relations serve to clarify notable asymmetries in patterns of predicate type and argument realization. -
Oliver, G., Gullberg, M., Hellwig, F., Mitterer, H., & Indefrey, P. (2012). Acquiring L2 sentence comprehension: A longitudinal study of word monitoring in noise. Bilingualism: Language and Cognition, 15, 841-857. doi:10.1017/S1366728912000089.
Abstract
This study investigated the development of second language online auditory processing with ab initio German learners of Dutch. We assessed the influence of different levels of background noise and different levels of semantic and syntactic target word predictability on word-monitoring latencies. There was evidence of syntactic, but not lexical-semantic, transfer from the L1 to the L2 from the onset of L2 learning. An initial stronger adverse effect of noise on syntactic compared to phonological processing disappeared after two weeks of learning Dutch suggesting a change towards more robust syntactic processing. At the same time the L2 learners started to exploit semantic constraints predicting upcoming target words. The use of semantic predictability remained less efficient compared to native speakers until the end of the observation period. The improvement and the persistent problems in semantic processing we found were independent of noise and rather seem to reflect the need for more context information to build up online semantic representations in L2 listening. -
Orfanidou, E., Adam, R., Morgan, G., & McQueen, J. M. (2010). Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language, 62(3), 272-283. doi:10.1016/j.jml.2009.12.001.
Abstract
Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC), listeners segment speech so as to avoid impossible words in the input. We argue here that the PWC is a modality-general principle. Deaf signers of British Sign Language (BSL) spotted real BSL signs embedded in nonsense-sign contexts more easily when the nonsense signs were possible BSL signs than when they were not. A control experiment showed that there were no articulatory differences between the different contexts. A second control experiment on segmentation in spoken Dutch strengthened the claim that the main BSL result likely reflects the operation of a lexical-viability constraint. It appears that signed and spoken languages, in spite of radical input differences, are segmented so as to leave no residues of the input that cannot be words. -
Ortega, G., & Morgan, G. (2010). Comparing child and adult development of a visual phonological system. Language interaction and acquisition, 1(1), 67-81. doi:10.1075/lia.1.1.05ort.
Abstract
Research has documented systematic articulation differences in young children’s first signs compared with the adult input. Explanations range from the implementation of phonological processes, cognitive limitations and motor immaturity. One way of disentangling these possible explanations is to investigate signing articulation in adults who do not know any sign language but have mature cognitive and motor development. Some preliminary observations are provided on signing accuracy in a group of adults using a sign repetition methodology. Adults make the most errors with marked handshapes and produce movement and location errors akin to those reported for child signers. Secondly, there are both positive and negative influences of sign iconicity on sign repetition in adults. Possible reasons are discussed for these iconicity effects based on gesture. -
Ortega, G. (2010). MSJE TXT: Un evento social. Lectura y vida: Revista latinoamericana de lectura, 4, 44-53.
-
Otake, T., McQueen, J. M., & Cutler, A. (2010). Competition in the perception of spoken Japanese words. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 114-117).
Abstract
Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors, recognition of the embedded word was more difficult than when such a sequence was compatible with few competitors. This clear effect of competition, established here for preceding context in Japanese, joins similar demonstrations, in other languages and for following contexts, to underline that the functional architecture of the human spoken-word recognition system is a universal one. -
Ozyurek, A. (2012). Gesture. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An international handbook (pp. 626-646). Berlin: Mouton.
Abstract
Gestures are meaningful movements of the body, the hands, and the face during communication, which accompany the production of both spoken and signed utterances. Recent research has shown that gestures are an integral part of language and that they contribute semantic, syntactic, and pragmatic information to the linguistic utterance. Furthermore, they reveal internal representations of the language user during communication in ways that might not be encoded in the verbal part of the utterance. Firstly, this chapter summarizes research on the role of gesture in spoken languages. Subsequently, it gives an overview of how gestural components might manifest themselves in sign languages, that is, in a situation in which both gesture and sign are expressed by the same articulators. Current studies are discussed that address the question of whether gestural components are the same or different in the two language modalities from a semiotic as well as from a cognitive and processing viewpoint. Understanding the role of gesture in both sign and spoken language contributes to our knowledge of the human language faculty as a multimodal communication system. -
Ozyurek, A., Zwitserlood, I., & Perniss, P. M. (2010). Locative expressions in signed languages: A view from Turkish Sign Language (TID). Linguistics, 48(5), 1111-1145. doi:10.1515/LING.2010.036.
Abstract
Locative expressions encode the spatial relationship between two (or more) entities. In this paper, we focus on locative expressions in signed language, which use the visual-spatial modality for linguistic expression, specifically in Turkish Sign Language (Türk İşaret Dili, henceforth TİD). We show that TİD uses various strategies in discourse to encode the relation between a Ground entity (i.e., a bigger and/or backgrounded entity) and a Figure entity (i.e., a smaller entity, which is in the focus of attention). Some of these strategies exploit affordances of the visual modality for analogue representation and provide evidence for modality-specific effects on locative expressions in sign languages. However, other modality-specific strategies, e.g., the simultaneous expression of Figure and Ground, which have been reported for many other sign languages, occur only sparsely in TİD. Furthermore, TİD uses categorical as well as analogical structures in locative expressions. On the basis of these findings, we discuss differences and similarities between signed and spoken languages to broaden our understanding of the range of structures used in natural language (i.e., in both the visual-spatial and oral-aural modalities) to encode locative relations. A general linguistic theory of spatial relations, and specifically of locative expressions, must take all structures that might arise in both modalities into account before it can generalize over the human language faculty. -
Ozyurek, A. (2010). The role of iconic gestures in production and comprehension of language: Evidence from brain and behavior. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27 2009. Revised selected papers (pp. 1-10). Berlin: Springer. -
Paternoster, L., Zhurov, A., Toma, A., Kemp, J., St Pourcain, B., Timpson, N., McMahon, G., McArdle, W., Ring, S., Smith, G., Richmond, S., & Evans, D. (2012). Genome-wide Association Study of Three-Dimensional Facial Morphology Identifies a Variant in PAX3 Associated with Nasion Position. The American Journal of Human Genetics, 90(3), 478-485. doi:10.1016/j.ajhg.2011.12.021.
Abstract
Craniofacial morphology is highly heritable, but little is known about which genetic variants influence normal facial variation in the general population. We aimed to identify genetic variants associated with normal facial variation in a population-based cohort of 15-year-olds from the Avon Longitudinal Study of Parents and Children. 3D high-resolution images were obtained with two laser scanners; these were merged and aligned, and 22 landmarks were identified and their x, y, and z coordinates used to generate 54 3D distances reflecting facial features. 14 principal components (PCs) were also generated from the landmark locations. We carried out genome-wide association analyses of these distances and PCs in 2,185 adolescents and attempted to replicate any significant associations in a further 1,622 participants. In the discovery analysis no associations were observed with the PCs, but we identified four associations with the distances, and one of these, the association between rs7559271 in PAX3 and the nasion to midendocanthion distance (n-men), was replicated (p = 4 × 10−7). In a combined analysis, each G allele of rs7559271 was associated with an increase in n-men distance of 0.39 mm (p = 4 × 10−16), explaining 1.3% of the variance. Independent associations were observed in both the z (nasion prominence) and y (nasion height) dimensions (p = 9 × 10−9 and p = 9 × 10−10, respectively), suggesting that the locus primarily influences growth in the yz plane. Rare variants in PAX3 are known to cause Waardenburg syndrome, which involves deafness, pigmentary abnormalities, and facial characteristics including a broad nasal bridge. Our findings show that common variants within this gene also influence normal craniofacial development.
Additional information
http://www.sciencedirect.com/science/article/pii/S000292971200002X#appd002 -
Peeters, D., Vanlangendonck, F., & Willems, R. M. (2012). Bestaat er een talenknobbel? Over taal in ons brein [Is there such a thing as a bump for languages? On language in our brain]. In M. Boogaard, & M. Jansen (Eds.), Alles wat je altijd al had willen weten over taal: De taalcanon (pp. 41-43). Amsterdam: Meulenhoff.
Abstract
[Translated from Dutch] When someone is good at speaking several languages, we often say that person has a "talenknobbel", a bump for languages. Everyone knows this is not meant literally: we cannot recognize someone with a gift for languages by a large bump on their head. Yet people once genuinely believed that a literal language bump could develop. A well-developed language faculty was thought to go hand in hand with growth of the brain region responsible for it. This part of the brain could even become so large that it pressed against the skull from the inside, especially around the eyes. We now know better. But where in the brain, then, is language actually located? -
Perniss, P. M., Thompson, R. L., & Vigliocco, G. (2010). Iconicity as a general property of language: Evidence from spoken and signed languages [Review article]. Frontiers in Psychology, 1, E227. doi:10.3389/fpsyg.2010.00227.
Abstract
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity need also be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor and perceptual experience. -
Perniss, P. M., Vinson, D., Seifart, F., & Vigliocco, G. (2012). Speaking of shape: The effects of language-specific encoding on semantic representations. Language and Cognition, 4, 223-242. doi:10.1515/langcog-2012-0012.
Abstract
The question of whether different linguistic patterns differentially influence semantic and conceptual representations is of central interest in cognitive science. In this paper, we investigate whether the regular encoding of shape within a nominal classification system leads to an increased salience of shape in speakers' semantic representations by comparing English, (Amazonian) Spanish, and Bora, a shape-based classifier language spoken in the Amazonian regions of Colombia and Peru. Crucially, in displaying obligatory use, pervasiveness in grammar, high discourse frequency, and phonological variability of forms corresponding to particular shape features, the Bora classifier system differs in important ways from those in previous studies investigating effects of nominal classification, thereby allowing better control of factors that may have influenced previous findings. In addition, the inclusion of Spanish monolinguals living in the Bora village allowed control for the possibility that differences found between English and Bora speakers may be attributed to their very different living environments. We found that shape is more salient in the semantic representation of objects for speakers of Bora, which systematically encodes shape, than for speakers of English and Spanish, which do not. Our results are consistent with assumptions that semantic representations are shaped and modulated by our specific linguistic experiences. -
Perniss, P. M. (2012). Use of sign space. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign Language: an International Handbook (pp. 412-431). Berlin: Mouton de Gruyter.
Abstract
This chapter focuses on the semantic and pragmatic uses of space. The questions addressed concern how sign space (i.e. the area of space in front of the signer’s body) is used for meaning construction, how locations in sign space are associated with discourse referents, and how signers choose to structure sign space for their communicative intents. The chapter gives an overview of linguistic analyses of the use of space, starting with the distinction between syntactic and topographic uses of space and the different types of signs that function to establish referent-location associations, and moving to analyses based on mental spaces and conceptual blending theories. Semantic-pragmatic conventions for organizing sign space are discussed, as well as spatial devices notable in the visual-spatial modality (particularly, classifier predicates and signing perspective), which influence and determine the way meaning is created in sign space. Finally, the special role of simultaneity in sign languages is discussed, focusing on the semantic and discourse-pragmatic functions of simultaneous constructions. -
Petersen, J. H. (2012). How to put and take in Kalasha. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 349-366). Amsterdam: Benjamins.
Abstract
In Kalasha, an Indo-Aryan language spoken in Northwest Pakistan, the linguistic encoding of ‘put’ and ‘take’ events reveals a symmetry between lexical ‘put’ and ‘take’ verbs that implies ‘placement on’ and ‘removal from’ a supporting surface. As regards ‘placement in’ and ‘removal from’ an enclosure, the data reveal a lexical asymmetry as ‘take’ verbs display a larger degree of linguistic elaboration of the Figure-Ground relation and the type of caused motion than ‘put’ verbs. When considering syntactic patterns, more instances of asymmetry between these two event types show up. The analysis presented here supports the proposal that an asymmetry exists in the encoding of goals versus sources as suggested in Nam (2004) and Ikegami (1987), but it calls into question the statement put forward by Regier and Zheng (2007) that endpoints (goals) are more finely differentiated semantically than starting points (sources). -
Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 1971-1983. doi:10.1098/rstb.2012.0101.
Abstract
The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty. -
Petersson, K. M., Folia, V., & Hagoort, P. (2012). What artificial grammar learning reveals about the neurobiology of syntax. Brain and Language, 120, 83-95. doi:10.1016/j.bandl.2010.08.003.
Abstract
In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial grammar learning paradigm in 32 healthy Dutch university students (natural language FMRI data were already acquired for these participants). We predicted that artificial syntax processing would engage the left inferior frontal region (BA 44/45) and that this activation would overlap with syntax-related variability observed in the natural language experiment. The main findings of this study show that the left inferior frontal region centered on BA 44/45 is active during artificial syntax processing of well-formed (grammatical) sequences independent of local subsequence familiarity. The same region is engaged to a greater extent when a syntactic violation is present and structural unification becomes difficult or impossible. The effects related to artificial syntax in the left inferior frontal region (BA 44/45) were essentially identical when we masked these with activity related to natural syntax in the same subjects. Finally, the medial temporal lobe was deactivated during this operation, consistent with the view that implicit processing does not rely on declarative memory mechanisms that engage the medial temporal lobe. In the context of recent FMRI findings, we raise the question whether Broca’s region (or subregions) is specifically related to syntactic movement operations or the processing of hierarchically nested non-adjacent dependencies in the discussion section. We conclude that this is not the case. 
Instead, we argue that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner, independent of whether there are any processing requirements related to syntactic movement or hierarchically nested structures. In addition, we argue that the Chomsky hierarchy is not directly relevant for neurobiological systems. -
Petrich, P., Piedrasanta, R., Figuerola, H., & Le Guen, O. (2010). Variantes y variaciones en la percepción de los antepasados entre los Mayas. In A. Monod Becquelin, A. Breton, & M. H. Ruz (Eds.), Figuras Mayas de la diversidad (pp. 255-275). Mérida, Mexico: Universidad autónoma de México. -
Petrovic, P., Kalso, E., Petersson, K. M., Andersson, J., Fransson, P., & Ingvar, M. (2010). A prefrontal non-opioid mechanism in placebo analgesia. Pain, 150, 59-65. doi:10.1016/j.pain.2010.03.011.
Abstract
Behavioral studies have suggested that placebo analgesia is partly mediated by the endogenous opioid system. Expanding on these results we have shown that the opioid-receptor-rich rostral anterior cingulate cortex (rACC) is activated in both placebo and opioid analgesia. However, there are also differences between the two treatments. While opioids have direct pharmacological effects, acting on the descending pain inhibitory system, placebo analgesia depends on neocortical top-down mechanisms. An important difference may be that expectations are met to a lesser extent in placebo treatment as compared with a specific treatment, yielding a larger error signal. As these processes previously have been shown to influence other types of perceptual experiences, we hypothesized that they also may drive placebo analgesia. Imaging studies suggest that lateral orbitofrontal cortex (lObfc) and ventrolateral prefrontal cortex (vlPFC) are involved in processing expectation and error signals. We re-analyzed two independent functional imaging experiments related to placebo analgesia and emotional placebo to probe for a differential processing in these regions during placebo treatment vs. opioid treatment and to test if this activity is associated with the placebo response. In the first dataset lObfc and vlPFC showed an enhanced activation in placebo analgesia vs. opioid analgesia. Furthermore, the rACC activity co-varied with the prefrontal regions in the placebo condition specifically. A similar correlation between rACC and vlPFC was reproduced in another dataset involving emotional placebo and correlated with the degree of the placebo effect. Our results thus support that placebo is different from specific treatment with a prefrontal top-down influence on rACC. -
Pettenati, P., Sekine, K., Congestrì, E., & Volterra, V. (2012). A comparative study on representational gestures in Italian and Japanese children. Journal of Nonverbal Behavior, 36(2), 149-164. doi:10.1007/s10919-011-0127-0.
Abstract
This study compares words and gestures produced in a controlled experimental setting by children raised in different linguistic/cultural environments to examine the robustness of gesture use at an early stage of lexical development. Twenty-two Italian and twenty-two Japanese toddlers (age range 25–37 months) performed the same picture-naming task. Italians produced more spoken correct labels than Japanese but a similar amount of representational gestures temporally matched with words. However, Japanese gestures reproduced more closely the action represented in the picture. Results confirm that gestures are linked to motor actions similarly for all children, suggesting a common developmental stage, only minimally influenced by culture. -
Piai, V., Roelofs, A., & Schriefers, H. (2012). Distractor strength and selective attention in picture-naming performance. Memory & Cognition, 40, 614-627. doi:10.3758/s13421-011-0171-3.
Abstract
Whereas it has long been assumed that competition plays a role in lexical selection in word production (e.g., Levelt, Roelofs, & Meyer, 1999), recently Finkbeiner and Caramazza (2006) argued against the competition assumption on the basis of their observation that visible distractors yield semantic interference in picture naming, whereas masked distractors yield semantic facilitation. We examined an alternative account of these findings that preserves the competition assumption. According to this account, the interference and facilitation effects of distractor words reflect whether or not distractors are strong enough to exceed a threshold for entering the competition process. We report two experiments in which distractor strength was manipulated by means of coactivation and visibility. Naming performance was assessed in terms of mean response time (RT) and RT distributions. In Experiment 1, with low coactivation, semantic facilitation was obtained from clearly visible distractors, whereas poorly visible distractors yielded no semantic effect. In Experiment 2, with high coactivation, semantic interference was obtained from both clearly and poorly visible distractors. These findings support the competition threshold account of the polarity of semantic effects in naming. -
Piai, V., Roelofs, A., & van der Meij, R. (2012). Event-related potentials and oscillatory brain responses associated with semantic and Stroop-like interference effects in overt naming. Brain Research, 1450, 87-101. doi:10.1016/j.brainres.2012.02.050.
Abstract
Picture–word interference is a widely employed paradigm to investigate lexical access in word production: Speakers name pictures while trying to ignore superimposed distractor words. The distractor can be congruent to the picture (pictured cat, word cat), categorically related (pictured cat, word dog), or unrelated (pictured cat, word pen). Categorically related distractors slow down picture naming relative to unrelated distractors, the so-called semantic interference. Categorically related distractors slow down picture naming relative to congruent distractors, analogous to findings in the colour–word Stroop task. The locus of semantic interference and Stroop-like effects in naming performance has recently become a topic of debate. Whereas some researchers argue for a pre-lexical locus of semantic interference and a lexical locus of Stroop-like effects, others localise both effects at the lexical selection stage. We investigated the time course of semantic and Stroop-like interference effects in overt picture naming by means of event-related potentials (ERP) and time–frequency analyses. Moreover, we employed cluster-based permutation for statistical analyses. Naming latencies showed semantic and Stroop-like interference effects. The ERP waveforms for congruent stimuli started diverging statistically from categorically related stimuli around 250 ms. Deflections for the categorically related condition were more negative-going than for the congruent condition (the Stroop-like effect). The time–frequency analysis revealed a power increase in the beta band (12–30 Hz) for categorically related relative to unrelated stimuli roughly between 250 and 370 ms (the semantic effect). The common time window of these effects suggests that both semantic interference and Stroop-like effects emerged during lexical selection. -
Pijnacker, J. (2010). Defeasible inference in autism: A behavioral and electrophysiological approach. PhD Thesis, Radboud University Nijmegen, Nijmegen.
-
Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2010). Exceptions and anomalies: An ERP study on context sensitivity in autism. Neuropsychologia, 48, 2940-2951. doi:10.1016/j.neuropsychologia.2010.06.003.
Abstract
Several studies have demonstrated that people with ASD and intact language skills still have problems processing linguistic information in context. Given this evidence for reduced sensitivity to linguistic context, the question arises how contextual information is actually processed by people with ASD. In this study, we used event-related brain potentials (ERPs) to examine context sensitivity in high-functioning adults with autistic disorder (HFA) and Asperger syndrome at two levels: at the level of sentence processing and at the level of solving reasoning problems. We found that sentence context as well as reasoning context had an immediate ERP effect in adults with Asperger syndrome, as in matched controls. Both groups showed a typical N400 effect and a late positive component for the sentence conditions, and a sustained negativity for the reasoning conditions. In contrast, the HFA group demonstrated neither an N400 effect nor a sustained negativity. However, the HFA group showed a late positive component which was larger for semantically anomalous sentences than congruent sentences. Because sentence context had a modulating effect in a later phase, semantic integration is perhaps less automatic in HFA, and presumably more elaborate processes are needed to arrive at a sentence interpretation. -
Pillas, D., Hoggart, C. J., Evans, D. M., O'Reilly, P. F., Sipilä, K., Lähdesmäki, R., Millwood, I. Y., Kaakinen, M., Netuveli, G., Blane, D., Charoen, P., Sovio, U., Pouta, A., Freimer, N., Hartikainen, A.-L., Laitinen, J., Vaara, S., Glaser, B., Crawford, P., Timpson, N. J., Ring, S. M., Deng, G., Zhang, W., McCarthy, M. I., Deloukas, P., Peltonen, L., Elliott, P., Coin, L. J. M., Smith, G. D., & Jarvelin, M.-R. (2010). Genome-wide association study reveals multiple loci associated with primary tooth development during infancy. PLoS Genetics, 6(2): e1000856. doi:10.1371/journal.pgen.1000856.
Abstract
Tooth development is a highly heritable process which relates to other growth and developmental processes, and which interacts with the development of the entire craniofacial complex. Abnormalities of tooth development are common, with tooth agenesis being the most common developmental anomaly in humans. We performed a genome-wide association study of time to first tooth eruption and number of teeth at one year in 4,564 individuals from the 1966 Northern Finland Birth Cohort (NFBC1966) and 1,518 individuals from the Avon Longitudinal Study of Parents and Children (ALSPAC). We identified 5 loci at P < 5x10(-8), and 5 with suggestive association (P < 5x10(-6)). The loci included several genes with links to tooth and other organ development (KCNJ2, EDA, HOXB2, RAD51L1, IGF2BP1, HMGA2, MSRB3). Genes at four of the identified loci are implicated in the development of cancer. A variant within the HOXB gene cluster associated with occlusion defects requiring orthodontic treatment by age 31 years.
Additional information
http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1000856#s5 -
Pluymaekers, M., Ernestus, M., Baayen, R. H., & Booij, G. (2010). Morphological effects on fine phonetic detail: The case of Dutch -igheid. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 511-532). Berlin: De Gruyter. -
Poellmann, K., McQueen, J. M., & Mitterer, H. (2012). How talker-adaptation helps listeners recognize reduced word-forms [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 2053.
Abstract
Two eye-tracking experiments tested whether native listeners can adapt to reductions in casual Dutch speech. Listeners were exposed to segmental ([b] > [m]), syllabic (full-vowel-deletion), or no reductions. In a subsequent test phase, all three listener groups were tested on how efficiently they could recognize both types of reduced words. In the first Experiment’s exposure phase, the (un)reduced target words were predictable. The segmental reductions were completely consistent (i.e., involved the same input sequences). Learning about them was found to be pattern-specific and generalized in the test phase to new reduced /b/-words. The syllabic reductions were not consistent (i.e., involved variable input sequences). Learning about them was weak and not pattern-specific. Experiment 2 examined effects of word repetition and predictability. The (un-)reduced test words appeared in the exposure phase and were not predictable. There was no evidence of learning for the segmental reductions, probably because they were not predictable during exposure. But there was word-specific learning for the vowel-deleted words. The results suggest that learning about reductions is pattern-specific and generalizes to new words if the input is consistent and predictable. With variable input, there is more likely to be adaptation to a general speaking style and word-specific learning. -
Poletiek, F. H., & Lai, J. (2012). How semantic biases in simple adjacencies affect learning a complex structure with non-adjacencies in AGL: A statistical account. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2046-2054. doi:10.1098/rstb.2012.0100.
Abstract
A major theoretical debate in language acquisition research regards the learnability of hierarchical structures. The artificial grammar learning methodology is increasingly influential in approaching this question. Studies using an artificial centre-embedded AnBn grammar without semantics draw conflicting conclusions. This study investigates the facilitating effect of distributional biases in simple AB adjacencies in the input sample—caused in natural languages, among others, by semantic biases—on learning a centre-embedded structure. A mathematical simulation of the linguistic input and the learning, comparing various distributional biases in AB pairs, suggests that strong distributional biases might help us to grasp the complex AnBn hierarchical structure in a later stage. This theoretical investigation might contribute to our understanding of how distributional features of the input—including those caused by semantic variation—help learning complex structures in natural languages. -
St Pourcain, B., Wang, K., Glessner, J. T., Golding, J., Steer, C., Ring, S. M., Skuse, D. H., Grant, S. F. A., Hakonarson, H., & Davey Smith, G. (2010). Association Between a High-Risk Autism Locus on 5p14 and Social Communication Spectrum Phenotypes in the General Population. American Journal of Psychiatry, 167(11), 1364-1372. doi:10.1176/appi.ajp.2010.09121789.
Abstract
Objective: Recent genome-wide analysis identified a genetic variant on 5p14.1 (rs4307059), which is associated with risk for autism spectrum disorder. This study investigated whether rs4307059 also operates as a quantitative trait locus underlying a broader autism phenotype in the general population, focusing specifically on the social communication aspect of the spectrum. Method: Study participants were 7,313 children from the Avon Longitudinal Study of Parents and Children. Single-trait and joint-trait genotype associations were investigated for 29 measures related to language and communication, verbal intelligence, social interaction, and behavioral adjustment, assessed between ages 3 and 12 years. Analyses were performed in one-sided or directed mode and adjusted for multiple testing, trait interrelatedness, and random genotype dropout. Results: Single phenotype analyses showed that an increased load of rs4307059 risk allele is associated with stereotyped conversation and lower pragmatic communication skills, as measured by the Children's Communication Checklist (at a mean age of 9.7 years). In addition a trend toward a higher frequency of identification of special educational needs (at a mean age of 11.8 years) was observed. Variation at rs4307059 was also associated with the phenotypic profile of studied traits. This joint signal was fully explained neither by single-trait associations nor by overall behavioral adjustment problems but suggested a combined effect, which manifested through multiple sub-threshold social, communicative, and cognitive impairments. Conclusions: Our results suggest that common variation at 5p14.1 is associated with social communication spectrum phenotypes in the general population and support the role of rs4307059 as a quantitative trait locus for autism spectrum disorder.
Additional information
http://ajp.psychiatryonline.org/doi/suppl/10.1176/appi.ajp.2010.09121789 -
Puccini, D., & Liszkowski, U. (2012). 15-month-old infants fast map words but not representational gestures of multimodal labels. Frontiers in Psychology, 3: 101. doi:10.3389/fpsyg.2012.00101.
Abstract
This study investigated whether 15-month-old infants fast map multimodal labels, and, when given the choice of two modalities, whether they preferentially fast map one better than the other. Sixty 15-month-old infants watched films where an actress repeatedly and ostensively labeled two novel objects using a spoken word along with a representational gesture. In the test phase, infants were assigned to one of three conditions: Word, Word + Gesture, or Gesture. The objects appeared in a shelf next to the experimenter and, depending on the condition, infants were prompted with either a word, a gesture, or a multimodal word-gesture combination. Using an infant eye tracker, we determined whether infants made the correct mappings. Results revealed that only infants in the Word condition had learned the novel object labels. When the representational gesture was presented alone or when the verbal label was accompanied by a representational gesture, infants did not succeed in making the correct mappings. Results reveal that 15-month-old infants do not benefit from multimodal labeling and that they prefer words over representational gestures as object labels in multimodal utterances. Findings put into question the role of multimodal labeling in early language development. -
Puccini, D., Hassemer, M., Salomo, D., & Liszkowski, U. (2010). The type of shared activity shapes caregiver and infant communication. Gesture, 10(2/3), 279-297. doi:10.1075/gest.10.2-3.08puc.
Abstract
For the beginning language learner, communicative input is not based on linguistic codes alone. This study investigated two extralinguistic factors which are important for infants’ language development: the type of ongoing shared activity and non-verbal, deictic gestures. The natural interactions of 39 caregivers and their 12-month-old infants were recorded in two semi-natural contexts: a free play situation based on action and manipulation of objects, and a situation based on regard of objects, broadly analogous to an exhibit. Results show that the type of shared activity structures both caregivers’ language usage and caregivers’ and infants’ gesture usage. Further, there is a specific pattern with regard to how caregivers integrate speech with particular deictic gesture types. The findings demonstrate a pervasive influence of shared activities on human communication, even before language has emerged. The type of shared activity and caregivers’ systematic integration of specific forms of deictic gestures with language provide infants with a multimodal scaffold for a usage-based acquisition of language. -
Puccini, D., Hassemer, M., Salomo, D., & Liszkowski, U. (2012). The type of shared activity shapes caregiver and infant communication [Reprint]. In J.-M. Colletta, & M. Guidetti (Eds.), Gesture and multimodal development (pp. 157-174). Amsterdam: John Benjamins.
Abstract
For the beginning language learner, communicative input is not based on linguistic codes alone. This study investigated two extralinguistic factors which are important for infants’ language development: the type of ongoing shared activity and non-verbal, deictic gestures. The natural interactions of 39 caregivers and their 12-month-old infants were recorded in two semi-natural contexts: a free play situation based on action and manipulation of objects, and a situation based on regard of objects, broadly analogous to an exhibit. Results show that the type of shared activity structures both caregivers’ language usage and caregivers’ and infants’ gesture usage. Further, there is a specific pattern with regard to how caregivers integrate speech with particular deictic gesture types. The findings demonstrate a pervasive influence of shared activities on human communication, even before language has emerged. The type of shared activity and caregivers’ systematic integration of specific forms of deictic gestures with language provide infants with a multimodal scaffold for a usage-based acquisition of language. -
Pyykkönen, P., & Järvikivi, J. (2012). Children and situation models of multiple events. Developmental Psychology, 48, 521-529. doi:10.1037/a0025526.
Abstract
The present study demonstrates that children experience difficulties reaching the correct situation model of multiple events described in temporal sentences if the sentences encode language-external events in reverse chronological order. Importantly, the timing of the cue of how to organize these events is crucial: When temporal subordinate conjunctions (before/after) or converb constructions that carry information of how to organize the events were given sentence-medially, children experienced severe difficulties in arriving at the correct interpretation of event order. When this information was provided sentence-initially, children were better able to arrive at the correct situation model, even if it required them to decode the linguistic information reversely with respect to the actual language external events. This indicates that children even aged 8–12 still experience difficulties in arriving at the correct interpretation of the event structure, if the cue of how to order the events is not given immediately when they start building the representation of the situation. This suggests that children's difficulties in comprehending sequential temporal events are caused by their inability to revise the representation of the current event structure at the level of the situation model. -
Pyykkönen, P., & Järvikivi, J. (2010). Activation and persistence of implicit causality information in spoken language comprehension. Experimental Psychology, 57, 5-16. doi:10.1027/1618-3169/a000002.
Abstract
A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as is predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information was persistent as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent, rather it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus. -
Pyykkönen, P., Matthews, D., & Järvikivi, J. (2010). Three-year-olds are sensitive to semantic prominence during online spoken language comprehension: A visual world study of pronoun resolution. Language and Cognitive Processes, 25, 115-129. doi:10.1080/01690960902944014.
Abstract
Recent evidence from adult pronoun comprehension suggests that semantic factors such as verb transitivity affect referent salience and thereby anaphora resolution. We tested whether the same semantic factors influence pronoun comprehension in young children. In a visual world study, 3-year-olds heard stories that began with a sentence containing either a high or a low transitivity verb. Looking behaviour to pictures depicting the subject and object of this sentence was recorded as children listened to a subsequent sentence containing a pronoun. Children showed a stronger preference to look to the subject as opposed to the object antecedent in the low transitivity condition. In addition there were general preferences (1) to look to the subject in both conditions and (2) to look more at both potential antecedents in the high transitivity condition. This suggests that children, like adults, are affected by semantic factors, specifically semantic prominence, when interpreting anaphoric pronouns. -
Rakoczy, H., & Haun, D. B. M. (2012). Vor- und nichtsprachliche Kognition. In W. Schneider, & U. Lindenberger (Eds.), Entwicklungspsychologie. 7. vollständig überarbeitete Auflage (pp. 337-362). Weinheim: Beltz Verlag. -
Rapold, C. J. (2010). Beneficiary and other roles of the dative in Tashelhiyt. In F. Zúñiga, & S. Kittilä (Eds.), Benefactives and malefactives: Typological perspectives and case studies (pp. 351-376). Amsterdam: Benjamins.
Abstract
This paper explores the semantics of the dative in Tashelhiyt, a Berber language from Morocco. After a brief morphosyntactic overview of the dative in this language, I identify a wide range of its semantic roles, including possessor, experiencer, distributive and unintending causer. I arrange these roles in a semantic map and propose semantic links between the roles such as metaphorisation and generalisation. In the light of the Tashelhiyt data, the paper also proposes additions to previous semantic maps of the dative (Haspelmath 1999, 2003) and to Kittilä’s 2005 typology of beneficiary coding. -
Rapold, C. J. (2010). Defining converbs ten years on - A hitchhiker's guide. In S. Völlmin, A. Amha, C. J. Rapold, & S. Zaugg-Coretti (Eds.), Converbs, medial verbs, clause chaining and related issues (pp. 7-30). Köln: Rüdiger Köppe Verlag. -
Rapold, C. J. (2012). The encoding of placement and removal events in ǂAkhoe Haiǁom. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 79-98). Amsterdam: Benjamins.
Abstract
This paper explores the semantics of placement and removal verbs in ǂAkhoe Haiǁom based on event descriptions elicited with a set of video stimuli. After a brief sketch of the morphosyntax of placement/removal constructions in ǂAkhoe Haiǁom, four situation types are identified semantically that cover both placement and removal events. The language exhibits a clear tendency to make more fine-grained semantic distinctions in placement verbs, as opposed to semantically more general removal verbs. -
Ravignani, A., & Fitch, W. T. (2012). Sonification of experimental parameters as a new method for efficient coding of behavior. In A. Spink, F. Grieco, O. E. Krips, L. W. S. Loijens, L. P. P. J. Noldus, & P. H. Zimmerman (Eds.), Measuring Behavior 2012, 8th International Conference on Methods and Techniques in Behavioral Research (pp. 376-379).
Abstract
Cognitive research is often focused on experimental condition-driven reactions. Ethological studies frequently rely on the observation of naturally occurring specific behaviors. In both cases, subjects are filmed during the study, so that afterwards behaviors can be coded on video. Coding should typically be blind to experimental conditions, but often requires more information than that present on video. We introduce a method for blind-coding of behavioral videos that takes care of both issues via three main innovations. First, of particular significance for playback studies, it allows creation of a “soundtrack” of the study, that is, a track composed of synthesized sounds representing different aspects of the experimental conditions, or other events, over time. Second, it facilitates coding behavior using this audio track, together with the possibly muted original video. This enables coding blindly to conditions as required, but not ignoring other relevant events. Third, our method makes use of freely available, multi-platform software, including scripts we developed. -
Reddy, T. E., Gertz, J., Pauli, F., Kucera, K. S., Varley, K. E., Newberry, K. M., Marinov, G. K., Mortazavi, A., Williams, B. A., Song, L., Crawford, G. E., Wold, B., Willard, H. F., & Myers, R. M. (2012). Effects of sequence variation on differential allelic transcription factor occupancy and gene expression. Genome Research, 22, 860-869. doi:10.1101/gr.131201.111.
Abstract
A complex interplay between transcription factors (TFs) and the genome regulates transcription. However, connecting variation in genome sequence with variation in TF binding and gene expression is challenging due to environmental differences between individuals and cell types. To address this problem, we measured genome-wide differential allelic occupancy of 24 TFs and EP300 in a human lymphoblastoid cell line GM12878. Overall, 5% of human TF binding sites have an allelic imbalance in occupancy. At many sites, TFs clustered in TF-binding hubs on the same homolog in especially open chromatin. While genetic variation in core TF binding motifs generally resulted in large allelic differences in TF occupancy, most allelic differences in occupancy were subtle and associated with disruption of weak or noncanonical motifs. We also measured genome-wide differential allelic expression of genes with and without heterozygous exonic variants in the same cells. We found that genes with differential allelic expression were overall less expressed both in GM12878 cells and in unrelated human cell lines. Comparing TF occupancy with expression, we found strong association between allelic occupancy and expression within 100 bp of transcription start sites (TSSs), and weak association up to 100 kb from TSSs. Sites of differential allelic occupancy were significantly enriched for variants associated with disease, particularly autoimmune disease, suggesting that allelic differences in TF occupancy give functional insights into intergenic variants associated with disease. Our results have the potential to increase the power and interpretability of association studies by targeting functional intergenic variants in addition to protein coding sequences.
Additional information
Reddy_2012_Suppl_Fig.pdf Reddy_Supplementary_Figure_6.pdf Reddy_SuppMeth_ForSubmission.pdf -
Reesink, G. (2010). The difference a word makes. In K. A. McElhannon, & G. Reesink (Eds.), A mosaic of languages and cultures: Studies celebrating the career of Karl J. Franklin (pp. 434-446). Dallas, TX: SIL International.
Abstract
This paper offers some thoughts on the question of what effect language has on the understanding, and hence the behavior, of a human being. It reviews some issues of linguistic relativity, known as the “Sapir-Whorf hypothesis,” suggesting that the culture we grow up in is reflected in the language and that our cognition (and our worldview) is shaped or colored by the conventions developed by our ancestors and peers. This raises questions about the degree of translatability, illustrated by the comparison of two poems by a Dutch poet who spent most of his life in the USA. Mutual understanding, I claim, is possible because we have the cognitive apparatus that allows us to enter different emic systems. -
Reesink, G., & Dunn, M. (2012). Systematic typological comparison as a tool for investigating language history. Language Documentation and Conservation, (5), 34-71. Retrieved from http://hdl.handle.net/10125/4560.
-
Reesink, G. (2010). Prefixation of arguments in West Papuan languages. In M. Ewing, & M. Klamer (Eds.), East Nusantara, typological and areal analyses (pp. 71-95). Canberra: Pacific Linguistics. -
Reesink, G. (2010). The Manambu language of East Sepik, Papua New Guinea [Book review]. Studies in Language, 34(1), 226-233. doi:10.1075/sl.34.1.13ree.
-
Reinisch, E. (2010). Processing the fine temporal structure of spoken words. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Reinisch, E., & Weber, A. (2012). Adapting to suprasegmental lexical stress errors in foreign-accented speech. Journal of the Acoustical Society of America, 132, 1165-1176. doi:10.1121/1.4730884.
Abstract
Can native listeners rapidly adapt to suprasegmental mispronunciations in foreign-accented speech? To address this question, an exposure-test paradigm was used to test whether Dutch listeners can improve their understanding of non-canonical lexical stress in Hungarian-accented Dutch. During exposure, one group of listeners heard a Dutch story with only initially stressed words, whereas another group also heard 28 words with canonical second-syllable stress (e.g., EEKhorn, "squirrel" was replaced by koNIJN "rabbit"; capitals indicate stress). The 28 words, however, were non-canonically marked by the Hungarian speaker with high pitch and amplitude on the initial syllable, both of which are stress cues in Dutch. After exposure, listeners' eye movements were tracked to Dutch target-competitor pairs with segmental overlap but different stress patterns, while they listened to new words from the same Hungarian speaker (e.g., HERsens, herSTEL, "brain," "recovery"). Listeners who had previously heard non-canonically produced words distinguished target-competitor pairs better than listeners who had only been exposed to Hungarian accent with canonical forms of lexical stress. Even a short exposure thus allows listeners to tune into speaker-specific realizations of words' suprasegmental make-up, and use this information for word recognition. -
Reinisch, E., Jesse, A., & McQueen, J. M. (2010). Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately. Quarterly Journal of Experimental Psychology, 63(4), 772-783. doi:10.1080/17470210903104412.
Abstract
For optimal word recognition listeners should use all relevant acoustic information as soon as it comes available. Using printed-word eye-tracking we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as 'OCtopus' (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ('okTOber') before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than non-initially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal. -
Reinisch, E., Jesse, A., & Nygaard, L. C. (2010). Tone of voice helps learning the meaning of novel adjectives [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 114). York: University of York.
Abstract
To understand spoken words listeners have to cope with seemingly meaningless variability in the speech signal. Speakers vary, for example, their tone of voice (ToV) by changing speaking rate, pitch, vocal effort, and loudness. This variation is independent of "linguistic prosody" such as sentence intonation or speech rhythm. The variation due to ToV, however, is not random. Speakers use, for example, higher pitch when referring to small objects than when referring to large objects and importantly, adult listeners are able to use these non-lexical ToV cues to distinguish between the meanings of antonym pairs (e.g., big-small; Nygaard, Herold, & Namy, 2009). In the present study, we asked whether listeners infer the meaning of novel adjectives from ToV and subsequently interpret these adjectives according to the learned meaning even in the absence of ToV. Moreover, if listeners actually acquire these adjectival meanings, then they should generalize these word meanings to novel referents. ToV would thus be a semantic cue to lexical acquisition. This hypothesis was tested in an exposure-test paradigm with adult listeners. In the experiment listeners' eye movements to picture pairs were monitored. The picture pairs represented the endpoints of the adjectival dimensions big-small, hot-cold, and strong-weak (e.g., an elephant and an ant represented big-small). Four picture pairs per category were used. While viewing the pictures participants listened to lexically unconstraining sentences containing novel adjectives, for example, "Can you find the foppick one?" During exposure, the sentences were spoken in infant-directed speech with the intended adjectival meaning expressed by ToV. Word-meaning pairings were counterbalanced across participants. Each word was repeated eight times. Listeners had no explicit task. 
To guide listeners' attention to the relation between the words and pictures, three sets of filler trials were included that contained real English adjectives (e.g., full-empty). In the subsequent test phase participants heard the novel adjectives in neutral adult-directed ToV. Test sentences were recorded before the speaker was informed about intended word meanings. Participants had to choose which of two pictures on the screen the speaker referred to. Picture pairs that were presented during the exposure phase and four new picture pairs per category that varied along the critical dimensions were tested. During exposure listeners did not spontaneously direct their gaze to the intended referent at the first presentation. But as indicated by listener's fixation behavior, they quickly learned the relationship between ToV and word meaning over only two exposures. Importantly, during test participants consistently identified the intended referent object even in the absence of informative ToV. Learning was found for all three tested categories and did not depend on whether the picture pairs had been presented during exposure. Listeners thus use ToV not only to distinguish between antonym pairs but they are able to extract word meaning from ToV and assign this meaning to novel words. The newly learned word meanings can then be generalized to novel referents even in the absence of ToV cues. These findings suggest that ToV can be used as a semantic cue to lexical acquisition. References Nygaard, L. C., Herold, D. S., & Namy, L. L. (2009) The semantics of prosody: Acoustic and perceptual evidence of prosodic correlates to word meaning. Cognitive Science, 33. 127-146. -
Reis, A., Petersson, K. M., & Faísca, L. (2010). Neuroplasticidade: Os efeitos de aprendizagens específicas no cérebro humano [Neuroplasticity: The effects of specific learning on the human brain]. In C. Nunes, & S. N. Jesus (Eds.), Temas actuais em Psicologia (pp. 11-26). Faro: Universidade do Algarve. -
Reis, A., Faísca, L., Castro, S.-L., & Petersson, K. M. (2010). Preditores da leitura ao longo da escolaridade: Um estudo com alunos do 1.º ciclo do ensino básico [Predictors of reading across the school years: A study with primary-school students]. In Actas do VII simpósio nacional de investigação em psicologia (pp. 3117-3132).
Abstract
Reading acquisition proceeds through several stages, from the moment the child first comes into contact with the alphabet to the point of becoming a competent reader, able to read accurately and fluently. Understanding the development of this skill, through an analysis of how the weight of variables that predict reading changes over time, makes it possible to theorize about the cognitive mechanisms involved at different stages of reading development. We carried out a cross-sectional study with 568 students from the second to the fourth year of primary school, assessing the impact of phonological processing skills, rapid naming, letter-sound knowledge and vocabulary, as well as more general cognitive abilities (non-verbal intelligence and working memory), on reading accuracy and speed. Overall, the results showed that, although phonological awareness remains the most important predictor of reading accuracy and fluency, its weight decreases as schooling progresses. We also observed that, as the contribution of phonological awareness to the explanation of reading speed diminished, the contribution of other variables more associated with automaticity and lexical recognition, such as rapid naming and vocabulary, increased. In short, across the school years there is a dynamic shift in the cognitive processes underlying reading, suggesting that the child moves from a reading strategy anchored in sub-lexical processing, and thus more dependent on phonological processing, to a strategy based on the orthographic recognition of words. -
Relton, C. L., Groom, A., St Pourcain, B., Sayers, A. E., Swan, D. C., Embleton, N. D., Pearce, M. S., Ring, S. M., Northstone, K., Tobias, J. H., Trakalo, J., Ness, A. R., Shaheen, S. O., & Davey Smith, G. (2012). DNA Methylation Patterns in Cord Blood DNA and Body Size in Childhood. PLoS ONE, 7(3): e31821. doi:10.1371/journal.pone.0031821.
Abstract
BACKGROUND: Epigenetic markings acquired in early life may have phenotypic consequences later in development through their role in transcriptional regulation with relevance to the developmental origins of diseases including obesity. The goal of this study was to investigate whether DNA methylation levels at birth are associated with body size later in childhood. PRINCIPAL FINDINGS: A study design involving two birth cohorts was used to conduct transcription profiling followed by DNA methylation analysis in peripheral blood. Gene expression analysis was undertaken in 24 individuals whose biological samples and clinical data were collected at a mean ± standard deviation (SD) age of 12.35 (0.95) years; the upper and lower tertiles of body mass index (BMI) were compared, with a mean (SD) BMI difference of 9.86 (2.37) kg/m(2). This generated a panel of differentially expressed genes for DNA methylation analysis, which was then undertaken in cord blood DNA in 178 individuals with body composition data prospectively collected at a mean (SD) age of 9.83 (0.23) years. Twenty-nine differentially expressed genes (>1.2-fold and p<10(-4)) were analysed to determine DNA methylation levels at 1-3 sites per gene. Five genes were unmethylated, and DNA methylation in the remaining 24 genes was analysed using linear regression with bootstrapping. Methylation in 9 of the 24 (37.5%) genes studied was associated with at least one index of body composition (BMI, fat mass, lean mass, height) at age 9 years, although only one of these associations remained after correction for multiple testing (ALPL with height, p(Corrected) = 0.017). CONCLUSIONS: DNA methylation patterns in cord blood show some association with altered gene expression, body size and composition in childhood. The observed relationship is correlative and, despite suggestion of a mechanistic epigenetic link between in utero life and later phenotype, further investigation is required to establish causality.
Additional information
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0031821#s5 -
Ringersma, J., Kastens, K., Tschida, U., & Van Berkum, J. J. A. (2010). A principled approach to online publication listings and scientific resource sharing. The Code4Lib Journal, 2010(9), 2520.
Abstract
The Max Planck Institute (MPI) for Psycholinguistics has developed a service to manage and present the scholarly output of its researchers. The PubMan database manages publication metadata and full texts of publications published by its scholars. All relevant information regarding a researcher’s work is brought together in this database, including supplementary materials and links to the MPI database for primary research data. The PubMan metadata is harvested into the MPI website CMS (Plone). The system developed for the creation of the publication lists allows the researcher to create a selection of the harvested data in a variety of formats. -
Ringersma, J., Zinn, C., & Koenig, A. (2010). Eureka! User friendly access to the MPI linguistic data archive. SDV - Sprache und Datenverarbeitung/International Journal for Language Data Processing. [Special issue on Usability aspects of hypermedia systems], 34(1), 67-79.
Abstract
The MPI archive hosts a rich and diverse set of linguistic resources, containing some 300,000 audio, video and text resources, which are described by some 100,000 metadata files. New data is ingested on a daily basis, and there is an increasing need to facilitate easy access for both expert and novice users. In this paper, we describe various tools that help users to view all archived content: the IMDI Browser, providing metadata-based access through structured tree navigation and search; a faceted browser where users select from a few distinctive metadata fields (facets) to find the resource(s) they need; a Google Earth overlay where resources can be located via geographic reference; purpose-built web portals giving pre-fabricated access to a well-defined part of the archive; lexicon-based entry points to parts of the archive where browsing a lexicon gives access to non-linguistic material; and finally, an ontology-based approach where lexical spaces are complemented with conceptual ones to give a more structured extra-linguistic view of the languages and cultures the archive helps to document. -
Ringersma, J., & Kemps-Snijders, M. (2010). Reaction to the LEXUS review in the LD&C, Vol.3, No 2. Language Documentation & Conservation, 4(2), 75-77. Retrieved from http://hdl.handle.net/10125/4469.
Abstract
This technology review gives an overview of LEXUS, the MPI online lexicon tool, and its new functionalities. It is a reaction to a review by Kristina Kotcheva in Language Documentation and Conservation 3(2). -
Roberson, D., Kikutani, M., Döge, P., Whitaker, L., & Majid, A. (2012). Shades of emotion: What the addition of sunglasses or masks to faces reveals about the development of facial expression processing. Cognition, 125, 195-206. doi:10.1016/j.cognition.2012.06.018.
Abstract
Three studies investigated developmental changes in facial expression processing between 3 years of age and adulthood. For adults and older children, the addition of sunglasses to upright faces caused an equivalent decrement in performance to face inversion. However, younger children showed better classification of expressions of faces wearing sunglasses than children who saw the same faces un-occluded. When the mouth area was occluded with a mask, children under nine years showed no impairment in expression classification, relative to un-occluded faces. An early selective focus of attention on the eyes may be optimal for socialization, but militate against accurate expression classification. The data support a model in which a threshold level of attentional control must be reached before children can develop adult-like configural processing skills and be flexible in their use of face-processing strategies. -
Roberts, L., & Meyer, A. S. (Eds.). (2012). Individual differences in second language acquisition [Special Issue]. Language Learning, 62(Supplement S2). -
Roberts, L., & Meyer, A. S. (2012). Individual differences in second language learning: Introduction. Language Learning, 62(Supplement S2), 1-4. doi:10.1111/j.1467-9922.2012.00703.x.
Abstract
First paragraph: The topic of the workshop from which this volume comes, “Individual Differences in Second Language Learning,” is timely and important for both practical and theoretical reasons. The practical reasons are obvious: While many people have some knowledge of a second or further language, there is enormous variability in how well they know these languages. Much of this variability is, of course, likely to be due to differences in the time spent studying or being immersed in the language, but even in similar learning environments learners differ greatly in how quickly they pick up a language and in their ultimate level of proficiency. -
Roberts, L. (2012). Individual differences in second language sentence processing. Language Learning, 62(Supplement S2), 172-188. doi:10.1111/j.1467-9922.2012.00711.x.
Abstract
As is the case in traditional second language (L2) acquisition research, a major question in the field of L2 real-time sentence processing is the extent to which L2 learners process the input like native speakers. Where differences are observed, the underlying causes could be the influence of the learner's first language and/or differences (fundamental or not) in the use of processing strategies between learners and native speakers. Another factor that may account for L1–L2 differences, perhaps in combination with others, is individual variability in general levels of proficiency or in learners’ general cognitive capacities, such as working memory and processing speed. However, systematic research into the effects of such individual differences on L2 real-time sentence processing has yet to be done, because researchers in the main attempt to control for individual differences in general cognitive capacities rather than to investigate them in their own right; nevertheless, a review of the current work on L2 sentence and discourse processing raises some interesting findings. An overview of this research is presented in this paper, highlighting what appear to be the circumstances under which individual differences in factors such as working memory capacity and proficiency do or do not affect L2 sentence processing. Taken together, the data suggest that it is only under certain experimental circumstances—specifically, when participants are asked to perform a metalinguistic task directing their attention to the manipulation at the same time as comprehending the input—that individual differences in such factors as insufficient L2 proficiency and/or cognitive processing limitations, like speed and working memory, influence L2 learners’ real-time processing of the target input. Under these circumstances, L2 learners with, for instance, a higher working memory capacity or greater proficiency are more likely to process the input like native speakers. Otherwise, learners appear to process the input shallowly, irrespective of individual variability. -
Roberts, L., Howard, M., O'Laorie, M., & Singleton, D. (Eds.). (2010). EUROSLA Yearbook 10. Amsterdam: John Benjamins.
Abstract
The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English. -
Roberts, L. (2010). Parsing the L2 input, an overview: Investigating L2 learners’ processing of syntactic ambiguities and dependencies in real-time comprehension. In G. D. Véronique (Ed.), Language, Interaction and Acquisition [Special issue] (pp. 189-205). Amsterdam: Benjamins.
Abstract
The acquisition of second language (L2) syntax has been central to the study of L2 acquisition, but recently there has been an interest in how learners apply their L2 syntactic knowledge to the input in real-time comprehension. Investigating L2 learners’ moment-by-moment syntactic analysis during listening or reading of sentence as it unfolds — their parsing of the input — is important, because language learning involves both the acquisition of knowledge and the ability to use it in real time. Using methods employed in monolingual processing research, investigations often focus on the processing of temporary syntactic ambiguities and structural dependencies. Investigating ambiguities involves examining parsing decisions at points in a sentence where there is a syntactic choice and this can offer insights into the nature of the parsing mechanism, and in particular, its processing preferences. Studying the establishment of syntactic dependencies at the critical point in the input allows for an investigation of how and when different kinds of information (e.g., syntactic, semantic, pragmatic) are put to use in real-time interpretation. Within an L2 context, further questions are of interest and familiar from traditional L2 acquisition research. Specifically, how native-like are the parsing procedures that L2 learners apply when processing the L2 input? What is the role of the learner’s first language (L1)? And, what are the effects of individual factors such as age, proficiency/dominance and working memory on L2 parsing? In the current paper I will provide an overview of the findings of some experimental research designed to investigate these questions. -
Roberts, L. (2012). Sentence and discourse processing in second language comprehension. In C. A. Chapelle (Ed.), Encyclopedia of Applied Linguistics. Chichester: Wiley-Blackwell. doi:10.1002/9781405198431.wbeal1063.
Abstract
In applied linguistics (AL), researchers have always been concerned with second language (L2) learners' knowledge of the target language (TL), investigating the development of TL grammar, vocabulary, and phonology, for instance.