Nozais, V., Forkel, S. J., Petit, L., Talozzi, L., Corbetta, M., Thiebaut de Schotten, M., & Joliot, M. (2023). Atlasing white matter and grey matter joint contributions to resting-state networks in the human brain. Communications Biology, 6: 726. doi:10.1038/s42003-023-05107-3.
Abstract
Over the past two decades, the study of resting-state functional magnetic resonance imaging has revealed that functional connectivity within and between networks is linked to cognitive states and pathologies. However, the white matter connections supporting this connectivity remain only partially described. We developed a method to jointly map the white and grey matter contributing to each resting-state network (RSN). Using the Human Connectome Project, we generated an atlas of 30 RSNs. The method also highlighted the overlap between networks, which revealed that most of the brain’s white matter (89%) is shared between multiple RSNs, with 16% shared by at least 7 RSNs. These overlaps, especially the existence of regions shared by numerous networks, suggest that white matter lesions in these areas might strongly impact the communication within networks. We provide an atlas and an open-source software to explore the joint contribution of white and grey matter to RSNs and facilitate the study of the impact of white matter damage to these networks. In a first application of the software with clinical data, we were able to link stroke patients and impacted RSNs, showing that their symptoms aligned well with the estimated functions of the networks. -
Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.
Abstract
Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects.
-
Offrede, T., Mishra, C., Skantze, G., Fuchs, S., & Mooshammer, C. (2023). Do Humans Converge Phonetically When Talking to a Robot? In R. Skarnitzl, & J. Volin (Eds.), Proceedings of the 20th International Congress of Phonetic Sciences (pp. 3507-3511). Prague: GUARANT International.
Abstract
Phonetic convergence—i.e., adapting one’s speech towards that of an interlocutor—has been shown to occur in human-human conversations as well as human-machine interactions. Here, we investigate the hypothesis that human-to-robot convergence is influenced by the human’s perception of the robot and by the conversation’s topic. We conducted a within-subjects experiment in which 33 participants interacted with two robots differing in their eye gaze behavior—one looked constantly at the participant; the other produced gaze aversions, similarly to a human’s behavior. Additionally, the robot asked questions with increasing intimacy levels. We observed that the speakers tended to converge on F0 to the robots. However, this convergence to the robots was not modulated by how the speakers perceived them or by the topic’s intimacy. Interestingly, speakers produced lower F0 means when talking about more intimate topics. We discuss these findings in terms of current theories of conversational convergence. -
Oliveira‑Stahl, G., Farboud, S., Sterling, M. L., Heckman, J. J., Van Raalte, B., Lenferink, D., Van der Stam, A., Smeets, C. J. L. M., Fisher, S. E., & Englitz, B. (2023). High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Scientific Reports, 13: 5219. doi:10.1038/s41598-023-31554-3.
Abstract
Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2–3-fold improvement in accuracy (13.1–14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.
Additional information
supplementary movies and figures -
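The SLIM algorithm itself is not spelled out in the abstract above, but the general principle of microphone-array localization is easier to follow with a sketch. The minimal Python example below illustrates plain time-difference-of-arrival (TDOA) localization with four microphones via cross-correlation and a grid search; the arena geometry, sampling rate, and grid step are illustrative assumptions, and this is a generic illustration, not the authors' SLIM method.

```python
# Minimal TDOA-based 2D localization sketch (illustrative only; not the SLIM algorithm).
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air (assumption)
FS = 250_000             # Hz, ultrasonic-range sampling rate (assumption)

# Assumed microphone positions (metres) at the corners of a 40 x 40 cm arena.
MICS = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.4], [0.4, 0.4]])

def estimate_delay(sig_a, sig_b):
    """Delay of sig_b relative to sig_a (seconds) via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / FS

def localize(signals, grid_step=0.005):
    """Grid search for the source position whose predicted TDOAs best match the measured ones."""
    measured = np.array([estimate_delay(signals[0], s) for s in signals[1:]])
    coords = np.arange(0.0, 0.4 + grid_step, grid_step)
    best_pos, best_err = None, np.inf
    for x in coords:
        for y in coords:
            dists = np.linalg.norm(MICS - np.array([x, y]), axis=1)
            predicted = (dists[1:] - dists[0]) / SPEED_OF_SOUND
            err = np.sum((measured - predicted) ** 2)
            if err < best_err:
                best_pos, best_err = (x, y), err
    return best_pos
```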
Orfanidou, E., Adam, R., Morgan, G., & McQueen, J. M. (2010). Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language, 62(3), 272-283. doi:10.1016/j.jml.2009.12.001.
Abstract
Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC), listeners segment speech so as to avoid impossible words in the input. We argue here that the PWC is a modality-general principle. Deaf signers of British Sign Language (BSL) spotted real BSL signs embedded in nonsense-sign contexts more easily when the nonsense signs were possible BSL signs than when they were not. A control experiment showed that there were no articulatory differences between the different contexts. A second control experiment on segmentation in spoken Dutch strengthened the claim that the main BSL result likely reflects the operation of a lexical-viability constraint. It appears that signed and spoken languages, in spite of radical input differences, are segmented so as to leave no residues of the input that cannot be words. -
Ortega, G., & Morgan, G. (2010). Comparing child and adult development of a visual phonological system. Language interaction and acquisition, 1(1), 67-81. doi:10.1075/lia.1.1.05ort.
Abstract
Research has documented systematic articulation differences in young children’s first signs compared with the adult input. Explanations range from the implementation of phonological processes to cognitive limitations and motor immaturity. One way of disentangling these possible explanations is to investigate signing articulation in adults who do not know any sign language but have mature cognitive and motor development. Some preliminary observations are provided on signing accuracy in a group of adults using a sign repetition methodology. Adults make the most errors with marked handshapes and produce movement and location errors akin to those reported for child signers. Secondly, there are both positive and negative influences of sign iconicity on sign repetition in adults. Possible reasons are discussed for these iconicity effects based on gesture. -
Ortega, G. (2010). MSJE TXT: Un evento social. Lectura y vida: Revista latinoamericana de lectura, 4, 44-53.
-
Otake, T., McQueen, J. M., & Cutler, A. (2010). Competition in the perception of spoken Japanese words. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 114-117).
Abstract
Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors, recognition of the embedded word was more difficult than when such a sequence was compatible with few competitors. This clear effect of competition, established here for preceding context in Japanese, joins similar demonstrations, in other languages and for following contexts, to underline that the functional architecture of the human spoken-word recognition system is a universal one. -
Özer, D., Karadöller, D. Z., Özyürek, A., & Göksun, T. (2023). Gestures cued by demonstratives in speech guide listeners' visual attention during spatial language comprehension. Journal of Experimental Psychology: General, 152(9), 2623-2635. doi:10.1037/xge0001402.
Abstract
Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying “The candle is here” and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners’ comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (“here”) compared to when they express redundant information to speech (e.g., “right”) and (b) gazing at gestures related to listeners’ information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners’ comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners’ visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message. -
Ozyurek, A., Zwitserlood, I., & Perniss, P. M. (2010). Locative expressions in signed languages: A view from Turkish Sign Language (TID). Linguistics, 48(5), 1111-1145. doi:10.1515/LING.2010.036.
Abstract
Locative expressions encode the spatial relationship between two (or more) entities. In this paper, we focus on locative expressions in signed language, which use the visual-spatial modality for linguistic expression, specifically in Turkish Sign Language (Türk İşaret Dili, henceforth TİD). We show that TİD uses various strategies in discourse to encode the relation between a Ground entity (i.e., a bigger and/or backgrounded entity) and a Figure entity (i.e., a smaller entity, which is in the focus of attention). Some of these strategies exploit affordances of the visual modality for analogue representation and provide evidence for modality-specific effects on locative expressions in sign languages. However, other modality-specific strategies, e.g., the simultaneous expression of Figure and Ground, which have been reported for many other sign languages, occur only sparsely in TİD. Furthermore, TİD uses categorical as well as analogical structures in locative expressions. On the basis of these findings, we discuss differences and similarities between signed and spoken languages to broaden our understanding of the range of structures used in natural language (i.e., in both the visual-spatial and oral-aural modalities) to encode locative relations. A general linguistic theory of spatial relations, and specifically of locative expressions, must take all structures that might arise in both modalities into account before it can generalize over the human language faculty. -
Ozyurek, A. (2010). The role of iconic gestures in production and comprehension of language: Evidence from brain and behavior. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27 2009. Revised selected papers (pp. 1-10). Berlin: Springer. -
Parlatini, V., Itahashi, T., Lee, Y., Liu, S., Nguyen, T. T., Aoki, Y. Y., Forkel, S. J., Catani, M., Rubia, K., Zhou, J. H., Murphy, D. G., & Cortese, S. (2023). White matter alterations in Attention-Deficit/Hyperactivity Disorder (ADHD): a systematic review of 129 diffusion imaging studies with meta-analysis. Molecular Psychiatry, 28, 4098-4123. doi:10.1038/s41380-023-02173-1.
Abstract
Aberrant anatomical brain connections in attention-deficit/hyperactivity disorder (ADHD) are reported inconsistently across diffusion weighted imaging (DWI) studies. Based on a pre-registered protocol (Prospero: CRD42021259192), we searched PubMed, Ovid, and Web of Knowledge until 26/03/2022 to conduct a systematic review of DWI studies. We performed a quality assessment based on imaging acquisition, preprocessing, and analysis. Using signed differential mapping, we meta-analyzed a subset of the retrieved studies amenable to quantitative evidence synthesis, i.e., tract-based spatial statistics (TBSS) studies, in individuals of any age and, separately, in children, adults, and high-quality datasets. Finally, we conducted meta-regressions to test the effect of age, sex, and medication-naïvety. We included 129 studies (6739 ADHD participants and 6476 controls), of which 25 TBSS studies provided peak coordinates for case-control differences in fractional anisotropy (FA) (32 datasets) and 18 in mean diffusivity (MD) (23 datasets). The systematic review highlighted white matter alterations (especially reduced FA) in projection, commissural and association pathways of individuals with ADHD, which were associated with symptom severity and cognitive deficits. The meta-analysis showed a consistent reduced FA in the splenium and body of the corpus callosum, extending to the cingulum. Lower FA was related to older age, and case-control differences did not survive in the pediatric meta-analysis. About 68% of studies were of low quality, mainly due to acquisitions with non-isotropic voxels or lack of motion correction; and the sensitivity analysis in high-quality datasets yielded no significant results. Findings suggest prominent alterations in posterior interhemispheric connections subserving cognitive and motor functions affected in ADHD, although these might be influenced by non-optimal acquisition parameters/preprocessing. Absence of findings in children may be related to the late development of callosal fibers, which may enhance case-control differences in adulthood. Clinicodemographic and methodological differences were major barriers to consistency and comparability among studies, and should be addressed in future investigations.
Additional information
supplementary information prisma checklist peak coordinates 1 peak coordinates 2 -
Passmore, S., Barth, W., Greenhill, S. J., Quinn, K., Sheard, C., Argyriou, P., Birchall, J., Bowern, C., Calladine, J., Deb, A., Diederen, A., Metsäranta, N. P., Araujo, L. H., Schembri, R., Hickey-Hall, J., Honkola, T., Mitchell, A., Poole, L., Rácz, P. M., Roberts, S. G., Ross, R. M., Thomas-Colquhoun, E., Evans, N., & Jordan, F. M. (2023). Kinbank: A global database of kinship terminology. PLOS ONE, 18: e0283218. doi:10.1371/journal.pone.0283218.
Abstract
For a single species, human kinship organization is both remarkably diverse and strikingly organized. Kinship terminology is the structured vocabulary used to classify, refer to, and address relatives and family. Diversity in kinship terminology has been analyzed by anthropologists for over 150 years, although recurrent patterning across cultures remains incompletely explained. Despite the wealth of kinship data in the anthropological record, comparative studies of kinship terminology are hindered by data accessibility. Here we present Kinbank, a new database of 210,903 kinterms from a global sample of 1,229 spoken languages. Using open-access and transparent data provenance, Kinbank offers an extensible resource for kinship terminology, enabling researchers to explore the rich diversity of human family organization and to test longstanding hypotheses about the origins and drivers of recurrent patterns. We illustrate our contribution with two examples. We demonstrate strong gender bias in the phonological structure of parent terms across 1,022 languages, and we show that there is no evidence for a coevolutionary relationship between cross-cousin marriage and bifurcate-merging terminology in Bantu languages. Analysing kinship data is notoriously challenging; Kinbank aims to eliminate data accessibility issues from that challenge and provide a platform to build an interdisciplinary understanding of kinship.
Additional information
Supporting Information -
Paulat, N. S., Storer, J. M., Moreno-Santillán, D. D., Osmanski, A. B., Sullivan, K. A. M., Grimshaw, J. R., Korstian, J., Halsey, M., Garcia, C. J., Crookshanks, C., Roberts, J., Smit, A. F. A., Hubley, R., Rosen, J., Teeling, E. C., Vernes, S. C., Myers, E., Pippel, M., Brown, T., Hiller, M., Zoonomia Consortium, Rojas, D., Dávalos, L. M., Lindblad-Toh, K., Karlsson, E. K., & Ray, D. A. (2023). Chiropterans are a hotspot for horizontal transfer of DNA transposons in mammalia. Molecular Biology and Evolution, 40(5): msad092. doi:10.1093/molbev/msad092.
Abstract
Horizontal transfer of transposable elements (TEs) is an important mechanism contributing to genetic diversity and innovation. Bats (order Chiroptera) have repeatedly been shown to experience horizontal transfer of TEs at what appears to be a high rate compared with other mammals. We investigated the occurrence of horizontally transferred (HT) DNA transposons involving bats. We found over 200 putative HT elements within bats; 16 transposons were shared across distantly related mammalian clades, and 2 other elements were shared with a fish and two lizard species. Our results indicate that bats are a hotspot for horizontal transfer of DNA transposons. These events broadly coincide with the diversification of several bat clades, supporting the hypothesis that DNA transposon invasions have contributed to genetic diversification of bats. -
Pender, R., Fearon, P., St Pourcain, B., Heron, J., & Mandy, W. (2023). Developmental trajectories of autistic social traits in the general population. Psychological Medicine, 53(3), 814-822. doi:10.1017/S0033291721002166.
Abstract
Background
Autistic people show diverse trajectories of autistic traits over time, a phenomenon labelled ‘chronogeneity’. For example, some show a decrease in symptoms, whilst others experience an intensification of difficulties. Autism spectrum disorder (ASD) is a dimensional condition, representing one end of a trait continuum that extends throughout the population. To date, no studies have investigated chronogeneity across the full range of autistic traits. We investigated the nature and clinical significance of autism trait chronogeneity in a large, general population sample.
Methods
Autistic social/communication traits (ASTs) were measured in the Avon Longitudinal Study of Parents and Children using the Social and Communication Disorders Checklist (SCDC) at ages 7, 10, 13 and 16 (N = 9744). We used Growth Mixture Modelling (GMM) to identify groups defined by their AST trajectories. Measures of ASD diagnosis, sex, IQ and mental health (internalising and externalising) were used to investigate external validity of the derived trajectory groups.
Results
The selected GMM model identified four AST trajectory groups: (i) Persistent High (2.3% of sample), (ii) Persistent Low (83.5%), (iii) Increasing (7.3%) and (iv) Decreasing (6.9%) trajectories. The Increasing group, in which females were a slight majority (53.2%), showed dramatic increases in SCDC scores during adolescence, accompanied by escalating internalising and externalising difficulties. Two-thirds (63.6%) of the Decreasing group were male.
Conclusions
Clinicians should note that for some young people autism-trait-like social difficulties first emerge during adolescence accompanied by problems with mood, anxiety, conduct and attention. A converse, majority-male group shows decreasing social difficulties during adolescence.
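For readers unfamiliar with trajectory-based grouping, the toy Python sketch below clusters simulated four-wave score vectors with a Gaussian mixture. It is only a simplified stand-in for the Growth Mixture Modelling used in the study (which fits latent growth factors rather than raw score vectors); the simulated data, group count, and variable names are assumptions, not the ALSPAC data or the paper's model.

```python
# Toy sketch: grouping score trajectories across four ages with a Gaussian mixture.
# Real Growth Mixture Modelling fits latent growth factors; this simplification
# clusters raw 4-point score vectors. Data and group count are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ages = np.array([7, 10, 13, 16])

# Simulated SCDC-like scores: mostly low-stable, plus small increasing and decreasing groups.
low = rng.poisson(2, size=(850, 4))
increasing = rng.poisson([2, 4, 8, 12], size=(75, 4))
decreasing = rng.poisson([12, 8, 4, 2], size=(75, 4))
scores = np.vstack([low, increasing, decreasing]).astype(float)

gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(scores)

# Inspect the mean trajectory of each derived group.
for k in range(3):
    print(f"group {k}: n={np.sum(labels == k):4d}, mean scores by age:",
          np.round(scores[labels == k].mean(axis=0), 1))
```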
-
Pereira Soares, S. M., Chaouch-Orozco, A., & González Alonso, J. (2023). Innovations and challenges in acquisition and processing methodologies for L3/Ln. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 661-682). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.026.
Abstract
The advent of psycholinguistic and neurolinguistic methodologies has provided new insights into theories of language acquisition. Sequential multilingualism is no exception, and some of the most recent work on the subject has incorporated a particular focus on language processing. This chapter surveys some of the work on the processing of lexical and morphosyntactic aspects of third or further languages, with different offline and online methodologies. We also discuss how, while increasingly sophisticated techniques and experimental designs have improved our understanding of third language acquisition and processing, simpler but clever designs can answer pressing questions in our theoretical debate. We provide examples of both sophistication and clever simplicity in experimental design, and argue that the field would benefit from incorporating a combination of both concepts into future work. -
Perniss, P. M., Thompson, R. L., & Vigliocco, G. (2010). Iconicity as a general property of language: Evidence from spoken and signed languages [Review article]. Frontiers in Psychology, 1, E227. doi:10.3389/fpsyg.2010.00227.
Abstract
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity need also be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor and perceptual experience. -
Petrich, P., Piedrasanta, R., Figuerola, H., & Le Guen, O. (2010). Variantes y variaciones en la percepción de los antepasados entre los Mayas. In A. Monod Becquelin, A. Breton, & M. H. Ruz (Eds.), Figuras Mayas de la diversidad (pp. 255-275). Mérida, Mexico: Universidad autónoma de México. -
Petrovic, P., Kalso, E., Petersson, K. M., Andersson, J., Fransson, P., & Ingvar, M. (2010). A prefrontal non-opioid mechanism in placebo analgesia. Pain, 150, 59-65. doi:10.1016/j.pain.2010.03.011.
Abstract
Behavioral studies have suggested that placebo analgesia is partly mediated by the endogenous opioid system. Expanding on these results we have shown that the opioid-receptor-rich rostral anterior cingulate cortex (rACC) is activated in both placebo and opioid analgesia. However, there are also differences between the two treatments. While opioids have direct pharmacological effects, acting on the descending pain inhibitory system, placebo analgesia depends on neocortical top-down mechanisms. An important difference may be that expectations are met to a lesser extent in placebo treatment as compared with a specific treatment, yielding a larger error signal. As these processes previously have been shown to influence other types of perceptual experiences, we hypothesized that they also may drive placebo analgesia. Imaging studies suggest that lateral orbitofrontal cortex (lObfc) and ventrolateral prefrontal cortex (vlPFC) are involved in processing expectation and error signals. We re-analyzed two independent functional imaging experiments related to placebo analgesia and emotional placebo to probe for a differential processing in these regions during placebo treatment vs. opioid treatment and to test if this activity is associated with the placebo response. In the first dataset lObfc and vlPFC showed an enhanced activation in placebo analgesia vs. opioid analgesia. Furthermore, the rACC activity co-varied with the prefrontal regions in the placebo condition specifically. A similar correlation between rACC and vlPFC was reproduced in another dataset involving emotional placebo and correlated with the degree of the placebo effect. Our results thus support that placebo is different from specific treatment with a prefrontal top-down influence on rACC. -
Piai, V., & Eikelboom, D. (2023). Brain areas critical for picture naming: A systematic review and meta-analysis of lesion-symptom mapping studies. Neurobiology of Language, 4(2), 280-296. doi:10.1162/nol_a_00097.
Abstract
Lesion-symptom mapping (LSM) studies have revealed brain areas critical for naming, typically finding significant associations between damage to left temporal, inferior parietal, and inferior frontal regions and impoverished naming performance. However, specific subregions found in the available literature vary. Hence, the aim of this study was to perform a systematic review and meta-analysis of published lesion-based findings, obtained from studies with unique cohorts investigating brain areas critical for accuracy in naming in stroke patients at least 1 month post-onset. An anatomic likelihood estimation (ALE) meta-analysis of these LSM studies was performed. Ten papers entered the ALE meta-analysis, with similar lesion coverage over left temporal and left inferior frontal areas. This small number is a major limitation of the present study. Clusters were found in left anterior temporal lobe, posterior temporal lobe extending into inferior parietal areas, in line with the arcuate fasciculus, and in pre- and postcentral gyri and middle frontal gyrus. No clusters were found in left inferior frontal gyrus. These results were further substantiated by examining five naming studies that investigated performance beyond global accuracy, corroborating the ALE meta-analysis results. The present review and meta-analysis highlight the involvement of left temporal and inferior parietal cortices in naming, and of mid to posterior portions of the temporal lobe in particular in conceptual-lexical retrieval for speaking.
Additional information
data -
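As background on the meta-analytic method named in the entry above, the sketch below shows the core idea of anatomic likelihood estimation: smoothing each study's peak coordinates into a modelled-activation map and combining maps across studies as a probabilistic union. Grid size, smoothing width, and coordinates are illustrative assumptions; a real ALE analysis additionally requires anatomical masking, sample-size-dependent kernels, and permutation-based thresholding.

```python
# Toy sketch of the core ALE computation: blur each study's peaks with a 3D Gaussian
# into a "modelled activation" (MA) map, then take a voxel-wise probabilistic union
# across studies. All numbers are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

GRID = (91, 109, 91)   # 2 mm MNI-like grid (assumption)
SIGMA = 4.0            # Gaussian width in voxels (assumption)

def ma_map(peaks_vox):
    """Modelled-activation map for one study from its peak voxel coordinates."""
    img = np.zeros(GRID)
    for x, y, z in peaks_vox:
        img[x, y, z] = 1.0
    img = gaussian_filter(img, sigma=SIGMA)
    return img / img.max() if img.max() > 0 else img

def ale(studies):
    """Voxel-wise ALE: probability that at least one study 'activates' the voxel."""
    ale_map = np.zeros(GRID)
    for peaks in studies:
        ale_map = 1.0 - (1.0 - ale_map) * (1.0 - ma_map(peaks))
    return ale_map

# Two hypothetical studies reporting nearby peaks (voxel coordinates).
example = [[(30, 40, 35), (32, 42, 36)], [(31, 41, 34)]]
print(ale(example).max())
```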
Pijnacker, J. (2010). Defeasible inference in autism: A behavioral and electrophysiological approach. PhD Thesis, Radboud University Nijmegen, Nijmegen.
-
Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2010). Exceptions and anomalies: An ERP study on context sensitivity in autism. Neuropsychologia, 48, 2940-2951. doi:10.1016/j.neuropsychologia.2010.06.003.
Abstract
Several studies have demonstrated that people with ASD and intact language skills still have problems processing linguistic information in context. Given this evidence for reduced sensitivity to linguistic context, the question arises how contextual information is actually processed by people with ASD. In this study, we used event-related brain potentials (ERPs) to examine context sensitivity in high-functioning adults with autistic disorder (HFA) and Asperger syndrome at two levels: at the level of sentence processing and at the level of solving reasoning problems. We found that sentence context as well as reasoning context had an immediate ERP effect in adults with Asperger syndrome, as in matched controls. Both groups showed a typical N400 effect and a late positive component for the sentence conditions, and a sustained negativity for the reasoning conditions. In contrast, the HFA group demonstrated neither an N400 effect nor a sustained negativity. However, the HFA group showed a late positive component which was larger for semantically anomalous sentences than congruent sentences. Because sentence context had a modulating effect in a later phase, semantic integration is perhaps less automatic in HFA, and presumably more elaborate processes are needed to arrive at a sentence interpretation. -
Pillas, D., Hoggart, C. J., Evans, D. M., O'Reilly, P. F., Sipilä, K., Lähdesmäki, R., Millwood, I. Y., Kaakinen, M., Netuveli, G., Blane, D., Charoen, P., Sovio, U., Pouta, A., Freimer, N., Hartikainen, A.-L., Laitinen, J., Vaara, S., Glaser, B., Crawford, P., Timpson, N. J., Ring, S. M., Deng, G., Zhang, W., McCarthy, M. I., Deloukas, P., Peltonen, L., Elliott, P., Coin, L. J. M., Smith, G. D., & Jarvelin, M.-R. (2010). Genome-wide association study reveals multiple loci associated with primary tooth development during infancy. PLoS Genetics, 6(2): e1000856. doi:10.1371/journal.pgen.1000856.
Abstract
Tooth development is a highly heritable process which relates to other growth and developmental processes, and which interacts with the development of the entire craniofacial complex. Abnormalities of tooth development are common, with tooth agenesis being the most common developmental anomaly in humans. We performed a genome-wide association study of time to first tooth eruption and number of teeth at one year in 4,564 individuals from the 1966 Northern Finland Birth Cohort (NFBC1966) and 1,518 individuals from the Avon Longitudinal Study of Parents and Children (ALSPAC). We identified 5 loci at P < 5x10(-8), and 5 with suggestive association (P < 5x10(-6)). The loci included several genes with links to tooth and other organ development (KCNJ2, EDA, HOXB2, RAD51L1, IGF2BP1, HMGA2, MSRB3). Genes at four of the identified loci are implicated in the development of cancer. A variant within the HOXB gene cluster was associated with occlusion defects requiring orthodontic treatment by age 31 years.
Additional information
http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1000856#s5 -
Pluymaekers, M., Ernestus, M., Baayen, R. H., & Booij, G. (2010). Morphological effects on fine phonetic detail: The case of Dutch -igheid. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 511-532). Berlin: De Gruyter. -
St Pourcain, B., Wang, K., Glessner, J. T., Golding, J., Steer, C., Ring, S. M., Skuse, D. H., Grant, S. F. A., Hakonarson, H., & Davey Smith, G. (2010). Association Between a High-Risk Autism Locus on 5p14 and Social Communication Spectrum Phenotypes in the General Population. American Journal of Psychiatry, 167(11), 1364-1372. doi:10.1176/appi.ajp.2010.09121789.
Abstract
Objective: Recent genome-wide analysis identified a genetic variant on 5p14.1 (rs4307059), which is associated with risk for autism spectrum disorder. This study investigated whether rs4307059 also operates as a quantitative trait locus underlying a broader autism phenotype in the general population, focusing specifically on the social communication aspect of the spectrum. Method: Study participants were 7,313 children from the Avon Longitudinal Study of Parents and Children. Single-trait and joint-trait genotype associations were investigated for 29 measures related to language and communication, verbal intelligence, social interaction, and behavioral adjustment, assessed between ages 3 and 12 years. Analyses were performed in one-sided or directed mode and adjusted for multiple testing, trait interrelatedness, and random genotype dropout. Results: Single phenotype analyses showed that an increased load of rs4307059 risk allele is associated with stereotyped conversation and lower pragmatic communication skills, as measured by the Children's Communication Checklist (at a mean age of 9.7 years). In addition a trend toward a higher frequency of identification of special educational needs (at a mean age of 11.8 years) was observed. Variation at rs4307059 was also associated with the phenotypic profile of studied traits. This joint signal was fully explained neither by single-trait associations nor by overall behavioral adjustment problems but suggested a combined effect, which manifested through multiple sub-threshold social, communicative, and cognitive impairments. Conclusions: Our results suggest that common variation at 5p14.1 is associated with social communication spectrum phenotypes in the general population and support the role of rs4307059 as a quantitative trait locus for autism spectrum disorder.
Additional information
http://ajp.psychiatryonline.org/doi/suppl/10.1176/appi.ajp.2010.09121789 -
Puccini, D., Hassemer, M., Salomo, D., & Liszkowski, U. (2010). The type of shared activity shapes caregiver and infant communication. Gesture, 10(2/3), 279-297. doi:10.1075/gest.10.2-3.08puc.
Abstract
For the beginning language learner, communicative input is not based on linguistic codes alone. This study investigated two extralinguistic factors which are important for infants’ language development: the type of ongoing shared activity and non-verbal, deictic gestures. The natural interactions of 39 caregivers and their 12-month-old infants were recorded in two semi-natural contexts: a free play situation based on action and manipulation of objects, and a situation based on regard of objects, broadly analogous to an exhibit. Results show that the type of shared activity structures both caregivers’ language usage and caregivers’ and infants’ gesture usage. Further, there is a specific pattern with regard to how caregivers integrate speech with particular deictic gesture types. The findings demonstrate a pervasive influence of shared activities on human communication, even before language has emerged. The type of shared activity and caregivers’ systematic integration of specific forms of deictic gestures with language provide infants with a multimodal scaffold for a usage-based acquisition of language. -
Pyykkönen, P., & Järvikivi, J. (2010). Activation and persistence of implicit causality information in spoken language comprehension. Experimental Psychology, 57, 5-16. doi:10.1027/1618-3169/a000002.
Abstract
A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as is predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information was persistent as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent, rather it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus. -
Pyykkönen, P., Matthews, D., & Järvikivi, J. (2010). Three-year-olds are sensitive to semantic prominence during online spoken language comprehension: A visual world study of pronoun resolution. Language and Cognitive Processes, 25, 115-129. doi:10.1080/01690960902944014.
Abstract
Recent evidence from adult pronoun comprehension suggests that semantic factors such as verb transitivity affect referent salience and thereby anaphora resolution. We tested whether the same semantic factors influence pronoun comprehension in young children. In a visual world study, 3-year-olds heard stories that began with a sentence containing either a high or a low transitivity verb. Looking behaviour to pictures depicting the subject and object of this sentence was recorded as children listened to a subsequent sentence containing a pronoun. Children showed a stronger preference to look to the subject as opposed to the object antecedent in the low transitivity condition. In addition there were general preferences (1) to look to the subject in both conditions and (2) to look more at both potential antecedents in the high transitivity condition. This suggests that children, like adults, are affected by semantic factors, specifically semantic prominence, when interpreting anaphoric pronouns. -
Quaresima, A., Fitz, H., Duarte, R., Van den Broek, D., Hagoort, P., & Petersson, K. M. (2023). The Tripod neuron: A minimal structural reduction of the dendritic tree. The Journal of Physiology, 601(15), 3007-3437. doi:10.1113/JP283399.
Abstract
Neuron models with explicit dendritic dynamics have shed light on mechanisms for coincidence detection, pathway selection and temporal filtering. However, it is still unclear which morphological and physiological features are required to capture these phenomena. In this work, we introduce the Tripod neuron model and propose a minimal structural reduction of the dendritic tree that is able to reproduce these computations. The Tripod is a three-compartment model consisting of two segregated passive dendrites and a somatic compartment modelled as an adaptive, exponential integrate-and-fire neuron. It incorporates dendritic geometry, membrane physiology and receptor dynamics as measured in human pyramidal cells. We characterize the response of the Tripod to glutamatergic and GABAergic inputs and identify parameters that support supra-linear integration, coincidence-detection and pathway-specific gating through shunting inhibition. Following NMDA spikes, the Tripod neuron generates plateau potentials whose duration depends on the dendritic length and the strength of synaptic input. When fitted with distal compartments, the Tripod encodes previous activity into a dendritic depolarized state. This dendritic memory allows the neuron to perform temporal binding, and we show that it solves transition and sequence detection tasks on which a single-compartment model fails. Thus, the Tripod can account for dendritic computations previously explained only with more detailed neuron models or neural networks. Due to its simplicity, the Tripod neuron can be used efficiently in simulations of larger cortical circuits. -
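The somatic compartment of the Tripod is described above as an adaptive exponential integrate-and-fire (AdEx) neuron. The sketch below shows only that standard AdEx dynamics with forward-Euler integration, using generic textbook parameter values rather than the fitted Tripod parameters, and omitting the two dendritic compartments and receptor kinetics that the full model adds.

```python
# Minimal AdEx (adaptive exponential integrate-and-fire) sketch for the somatic
# compartment only; parameters are generic textbook values (assumptions), not the
# Tripod's fitted human pyramidal-cell parameters.
import numpy as np

C, gL, EL = 281.0, 30.0, -70.6        # pF, nS, mV
VT, dT, Vr = -50.4, 2.0, -70.6        # mV
a, b, tau_w = 4.0, 80.5, 144.0        # nS, pA, ms
dt, T, I_ext = 0.1, 500.0, 800.0      # ms, ms, pA (constant input current)

v, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    # Membrane potential: leak + exponential spike-initiation term + input - adaptation.
    dv = (-gL * (v - EL) + gL * dT * np.exp((v - VT) / dT) + I_ext - w) / C
    dw = (a * (v - EL) - w) / tau_w
    v += dt * dv
    w += dt * dw
    if v > 0.0:                        # spike detected: reset and increment adaptation
        spikes.append(step * dt)
        v = Vr
        w += b

print(f"{len(spikes)} spikes in {T} ms of simulated time")
```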
Raghavan, R., Raviv, L., & Peeters, D. (2023). What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition, 240: 105581. doi:10.1016/j.cognition.2023.105581.
Abstract
Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself. -
Raimondi, T., Di Panfilo, G., Pasquali, M., Zarantonello, M., Favaro, L., Savini, T., Gamba, M., & Ravignani, A. (2023). Isochrony and rhythmic interaction in ape duetting. Proceedings of the Royal Society B: Biological Sciences, 290: 20222244. doi:10.1098/rspb.2022.2244.
Abstract
How did rhythm originate in humans, and other species? One cross-cultural universal, frequently found in human music, is isochrony: when note onsets repeat regularly like the ticking of a clock. Another universal consists in synchrony (e.g. when individuals coordinate their notes so that they are sung at the same time). An approach to biomusicology focuses on similarities and differences across species, trying to build phylogenies of musical traits. Here we test for the presence of, and a link between, isochrony and synchrony in a non-human animal. We focus on the songs of one of the few singing primates, the lar gibbon (Hylobates lar), extracting temporal features from their solo songs and duets. We show that another ape exhibits one rhythmic feature at the core of human musicality: isochrony. We show that an enhanced call rate overall boosts isochrony, suggesting that respiratory physiological constraints play a role in determining the song's rhythmic structure. However, call rate alone cannot explain the flexible isochrony we witness. Isochrony is plastic and modulated depending on the context of emission: gibbons are more isochronous when duetting than singing solo. We present evidence for rhythmic interaction: we find statistical causality between one individual's note onsets and the co-singer's onsets, and a higher than chance degree of synchrony in the duets. Finally, we find a sex-specific trade-off between individual isochrony and synchrony. Gibbon's plasticity for isochrony and rhythmic overlap may suggest a potential shared selective pressure for interactive vocal displays in singing primates. This pressure may have convergently shaped human and gibbon musicality while acting on a common neural primate substrate. Beyond humans, singing primates are promising models to understand how music and, specifically, a sense of rhythm originated in the primate phylogeny. -
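Isochrony, as used in the entry above, can be quantified from note onset times; one common measure in this literature compares each inter-onset interval with its successor, with ratios near 0.5 indicating clock-like timing. The short Python sketch below computes such ratios for made-up onset sequences; it illustrates the measure only and is not the paper's full analysis pipeline.

```python
# Sketch of a common isochrony measure: ratios of adjacent inter-onset intervals (IOIs).
# Ratios clustering near 0.5 indicate clock-like (isochronous) timing. Onsets are made up.
import numpy as np

def rhythm_ratios(onsets):
    """Ratios r_k = IOI_k / (IOI_k + IOI_{k+1}); r ~ 0.5 means adjacent intervals are equal."""
    iois = np.diff(np.sort(np.asarray(onsets, dtype=float)))
    return iois[:-1] / (iois[:-1] + iois[1:])

# Hypothetical note onsets (seconds): a nearly regular vs. an irregular sequence.
regular = np.cumsum([0.0, 0.52, 0.49, 0.51, 0.50, 0.48, 0.53])
irregular = np.cumsum([0.0, 0.30, 0.90, 0.20, 0.70, 0.35, 1.10])

for name, onsets in [("regular", regular), ("irregular", irregular)]:
    r = rhythm_ratios(onsets)
    print(f"{name}: mean |r - 0.5| = {np.mean(np.abs(r - 0.5)):.3f}")
```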
Rapold, C. J. (2010). Beneficiary and other roles of the dative in Tashelhiyt. In F. Zúñiga, & S. Kittilä (Eds.), Benefactives and malefactives: Typological perspectives and case studies (pp. 351-376). Amsterdam: Benjamins.
Abstract
This paper explores the semantics of the dative in Tashelhiyt, a Berber language from Morocco. After a brief morphosyntactic overview of the dative in this language, I identify a wide range of its semantic roles, including possessor, experiencer, distributive and unintending causer. I arrange these roles in a semantic map and propose semantic links between the roles such as metaphorisation and generalisation. In the light of the Tashelhiyt data, the paper also proposes additions to previous semantic maps of the dative (Haspelmath 1999, 2003) and to Kittilä’s 2005 typology of beneficiary coding. -
Rapold, C. J. (2010). Defining converbs ten years on - A hitchhiker's guide. In S. Völlmin, A. Amha, C. J. Rapold, & S. Zaugg-Coretti (Eds.), Converbs, medial verbs, clause chaining and related issues (pp. 7-30). Köln: Rüdiger Köppe Verlag. -
Rasenberg, M. (2023). Mutual understanding from a multimodal and interactional perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Rasenberg, M., Amha, A., Coler, M., van Koppen, M., van Miltenburg, E., de Rijk, L., Stommel, W., & Dingemanse, M. (2023). Reimagining language: Towards a better understanding of language by including our interactions with non-humans. Linguistics in the Netherlands, 40, 309-317. doi:10.1075/avt.00095.ras.
Abstract
What is language and who or what can be said to have it? In this essay we consider this question in the context of interactions with non-humans, specifically: animals and computers. While perhaps an odd pairing at first glance, here we argue that these domains can offer contrasting perspectives through which we can explore and reimagine language. The interactions between humans and animals, as well as between humans and computers, reveal both the essence and the boundaries of language: from examining the role of sequence and contingency in human-animal interaction, to unravelling the challenges of natural interactions with “smart” speakers and language models. By bringing together disparate fields around foundational questions, we push the boundaries of linguistic inquiry and uncover new insights into what language is and how it functions in diverse non-human-exclusive contexts. -
Ravignani, A., & Herbst, C. T. (2023). Voices in the ocean: Toothed whales evolved a third way of making sounds similar to that of land mammals and birds. Science, 379(6635), 881-882. doi:10.1126/science.adg5256.
-
Raviv, L., & Kirby, S. (2023). Self domestication and the cultural evolution of language. In J. J. Tehrani, J. Kendal, & R. Kendal (Eds.), The Oxford Handbook of Cultural Evolution. Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198869252.013.60.
Abstract
The structural design features of human language emerge in the process of cultural evolution, shaping languages over the course of communication, learning, and transmission. What role does this leave for biological evolution? This chapter highlights the biological bases and preconditions that underlie the particular type of prosocial behaviours and cognitive inference abilities that are required for languages to emerge via cultural evolution to begin with. -
Raviv, L., Jacobson, S. L., Plotnik, J. M., Bowman, J., Lynch, V., & Benítez-Burraco, A. (2023). Elephants as an animal model for self-domestication. Proceedings of the National Academy of Sciences of the United States of America, 120(15): e2208607120. doi:10.1073/pnas.2208607120.
Abstract
Humans are unique in their sophisticated culture and societal structures, their complex languages, and their extensive tool use. According to the human self-domestication hypothesis, this unique set of traits may be the result of an evolutionary process of self-induced domestication, in which humans evolved to be less aggressive and more cooperative. However, the only other species that has been argued to be self-domesticated besides humans so far is bonobos, resulting in a narrow scope for investigating this theory limited to the primate order. Here, we propose an animal model for studying self-domestication: the elephant. First, we support our hypothesis with an extensive cross-species comparison, which suggests that elephants indeed exhibit many of the features associated with self-domestication (e.g., reduced aggression, increased prosociality, extended juvenile period, increased playfulness, socially regulated cortisol levels, and complex vocal behavior). Next, we present genetic evidence to reinforce our proposal, showing that genes positively selected in elephants are enriched in pathways associated with domestication traits and include several candidate genes previously associated with domestication. We also discuss several explanations for what may have triggered a self-domestication process in the elephant lineage. Our findings support the idea that elephants, like humans and bonobos, may be self-domesticated. Since the most recent common ancestor of humans and elephants is likely the most recent common ancestor of all placental mammals, our findings have important implications for convergent evolution beyond the primate taxa, and constitute an important advance toward understanding how and why self-domestication shaped humans’ unique cultural niche.
Additional information
supporting information -
Reesink, G. (2010). The difference a word makes. In K. A. McElhannon, & G. Reesink (Eds.), A mosaic of languages and cultures: Studies celebrating the career of Karl J. Franklin (pp. 434-446). Dallas, TX: SIL International.
Abstract
This paper offers some thoughts on the question what effect language has on the understanding and hence behavior of a human being. It reviews some issues of linguistic relativity, known as the “Sapir-Whorf hypothesis,” suggesting that the culture we grow up in is reflected in the language and that our cognition (and our worldview) is shaped or colored by the conventions developed by our ancestors and peers. This raises questions for the degree of translatability, illustrated by the comparison of two poems by a Dutch poet who spent most of his life in the USA. Mutual understanding, I claim, is possible because we have the cognitive apparatus that allows us to enter different emic systems. -
Reesink, G. (2010). Prefixation of arguments in West Papuan languages. In M. Ewing, & M. Klamer (Eds.), East Nusantara, typological and areal analyses (pp. 71-95). Canberra: Pacific Linguistics. -
Reesink, G. (2010). The Manambu language of East Sepik, Papua New Guinea [Book review]. Studies in Language, 34(1), 226-233. doi:10.1075/sl.34.1.13ree.
-
Reinisch, E. (2010). Processing the fine temporal structure of spoken words. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Reinisch, E., Jesse, A., & McQueen, J. M. (2010). Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately. Quarterly Journal of Experimental Psychology, 63(4), 772-783. doi:10.1080/17470210903104412.
Abstract
For optimal word recognition listeners should use all relevant acoustic information as soon as it comes available. Using printed-word eye-tracking we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as 'OCtopus' (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ('okTOber') before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than non-initially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal. -
Reinisch, E., Jesse, A., & Nygaard, L. C. (2010). Tone of voice helps learning the meaning of novel adjectives [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 114). York: University of York.
Abstract
To understand spoken words listeners have to cope with seemingly meaningless variability in the speech signal. Speakers vary, for example, their tone of voice (ToV) by changing speaking rate, pitch, vocal effort, and loudness. This variation is independent of "linguistic prosody" such as sentence intonation or speech rhythm. The variation due to ToV, however, is not random. Speakers use, for example, higher pitch when referring to small objects than when referring to large objects and importantly, adult listeners are able to use these non-lexical ToV cues to distinguish between the meanings of antonym pairs (e.g., big-small; Nygaard, Herold, & Namy, 2009). In the present study, we asked whether listeners infer the meaning of novel adjectives from ToV and subsequently interpret these adjectives according to the learned meaning even in the absence of ToV. Moreover, if listeners actually acquire these adjectival meanings, then they should generalize these word meanings to novel referents. ToV would thus be a semantic cue to lexical acquisition. This hypothesis was tested in an exposure-test paradigm with adult listeners. In the experiment listeners' eye movements to picture pairs were monitored. The picture pairs represented the endpoints of the adjectival dimensions big-small, hot-cold, and strong-weak (e.g., an elephant and an ant represented big-small). Four picture pairs per category were used. While viewing the pictures participants listened to lexically unconstraining sentences containing novel adjectives, for example, "Can you find the foppick one?" During exposure, the sentences were spoken in infant-directed speech with the intended adjectival meaning expressed by ToV. Word-meaning pairings were counterbalanced across participants. Each word was repeated eight times. Listeners had no explicit task. To guide listeners' attention to the relation between the words and pictures, three sets of filler trials were included that contained real English adjectives (e.g., full-empty). In the subsequent test phase participants heard the novel adjectives in neutral adult-directed ToV. Test sentences were recorded before the speaker was informed about intended word meanings. Participants had to choose which of two pictures on the screen the speaker referred to. Picture pairs that were presented during the exposure phase and four new picture pairs per category that varied along the critical dimensions were tested. During exposure listeners did not spontaneously direct their gaze to the intended referent at the first presentation. But as indicated by listener's fixation behavior, they quickly learned the relationship between ToV and word meaning over only two exposures. Importantly, during test participants consistently identified the intended referent object even in the absence of informative ToV. Learning was found for all three tested categories and did not depend on whether the picture pairs had been presented during exposure. Listeners thus use ToV not only to distinguish between antonym pairs but they are able to extract word meaning from ToV and assign this meaning to novel words. The newly learned word meanings can then be generalized to novel referents even in the absence of ToV cues. These findings suggest that ToV can be used as a semantic cue to lexical acquisition. References Nygaard, L. C., Herold, D. S., & Namy, L. L. (2009) The semantics of prosody: Acoustic and perceptual evidence of prosodic correlates to word meaning. Cognitive Science, 33. 127-146. -
Reis, A., Petersson, K. M., & Faísca, L. (2010). Neuroplasticidade: Os efeitos de aprendizagens específicas no cérebro humano. In C. Nunes, & S. N. Jesus (Eds.), Temas actuais em Psicologia (pp. 11-26). Faro: Universidade do Algarve. -
Reis, A., Faísca, L., Castro, S.-L., & Petersson, K. M. (2010). Preditores da leitura ao longo da escolaridade: Um estudo com alunos do 1 ciclo do ensino básico. In Actas do VII simpósio nacional de investigação em psicologia (pp. 3117-3132).
Abstract
Reading acquisition unfolds across several stages, from the moment the child first comes into contact with the alphabet until the moment he or she becomes a competent reader, able to read accurately and fluently. Understanding the development of this skill through an analysis of how the weight of predictor variables of reading changes makes it possible to theorize about the cognitive mechanisms involved in the different phases of reading development. We conducted a cross-sectional study with 568 students from the second to the fourth year of primary school (1st cycle of Ensino Básico), in which we assessed the impact of phonological processing abilities, rapid naming, letter-sound knowledge, and vocabulary, as well as more general cognitive abilities (non-verbal intelligence and working memory), on reading accuracy and speed. Overall, the results showed that, although phonological awareness remains the most important predictor of reading accuracy and fluency, its weight decreases as schooling progresses. We also observed that, as the contribution of phonological awareness to explaining reading speed decreased, the contribution of other variables more associated with automaticity and lexical recognition, such as rapid naming and vocabulary, increased. In sum, over the course of schooling there is a dynamic change in the cognitive processes underlying reading, which suggests that the child evolves from a reading strategy anchored in sub-lexical processing, and as such more dependent on phonological processing, to a strategy based on the orthographic recognition of words. -
Ringersma, J., Kastens, K., Tschida, U., & Van Berkum, J. J. A. (2010). A principled approach to online publication listings and scientific resource sharing. The Code4Lib Journal, 2010(9), 2520.
Abstract
The Max Planck Institute (MPI) for Psycholinguistics has developed a service to manage and present the scholarly output of its researchers. The PubMan database manages publication metadata and full texts of publications published by its scholars. All relevant information regarding a researcher’s work is brought together in this database, including supplementary materials and links to the MPI database for primary research data. The PubMan metadata is harvested into the MPI website CMS (Plone). The system developed for the creation of the publication lists allows the researcher to create a selection of the harvested data in a variety of formats. -
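As a rough illustration of the selection-and-formatting step described in this abstract, the following Python sketch filters a set of harvested publication records and renders the selection in two output formats. The Publication class, its fields, and the render targets are invented for the example; this is not the actual PubMan or Plone code.

from dataclasses import dataclass

@dataclass
class Publication:            # hypothetical record; real PubMan metadata is richer
    authors: str
    year: int
    title: str
    journal: str

def render(records, year, fmt="text"):
    """Select the records for one year and render them in the requested format."""
    selected = [r for r in records if r.year == year]
    if fmt == "text":
        return [f"{r.authors} ({r.year}). {r.title}. {r.journal}." for r in selected]
    if fmt == "html":
        return [f"<li>{r.authors} ({r.year}). <i>{r.title}</i>. {r.journal}.</li>" for r in selected]
    raise ValueError(f"unknown format: {fmt}")

records = [Publication("Ringersma, J. et al.", 2010,
                       "A principled approach to online publication listings",
                       "The Code4Lib Journal")]
print(render(records, 2010, fmt="html"))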
Ringersma, J., Zinn, C., & Koenig, A. (2010). Eureka! User friendly access to the MPI linguistic data archive. SDV - Sprache und Datenverarbeitung/International Journal for Language Data Processing. [Special issue on Usability aspects of hypermedia systems], 34(1), 67-79.
Abstract
The MPI archive hosts a rich and diverse set of linguistic resources, containing some 300,000 audio, video and text resources, which are described by some 100,000 metadata files. New data is ingested on a daily basis, and there is an increasing need to facilitate easy access for both expert and novice users. In this paper, we describe various tools that help users to view all archived content: the IMDI Browser, providing metadata-based access through structured tree navigation and search; a faceted browser where users select from a few distinctive metadata fields (facets) to find the resource(s) they need; a Google Earth overlay where resources can be located via geographic reference; purpose-built web portals giving pre-fabricated access to a well-defined part of the archive; lexicon-based entry points to parts of the archive where browsing a lexicon gives access to non-linguistic material; and finally, an ontology-based approach where lexical spaces are complemented with conceptual ones to give a more structured extra-linguistic view of the languages and cultures it helps to document. -
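The faceted browsing mentioned in this abstract amounts to filtering metadata records on a few fields at a time. The minimal Python sketch below illustrates that idea; the field names and records are invented and do not reflect the IMDI metadata schema.

records = [                       # invented metadata records, not real IMDI files
    {"id": "r1", "language": "Dutch", "genre": "narrative", "modality": "video"},
    {"id": "r2", "language": "Tzeltal", "genre": "conversation", "modality": "audio"},
    {"id": "r3", "language": "Dutch", "genre": "conversation", "modality": "audio"},
]

def facet_filter(records, **facets):
    """Keep only the records whose metadata match every requested facet value."""
    return [r for r in records
            if all(r.get(field) == value for field, value in facets.items())]

print(facet_filter(records, language="Dutch", modality="audio"))   # keeps r3 only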
Ringersma, J., & Kemps-Snijders, M. (2010). Reaction to the LEXUS review in the LD&C, Vol.3, No 2. Language Documentation & Conservation, 4(2), 75-77. Retrieved from http://hdl.handle.net/10125/4469.
Abstract
This technology review gives an overview of LEXUS, the MPI online lexicon tool, and its new functionalities. It is a reaction to a review by Kristina Kotcheva in Language Documentation and Conservation 3(2). -
Roberts, L., Howard, M., O'Laorie, M., & Singleton, D. (Eds.). (2010). EUROSLA Yearbook 10. Amsterdam: John Benjamins.
Abstract
The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English. -
Roberts, L. (2010). Parsing the L2 input, an overview: Investigating L2 learners’ processing of syntactic ambiguities and dependencies in real-time comprehension. In G. D. Véronique (Ed.), Language, Interaction and Acquisition [Special issue] (pp. 189-205). Amsterdam: Benjamins.
Abstract
The acquisition of second language (L2) syntax has been central to the study of L2 acquisition, but recently there has been an interest in how learners apply their L2 syntactic knowledge to the input in real-time comprehension. Investigating L2 learners’ moment-by-moment syntactic analysis during listening or reading of a sentence as it unfolds — their parsing of the input — is important, because language learning involves both the acquisition of knowledge and the ability to use it in real time. Using methods employed in monolingual processing research, investigations often focus on the processing of temporary syntactic ambiguities and structural dependencies. Investigating ambiguities involves examining parsing decisions at points in a sentence where there is a syntactic choice, and this can offer insights into the nature of the parsing mechanism, and in particular, its processing preferences. Studying the establishment of syntactic dependencies at the critical point in the input allows for an investigation of how and when different kinds of information (e.g., syntactic, semantic, pragmatic) are put to use in real-time interpretation. Within an L2 context, further questions are of interest and familiar from traditional L2 acquisition research. Specifically, how native-like are the parsing procedures that L2 learners apply when processing the L2 input? What is the role of the learner’s first language (L1)? And, what are the effects of individual factors such as age, proficiency/dominance and working memory on L2 parsing? In the current paper I will provide an overview of the findings of some experimental research designed to investigate these questions. -
Roe, J. M., Vidal-Piñeiro, D., Amlien, I. K., Pan, M., Sneve, M. H., Thiebaut de Schotten, M., Friedrich, P., Sha, Z., Francks, C., Eilertsen, E. M., Wang, Y., Walhovd, K. B., Fjell, A. M., & Westerhausen, R. (2023). Tracing the development and lifespan change of population-level structural asymmetry in the cerebral cortex. eLife, 12: e84685. doi:10.7554/eLife.84685.
Abstract
Cortical asymmetry is a ubiquitous feature of brain organization that is altered in neurodevelopmental disorders and aging. Achieving consensus on cortical asymmetries in humans is necessary to uncover the genetic-developmental mechanisms that shape them and factors moderating cortical lateralization. Here, we delineate population-level asymmetry in cortical thickness and surface area vertex-wise in 7 datasets and chart asymmetry trajectories across life (4-89 years; observations = 3937; 70% longitudinal). We reveal asymmetry interrelationships, heritability, and test associations in UK Biobank (N = ∼37,500). Cortical asymmetry was robust across datasets. Whereas areal asymmetry is predominantly stable across life, thickness asymmetry grows in development and declines in aging. Areal asymmetry correlates in specific regions, whereas thickness asymmetry is globally interrelated across cortex and suggests high directional variability in global thickness lateralization. Areal asymmetry is moderately heritable (max h²SNP ∼19%), and phenotypic correlations are reflected by high genetic correlations, whereas heritability of thickness asymmetry is low. Finally, we detected an asymmetry association with cognition and confirm recently-reported handedness links. Results suggest areal asymmetry is developmentally stable and arises in early life, whereas developmental changes in thickness asymmetry may lead to directional variability of global thickness lateralization. Our results bear enough reproducibility to serve as a standard for future brain asymmetry studies. -
Roll, P., Vernes, S. C., Bruneau, N., Cillario, J., Ponsole-Lenfant, M., Massacrier, A., Rudolf, G., Khalife, M., Hirsch, E., Fisher, S. E., & Szepetowski, P. (2010). Molecular networks implicated in speech-related disorders: FOXP2 regulates the SRPX2/uPAR complex. Human Molecular Genetics, 19, 4848-4860. doi:10.1093/hmg/ddq415.
Abstract
It is a challenge to identify the molecular networks contributing to the neural basis of human speech. Mutations in transcription factor FOXP2 cause difficulties mastering fluent speech (developmental verbal dyspraxia, DVD), while mutations of sushi-repeat protein SRPX2 lead to epilepsy of the rolandic (sylvian) speech areas, with DVD or with bilateral perisylvian polymicrogyria. Pathophysiological mechanisms driven by SRPX2 involve modified interaction with the plasminogen activator receptor (uPAR). Independent chromatin-immunoprecipitation microarray screening has identified the uPAR gene promoter as a potential target site bound by FOXP2. Here, we directly tested for the existence of a transcriptional regulatory network between human FOXP2 and the SRPX2/uPAR complex. In silico searches followed by gel retardation assays identified specific efficient FOXP2 binding sites in each of the promoter regions of SRPX2 and uPAR. In FOXP2-transfected cells, significant decreases were observed in the amounts of both SRPX2 (43.6%) and uPAR (38.6%) native transcripts. Luciferase reporter assays demonstrated that FOXP2 expression yielded marked inhibition of SRPX2 (80.2%) and uPAR (77.5%) promoter activity. A mutant FOXP2 that causes DVD (p.R553H) failed to bind to SRPX2 and uPAR target sites, and showed impaired down-regulation of SRPX2 and uPAR promoter activity. In a patient with polymicrogyria of the left rolandic operculum, a novel FOXP2 mutation (p.M406T) was found in the leucine-zipper (dimerization) domain. p.M406T partially impaired FOXP2 regulation of SRPX2 promoter activity, while that of the uPAR promoter remained unchanged. Together with recently described FOXP2-CNTNAP2 and SRPX2/uPAR links, the FOXP2-SRPX2/uPAR network provides exciting insights into molecular pathways underlying speech-related disorders.
Additional information
Roll_et_al_2010_Suppl_Material.doc -
Roos, N. M., Takashima, A., & Piai, V. (2023). Functional neuroanatomy of lexical access in contextually and visually guided spoken word production. Cortex, 159, 254-267. doi:10.1016/j.cortex.2022.10.014.
Abstract
Lexical access is commonly studied using bare picture naming, which is visually guided, but in real-life conversation, lexical access is more commonly contextually guided. In this fMRI study, we examined the underlying functional neuroanatomy of contextually and visually guided lexical access, and its consistency across sessions. We employed a context-driven picture naming task with fifteen healthy speakers reading incomplete sentences (word-by-word) and subsequently naming the picture depicting the final word. Sentences provided either a constrained or unconstrained lead-in setting for the picture to be named, thereby approximating lexical access in natural language use. The picture name could be planned either through sentence context (constrained) or picture appearance (unconstrained). This procedure was repeated in an equivalent second session two to four weeks later with the same sample to test for test-retest consistency. Picture naming times showed a strong context effect, confirming that constrained sentences speed up production of the final word depicted as an image. fMRI results showed that the areas common to contextually and visually guided lexical access were left fusiform and left inferior frontal gyrus (both consistently active across sessions), and middle temporal gyrus. However, non-overlapping patterns were also found, notably in the left temporal and parietal cortices, suggesting a different neural circuit for contextually versus visually guided lexical access.
Additional information
supplementary material -
Rossano, F. (2010). Questioning and responding in Italian. Journal of Pragmatics, 42, 2756-2771. doi:10.1016/j.pragma.2010.04.010.
Abstract
Questions are design problems for both the questioner and the addressee. They must be produced as recognizable objects and must be comprehended by taking into account the context in which they occur and the local situated interests of the participants. This paper investigates how people do ‘questioning’ and ‘responding’ in Italian ordinary conversations. I focus on the features of both questions and responses. I first discuss formal linguistic features that are peculiar to questions in terms of intonation contours (e.g. final rise), morphology (e.g. tags and question words) and syntax (e.g. inversion). I then show additional features that characterize their actual implementation in conversation such as their minimality (often the subject or the verb is only implied) and the usual occurrence of speaker gaze towards the recipient during questions. I then look at which social actions (e.g. requests for information, requests for confirmation) the different question types implement and which responses are regularly produced in return. The data shows that previous descriptions of “interrogative markings” are neither adequate nor sufficient to comprehend the actual use of questions in natural conversation. -
Rossi, E., Pereira Soares, S. M., Prystauka, Y., Nakamura, M., & Rothman, J. (2023). Riding the (brain) waves! Using neural oscillations to inform bilingualism research. Bilingualism: Language and Cognition, 26(1), 202-215. doi:10.1017/S1366728922000451.
Abstract
The study of the brain's oscillatory activity has been a standard technique to gain insights into human neurocognition for a relatively long time. However, as a complementary analysis to ERPs, only very recently has it been utilized to study bilingualism and its neural underpinnings. Here, we provide a theoretical and methodological starter for scientists in the (psycho)linguistics and neurocognition of bilingualism field(s) to understand the bases and applications of this analytical tool. Towards this goal, we provide a description of the characteristics of the human neural (and its oscillatory) signal, followed by an in-depth description of various types of EEG oscillatory analyses, supplemented by figures and relevant examples. We then utilize the scant, yet emergent, literature on neural oscillations and bilingualism to highlight how analyzing neural oscillations can advance the (psycho)linguistic and neurocognitive understanding of bilingualism. -
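As a pointer for readers new to this family of analyses, the sketch below shows one common oscillatory measure, time-frequency power obtained by convolving a signal with complex Morlet wavelets. The sampling rate, wavelet parameters, and synthetic signal are assumptions made purely for illustration and are not taken from the paper.

import numpy as np

fs = 250                               # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)            # 4 s of synthetic "EEG": alpha burst plus noise
signal = np.sin(2 * np.pi * 10 * t) * (t > 2) + 0.5 * np.random.randn(t.size)

def morlet_power(x, freq, fs, n_cycles=7):
    """Power over time at one frequency, via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt ** 2 / (2 * sigma_t ** 2))
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

freqs = np.arange(2, 41, 2)                                    # 2-40 Hz in 2 Hz steps
tfr = np.array([morlet_power(signal, f, fs) for f in freqs])   # frequencies x time points
print(tfr.shape)                                               # (20, 1000)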
Rossi, G., Dingemanse, M., Floyd, S., Baranova, J., Blythe, J., Kendrick, K. H., Zinken, J., & Enfield, N. J. (2023). Shared cross-cultural principles underlie human prosocial behavior at the smallest scale. Scientific Reports, 13: 6057. doi:10.1038/s41598-023-30580-5.
Abstract
Prosociality and cooperation are key to what makes us human. But different cultural norms can shape our evolved capacities for interaction, leading to differences in social relations. How people share resources has been found to vary across cultures, particularly when stakes are high and when interactions are anonymous. Here we examine prosocial behavior among familiars (both kin and non-kin) in eight cultures on five continents, using video recordings of spontaneous requests for immediate, low-cost assistance (e.g., to pass a utensil). We find that, at the smallest scale of human interaction, prosocial behavior follows cross-culturally shared principles: requests for assistance are very frequent and mostly successful; and when people decline to give help, they normally give a reason. Although there are differences in the rates at which such requests are ignored, or require verbal acceptance, cultural variation is limited, pointing to a common foundation for everyday cooperation around the world.
Additional information
Rossi et al. - 2023 - Supplementary materials.pdf -
Rossi, G. (2010). Interactive written discourse: Pragmatic aspects of SMS communication. In G. Garzone, P. Catenaccio, & C. Degano (Eds.), Diachronic perspectives on genres in specialized communication. Conference Proceedings (pp. 135-138). Milano: CUEM. -
Ruano, D., Abecasis, G. R., Glaser, B., Lips, E. S., Cornelisse, L. N., de Jong, A. P. H., Evans, D. M., Davey Smith, G., Timpson, N. J., Smit, A. B., Heutink, P., Verhage, M., & Posthuma, D. (2010). Functional gene group analysis reveals a role of synaptic heterotrimeric G proteins in cognitive ability. American Journal of Human Genetics, 86(2), 113-125. doi:10.1016/j.ajhg.2009.12.006.
Abstract
Although cognitive ability is a highly heritable complex trait, only a few genes have been identified, explaining relatively low proportions of the observed trait variation. This implies that hundreds of genes of small effect may be of importance for cognitive ability. We applied an innovative method in which we tested for the effect of groups of genes defined according to cellular function (functional gene group analysis). Using an initial sample of 627 subjects, this functional gene group analysis detected that synaptic heterotrimeric guanine nucleotide binding proteins (G proteins) play an important role in cognitive ability (P(EMP) = 1.9 × 10⁻⁴). The association with heterotrimeric G proteins was validated in an independent population sample of 1507 subjects. Heterotrimeric G proteins are central relay factors between the activation of plasma membrane receptors by extracellular ligands and the cellular responses that these induce, and they can be considered a point of convergence, or a "signaling bottleneck." Although alterations in synaptic signaling processes may not be the exclusive explanation for the association of heterotrimeric G proteins with cognitive ability, such alterations may prominently affect the properties of neuronal networks in the brain in such a manner that impaired cognitive ability and lower intelligence are observed. The reported association of synaptic heterotrimeric G proteins with cognitive ability clearly points to a new direction in the study of the genetic basis of cognitive ability.
Additional information
http://www.sciencedirect.com/science/article/pii/S0002929709005679#appd002 -
Rueschemeyer, S.-A., van Rooij, D., Lindemann, O., Willems, R. M., & Bekkering, H. (2010). The function of words: Distinct neural correlates for words denoting differently manipulable objects. Journal of Cognitive Neuroscience, 22, 1844-1851. doi:10.1162/jocn.2009.21310.
Abstract
Recent research indicates that language processing relies on brain areas dedicated to perception and action. For example, processing words denoting manipulable objects has been shown to activate a fronto-parietal network involved in actual tool use. This is suggested to reflect the knowledge the subject has about how objects are moved and used. However, information about how to use an object may be much more central to the conceptual representation of an object than information about how to move an object. Therefore, there may be much more fine-grained distinctions between objects on the neural level, especially related to the usability of manipulable objects. In the current study, we investigated whether a distinction can be made between words denoting (1) objects that can be picked up to move (e.g., volumetrically manipulable objects: bookend, clock) and (2) objects that must be picked up to use (e.g., functionally manipulable objects: cup, pen). The results show that functionally manipulable words elicit greater levels of activation in the fronto-parietal sensorimotor areas than volumetrically manipulable words. This suggests that indeed a distinction can be made between different types of manipulable objects. Specifically, how an object is used functionally rather than whether an object can be displaced with the hand is reflected in semantic representations in the brain. -
De Ruiter, L. E. (2010). Studies on intonation and information structure in child and adult German. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Exploring the cognitive infrastructure of communication. Interaction studies, 11, 51-77. doi:10.1075/is.11.1.05rui.
Abstract
Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication. -
Rutz, C., Bronstein, M., Raskin, A., Vernes, S. C., Zacarian, K., & Blasi, D. E. (2023). Using machine learning to decode animal communication. Science, 381(6654), 152-155. doi:10.1126/science.adg7314.
Abstract
The past few years have seen a surge of interest in using machine learning (ML) methods for studying the behavior of nonhuman animals (hereafter “animals”) (1). A topic that has attracted particular attention is the decoding of animal communication systems using deep learning and other approaches (2). Now is the time to tackle challenges concerning data availability, model validation, and research ethics, and to embrace opportunities for building collaborations across disciplines and initiatives. -
Ryskin, R., & Nieuwland, M. S. (2023). Prediction during language comprehension: What is next? Trends in Cognitive Sciences, 27(11), 1032-1052. doi:10.1016/j.tics.2023.08.003.
Abstract
Prediction is often regarded as an integral aspect of incremental language comprehension, but little is known about the cognitive architectures and mechanisms that support it. We review studies showing that listeners and readers use all manner of contextual information to generate multifaceted predictions about upcoming input. The nature of these predictions may vary between individuals owing to differences in language experience, among other factors. We then turn to unresolved questions which may guide the search for the underlying mechanisms. (i) Is prediction essential to language processing or an optional strategy? (ii) Are predictions generated from within the language system or by domain-general processes? (iii) What is the relationship between prediction and memory? (iv) Does prediction in comprehension require simulation via the production system? We discuss promising directions for making progress in answering these questions and for developing a mechanistic understanding of prediction in language. -
Sadakata, M., Van der Zanden, L., & Sekiyama, K. (2010). Influence of musical training on perception of L2 speech. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 118-121).
Abstract
The current study reports specific cases in which a positive transfer of perceptual ability from the music domain to the language domain occurs. We tested whether musical training enhances discrimination and identification performance of L2 speech sounds (timing features, nasal consonants and vowels). Native Dutch and Japanese speakers with different musical training experience, matched for their estimated verbal IQ, participated in the experiments. Results indicated that musical training strongly increases one’s ability to perceive timing information in speech signals. We also found a benefit of musical training on discrimination performance for a subset of the tested vowel contrasts. -
Sajovic, J., Meglič, A., Corradi, Z., Khan, M., Maver, A., Vidmar, M. J., Hawlina, M., Cremers, F. P. M., & Fakin, A. (2023). ABCA4 variant c.5714+5G>A in trans with null alleles results in primary RPE damage. Investigative Ophthalmology & Visual Science, 64(12): 33. doi:10.1167/iovs.64.12.33.
Abstract
Purpose: To determine the disease pathogenesis associated with the frequent ABCA4 variant c.5714+5G>A (p.[=,Glu1863Leufs*33]).
Methods: Patient-derived photoreceptor precursor cells were generated to analyze the effect of c.5714+5G>A on splicing and perform a quantitative analysis of c.5714+5G>A products. Patients with c.5714+5G>A in trans with a null allele (i.e., c.5714+5G>A patients; n = 7) were compared with patients with two null alleles (i.e., double null patients; n = 11), with special attention to the degree of RPE atrophy (area of definitely decreased autofluorescence) and the degree of photoreceptor impairment (outer nuclear layer thickness and pattern electroretinography amplitude).
Results: RT-PCR of mRNA from patient-derived photoreceptor precursor cells showed exon 40 and exon 39/40 deletion products, as well as the normal transcript. Quantification of products showed 52.4% normal and 47.6% mutant ABCA4 mRNA. Clinically, c.5714+5G>A patients displayed significantly better structural and functional preservation of photoreceptors (thicker outer nuclear layer, presence of tubulations, higher pattern electroretinography amplitude) than double null patients with similar degrees of RPE loss, whereas double null patients exhibited signs of extensive photoreceptor damage even in the areas with preserved RPE.
Conclusions: The prototypical STGD1 sequence of events of primary RPE and secondary photoreceptor damage is congruous with c.5714+5G>A, but not the double null genotype, which implies different and genotype-dependent disease mechanisms. We hypothesize that the relative photoreceptor sparing in c.5714+5G>A patients results from the remaining function of the ABCA4 transporter originating from the normally spliced product, possibly by decreasing the direct bisretinoid toxicity on photoreceptor membranes. -
Salomo, D., Lieven, E., & Tomasello, M. (2010). Young children's sensitivity to new and given information when answering predicate-focus questions. Applied Psycholinguistics, 31, 101-115. doi:10.1017/S014271640999018X.
Abstract
In two studies we investigated 2-year-old children's answers to predicate-focus questions depending on the preceding context. Children were presented with a successive series of short video clips showing transitive actions (e.g., frog washing duck) in which either the action (action-new) or the patient (patient-new) was the changing, and therefore new, element. During the last scene the experimenter asked the question (e.g., “What's the frog doing now?”). We found that children expressed the action and the patient in the patient-new condition but expressed only the action in the action-new condition. These results show that children are sensitive to both the predicate-focus question and newness in context. A further finding was that children expressed new patients in their answers more often when there was a verbal context prior to the questions than when there was not. -
San Roque, L., & Norcliffe, E. (2010). Knowledge asymmetries in grammar and interaction. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529153. -
Sander, J., Lieberman, A., & Rowland, C. F. (2023). Exploring joint attention in American Sign Language: The influence of sign familiarity. In M. Goldwater, F. K. Anggoro, B. K. Hayes, & D. C. Ong (Eds.), Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023) (pp. 632-638).
Abstract
Children’s ability to share attention with another social partner (i.e., joint attention) has been found to support language development. Despite the large amount of research examining the effects of joint attention on language in hearing populations, little is known about how deaf children learning sign languages achieve joint attention with their caregivers during natural social interaction and how caregivers provide and scaffold learning opportunities for their children. The present study investigates the properties and timing of joint attention surrounding familiar and novel naming events and their relationship to children’s vocabulary. Naturalistic play sessions of caregiver-child dyads using American Sign Language were analyzed with regard to naming events involving either familiar or novel object labels and the surrounding joint attention events. We observed that most naming events took place in the context of a successful joint attention event and that sign familiarity was related to the timing of naming events within the joint attention events. Our results suggest that caregivers are highly sensitive to their child’s visual attention in interactions and modulate joint attention differently in the context of naming events of familiar vs. novel object labels. -
Sauter, D. (2010). Can introspection teach us anything about the perception of sounds? [Book review]. Perception, 39, 1300-1302. doi:10.1068/p3909rvw.
Abstract
Reviews the book, Sounds and Perception: New Philosophical Essays edited by Matthew Nudds and Casey O'Callaghan (2010). This collection of thought-provoking philosophical essays contains chapters on particular aspects of sound perception, as well as a series of essays focusing on the issue of sound location. The chapters on specific topics include several perspectives on how we hear speech, one of the most well-studied aspects of auditory perception in empirical research. Most of the book consists of a series of essays approaching the experience of hearing sounds by focusing on where sounds are in space. An impressive range of opinions on this issue is presented, likely thanks to the fact that the book's editors represent dramatically different viewpoints. The wave based view argues that sounds are located near the perceiver, although the sounds also provide information about objects around the listener, including the source of the sound. In contrast, the source based view holds that sounds are experienced as near or at their sources. The editors acknowledge that additional methods should be used in conjunction with introspection, but they argue that theories of perceptual experience should nevertheless respect phenomenology. With such a range of views derived largely from the same introspective methodology, it remains unresolved which phenomenological account is to be respected. -
Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of the National Academy of Sciences, 107(6), 2408-2412. doi:10.1073/pnas.0908239106.
Abstract
Emotional signals are crucial for sharing important information with conspecifics, for example, to warn humans of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.
Additional information
http://www.pnas.org/content/early/2010/01/11/0908239106/suppl/DCSupplemental -
Sauter, D. (2010). Are positive vocalizations perceived as communicating happiness across cultural boundaries? [Article addendum]. Communicative & Integrative Biology, 3(5), 440-442. doi:10.4161/cib.3.5.12209.
Abstract
Laughter communicates a feeling of enjoyment across cultures, while non-verbal vocalizations of several other positive emotions, such as achievement or sensual pleasure, are recognizable only within, but not across, cultural boundaries. Are these positive vocalizations nevertheless interpreted cross-culturally as signaling positive affect? In a match-to-sample task, positive emotional vocal stimuli were paired with positive and negative facial expressions by English participants and members of the Himba, a semi-nomadic, culturally isolated Namibian group. The results showed that laughter was associated with a smiling facial expression across both groups, consistent with previous work showing that human laughter is a positive, social signal with deep evolutionary roots. However, non-verbal vocalizations of achievement, sensual pleasure, and relief were not cross-culturally associated with smiling facial expressions, perhaps indicating that these types of vocalizations are not cross-culturally interpreted as communicating a positive emotional state, or alternatively that these emotions are associated with positive facial expressions other than smiling. These results are discussed in the context of positive emotional communication in vocal and facial signals. -
Sauter, D. (2010). More than happy: The need for disentangling positive emotions. Current Directions in Psychological Science, 19, 36-40. doi:10.1177/0963721409359290.
Abstract
Despite great advances in scientific understanding of emotional processes in the last decades, research into the communication of emotions has been constrained by a strong bias toward negative affective states. Typically, studies distinguish between different negative emotions, such as disgust, sadness, anger, and fear. In contrast, most research uses only one category of positive affect, “happiness,” which is assumed to encompass all positive emotional states. This article reviews recent research showing that a number of positive affective states have discrete, recognizable signals. An increased focus on cues other than facial expressions is necessary to understand these positive states and how they are communicated; vocalizations, touch, and postural information offer promising avenues for investigating signals of positive affect. A full scientific understanding of the functions, signals, and mechanisms of emotions requires abandoning the unitary concept of happiness and instead disentangling positive emotions. -
Sauter, D. (2010). Non-verbal emotional vocalizations across cultures [Abstract]. In E. Zimmermann, & E. Altenmüller (Eds.), Evolution of emotional communication: From sounds in nonhuman mammals to speech and music in man (pp. 15). Hannover: University of Veterinary Medicine Hannover.
Abstract
Despite differences in language, culture, and ecology, some human characteristics are similar in people all over the world, while other features vary from one group to the next. These similarities and differences can inform arguments about what aspects of the human mind are part of our shared biological heritage and which are predominantly products of culture and language. I will present data from a cross-cultural project investigating the recognition of non-verbal vocalizations of emotions, such as screams and laughs, across two highly different cultural groups. English participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognised. In contrast, a set of additional positive emotions was only recognised within, but not across, cultural boundaries. These results indicate that a number of primarily negative emotions are associated with vocalizations that can be recognised across cultures, while at least some positive emotions are communicated with culture-specific signals. I will discuss these findings in the context of accounts of emotions at differing levels of analysis, with an emphasis on the often-neglected positive emotions. -
Sauter, D., Eisner, F., Calder, A. J., & Scott, S. K. (2010). Perceptual cues in nonverbal vocal expressions of emotion. Quarterly Journal of Experimental Psychology, 63(11), 2251-2272. doi:10.1080/17470211003721642.
Abstract
Work on facial expressions of emotions (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotionally inflected speech (Banse & Scherer, 1996) has successfully delineated some of the physical properties that underlie emotion recognition. To identify the acoustic cues used in the perception of nonverbal emotional expressions like laughter and screams, an investigation was conducted into vocal expressions of emotion, using nonverbal vocal analogues of the “basic” emotions (anger, fear, disgust, sadness, and surprise; Ekman & Friesen, 1971; Scott et al., 1997), and of positive affective states (Ekman, 1992, 2003; Sauter & Scott, 2007). First, the emotional stimuli were categorized and rated to establish that listeners could identify and rate the sounds reliably and to provide confusion matrices. A principal components analysis of the rating data yielded two underlying dimensions, correlating with the perceived valence and arousal of the sounds. Second, acoustic properties of the amplitude, pitch, and spectral profile of the stimuli were measured. A discriminant analysis procedure established that these acoustic measures provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Multiple linear regressions with participants' subjective ratings of the acoustic stimuli showed that all classes of emotional ratings could be predicted by some combination of acoustic measures and that most emotion ratings were predicted by different constellations of acoustic features. The results demonstrate that, similarly to affective signals in facial expressions and emotionally inflected speech, the perceived emotional character of affective vocalizations can be predicted on the basis of their physical features. -
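A minimal sketch of the two statistical steps named in this abstract (a principal components analysis of rating data and a discriminant analysis of acoustic measures), using simulated data and the scikit-learn library, is given below. The feature counts, category labels, and data are invented; this is not the authors' actual analysis pipeline.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stimuli = 100
ratings = rng.normal(size=(n_stimuli, 10))     # e.g., 10 rating scales per vocalization
acoustics = rng.normal(size=(n_stimuli, 6))    # e.g., amplitude, pitch, spectral measures
emotion = rng.integers(0, 5, size=n_stimuli)   # 5 hypothetical emotion categories

# Step 1: reduce the rating scales to a few underlying dimensions (cf. valence and arousal).
components = PCA(n_components=2).fit_transform(ratings)

# Step 2: test whether the acoustic measures discriminate between the emotion categories.
accuracy = cross_val_score(LinearDiscriminantAnalysis(), acoustics, emotion, cv=5).mean()
print(components.shape, round(accuracy, 2))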
Sauter, D., & Eimer, M. (2010). Rapid detection of emotion from human vocalizations. Journal of Cognitive Neuroscience, 22, 474-481. doi:10.1162/jocn.2009.21215.
Abstract
The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations, such as screams. An early fronto-central positivity to fearful vocalizations compared with spectrally rotated and thus acoustically matched versions of the same sounds started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals. -
Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Reply to Gewald: Isolated Himba settlements still exist in Kaokoland [Letter to the editor]. Proceedings of the National Academy of Sciences of the United States of America, 107(18), E76. doi:10.1073/pnas.1002264107.
Abstract
We agree with Gewald (1) that historical and anthropological accounts are essential tools for understanding the Himba culture, and these accounts are valuable to both us and him. However, we contest his claim that the Himba individuals in our study were not culturally isolated. Gewald (1) claims that it would be “unlikely” that the Himba people with whom we worked had “not been exposed to the affective signals of individuals from cultural groups other than their own” as stated in our paper (2). Gewald (1) seems to argue that, because outside groups have had contact with some Himba, this means that these events affected all Himba. Yet, the Himba constitute a group of 20,000-50,000 people (3) living in small settlements scattered across the vast Kaokoland region, an area of 49,000 km² (4). -
Sauter, D., Crasborn, O., & Haun, D. B. M. (2010). The role of perceptual learning in emotional vocalizations [Abstract]. In C. Douilliez, & C. Humez (Eds.), Third European Conference on Emotion 2010. Proceedings (pp. 39-39). Lille: Université de Lille.
Abstract
Many studies suggest that emotional signals can be recognized across cultures and modalities. But to what extent are these signals innate and to what extent are they learned? This study investigated whether auditory learning is necessary for the production of recognizable emotional vocalizations by examining the vocalizations produced by people born deaf. Recordings were made of eight congenitally deaf Dutch individuals, who produced non-verbal vocalizations of a range of negative and positive emotions. Perception was examined in a forced-choice task with hearing Dutch listeners (n = 25). Considerable variability was found across emotions, suggesting that auditory learning is more important for the acquisition of certain types of vocalizations than for others. In particular, achievement and surprise sounds were relatively poorly recognized. In contrast, amusement and disgust vocalizations were well recognized, suggesting that for some emotions, recognizable vocalizations can develop without any auditory learning. The implications of these results for models of emotional communication are discussed, and other routes of social learning available to the deaf individuals are considered. -
Sauter, D., Crasborn, O., & Haun, D. B. M. (2010). The role of perceptual learning in emotional vocalizations [Abstract]. Journal of the Acoustical Society of America, 128, 2476.
Abstract
Vocalizations like screams and laughs are used to communicate affective states, but what acoustic cues in these signals require vocal learning and which ones are innate? This study investigated the role of auditory learning in the production of non-verbal emotional vocalizations by examining the vocalizations produced by people born deaf. Recordings were made of congenitally deaf Dutch individuals and matched hearing controls, who produced non-verbal vocalizations of a range of negative and positive emotions. Perception was examined in a forced-choice task with hearing Dutch listeners (n = 25), and judgments were analyzed together with acoustic cues, including envelope, pitch, and spectral measures. Considerable variability was found across emotions and acoustic cues, and the two types of information were related for a sub-set of the emotion categories. These results suggest that auditory learning is less important for the acquisition of certain types of vocalizations than for others (particularly amusement and relief), and they also point to a less central role for auditory learning of some acoustic features in affective non-verbal vocalizations. The implications of these results for models of vocal emotional communication are discussed. -
Sauter, D., & Levinson, S. C. (2010). What's embodied in a smile? [Comment on Niedenthal et al.]. Behavioral and Brain Sciences, 33, 457-458. doi:10.1017/S0140525X10001597.
Abstract
Differentiation of the forms and functions of different smiles is needed, but it should be based on empirical data on the distinctions that senders and receivers make and the physical cues that are employed. Such data would allow for a test of whether smiles can be differentiated using perceptual cues alone or whether mimicry or simulation are necessary. -
Schäfer, M., & Haun, D. B. M. (2010). Sharing among children across cultures. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 45-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529154. -
Scharenborg, O., & Boves, L. (2010). Computational modelling of spoken-word recognition processes: Design choices and evaluation. Pragmatics & Cognition, 18, 136-164. doi:10.1075/pc.18.1.06sch.
Abstract
Computational modelling has proven to be a valuable approach in developing theories of spoken-word processing. In this paper, we focus on a particular class of theories in which it is assumed that the spoken-word recognition process consists of two consecutive stages, with an 'abstract' discrete symbolic representation at the interface between the stages. In evaluating computational models, it is important to bring in independent arguments for the cognitive plausibility of the algorithms that are selected to compute the processes in a theory. This paper discusses the relation between behavioural studies, theories, and computational models of spoken-word recognition. We explain how computational models can be assessed in terms of the goodness of fit with the behavioural data and the cognitive plausibility of the algorithms. An in-depth analysis of several models provides insights into how computational modelling has led to improved theories and to a better understanding of the human spoken-word recognition process. -
Scharenborg, O. (2010). Modeling the use of durational information in human spoken-word recognition. Journal of the Acoustical Society of America, 127, 3758-3770. doi:10.1121/1.3377050.
Abstract
Evidence that listeners, at least in a laboratory environment, use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past decades. This paper introduces Fine-Tracker, a computational model of word recognition specifically designed for tracking fine-phonetic information in the acoustic speech signal and using it during word recognition. Two simulations were carried out using real speech as input to the model. The simulations showed that Fine-Tracker, as has been found for humans, benefits from durational information during word recognition, and uses it to disambiguate the incoming speech signal. The availability of durational information allows the computational model to distinguish embedded words from their matrix words (first simulation), and to distinguish word-final realizations of /s/ from word-initial realizations (second simulation). Fine-Tracker thus provides the first computational model of human word recognition that is able to extract durational information from the speech signal and to use it to differentiate words. -
Scharenborg, O., Wan, V., & Ernestus, M. (2010). Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries. Journal of the Acoustical Society of America, 127, 1084-1095. doi:10.1121/1.3277194.
Abstract
Despite using different algorithms, most unsupervised automatic phone segmentation methods achieve similar performance in terms of percentage correct boundary detection. Nevertheless, unsupervised segmentation algorithms are not able to perfectly reproduce manually obtained reference transcriptions. This paper investigates fundamental problems for unsupervised segmentation algorithms by comparing a phone segmentation obtained using only the acoustic information present in the signal with a reference segmentation created by human transcribers. The analyses of the output of an unsupervised speech segmentation method that uses acoustic change to hypothesize boundaries showed that acoustic change is a fairly good indicator of segment boundaries: over two-thirds of the hypothesized boundaries coincide with segment boundaries. Statistical analyses showed that the errors are related to segment duration, sequences of similar segments, and inherently dynamic phones. In order to improve unsupervised automatic speech segmentation, current one-stage bottom-up segmentation methods should be expanded into two-stage segmentation methods that are able to use a mix of bottom-up information extracted from the speech signal and automatically derived top-down information. In this way, unsupervised methods can be improved while remaining flexible and language-independent. -
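The general bottom-up idea evaluated here, hypothesizing boundaries where the acoustic signal changes most, can be sketched in Python as follows. This is a generic illustration with made-up frame sizes and a random stand-in signal, not the specific algorithm analyzed in the paper.

import numpy as np
from scipy.signal import find_peaks

fs = 16000
audio = np.random.randn(fs)              # stand-in for one second of speech

frame_len, hop = 400, 160                # 25 ms frames with a 10 ms hop
frames = np.stack([audio[i:i + frame_len]
                   for i in range(0, len(audio) - frame_len, hop)])
spectra = np.log1p(np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1)))

# Acoustic change: Euclidean distance between the spectra of successive frames.
change = np.linalg.norm(np.diff(spectra, axis=0), axis=1)

# Hypothesize a phone boundary wherever the change curve peaks above its mean.
peaks, _ = find_peaks(change, height=change.mean(), distance=3)
print((peaks + 1) * hop / fs)            # hypothesized boundary times in seconds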
Scheibel, M., & Indefrey, P. (2023). Top-down enhanced object recognition in blocking and priming paradigms. Journal of Experimental Psychology: Human Perception and Performance, 49(3), 327-354. doi:10.1037/xhp0001094.
Abstract
Previous studies have demonstrated that context manipulations by semantic blocking and category priming can, under particular design conditions, give rise to semantic facilitation effects. The interpretation of semantic facilitation effects is controversial in the word production literature; perceptual accounts propose that contextually facilitated object recognition may underlie facilitation effects. The present study tested this notion. We investigated the difficulty of object recognition in a semantic blocking and a category priming task. We presented all pictures in gradually de-blurring image sequences and measured the de-blurring level that first allowed for correct object naming as an indicator of the perceptual demands of object recognition. Based on object recognition models assuming a temporal progression from coarse- to fine-grained visual processing, we reasoned that the lower the required level of detail, the more efficient the recognition processes. The results demonstrate that categorically related contexts reduce the level of visual detail required for object naming compared to unrelated contexts, with this effect being most pronounced for shape-distinctive objects and in contexts providing explicit category cues. We propose a top-down explanation based on target predictability of the observed effects. Implications of the recognition effects based on target predictability for the interpretation of context effects observed in latencies are discussed.
Additional information
Stimuli, Ratings, Analysis codes -
Scherz, M. D., Schmidt, R., Brown, J. L., Glos, J., Lattenkamp, E. Z., Rakotomalala, Z., Rakotoarison, A., Rakotonindrina, R. T., Randriamalala, O., Raselimanana, A. P., Rasolonjatovo, S. M., Ratsoavina, F. M., Razafindraibe, J. H., Glaw, F., & Vences, M. (2023). Repeated divergence of amphibians and reptiles across an elevational gradient in northern Madagascar. Ecology and Evolution, 13(3): e9914. doi:10.1002/ece3.9914.
Abstract
How environmental factors shape patterns of biotic diversity in tropical ecosystems is an active field of research, but studies examining the possibility of ecological speciation in terrestrial tropical ecosystems are scarce. We use the isolated rainforest herpetofauna on the Montagne d'Ambre (Amber Mountain) massif in northern Madagascar as a model to explore elevational divergence at the level of populations and communities. Based on intensive sampling and DNA barcoding of amphibians and reptiles along a transect ranging from ca. 470–1470 m above sea level (a.s.l.), we assessed a main peak in species richness at an elevation of ca. 1000 m a.s.l. with 41 species. The proportion of local endemics was highest (about 1/3) at elevations >1100 m a.s.l. Two species of chameleons (Brookesia tuberculata, Calumma linotum) and two species of frogs (Mantidactylus bellyi, M. ambony) studied in depth by newly developed microsatellite markers showed genetic divergence up the slope of the mountain, some quite strong, others very weak, but in each case with genetic breaks between 1100 and 1270 m a.s.l. Genetic clusters were found in transect sections significantly differing in bioclimate and herpetological community composition. A decrease in body size was detected in several species with increasing elevation. The studied rainforest amphibians and reptiles show concordant population genetic differentiation across elevation along with morphological and niche differentiation. Whether this parapatric or microallopatric differentiation will suffice for the completion of speciation is, however, unclear, and available phylogeographic evidence rather suggests that a complex interplay between ecological and allopatric divergence processes is involved in generating the extraordinary species diversity of Madagascar's biota. Our study reveals concordant patterns of diversification among main elevational bands, but suggests that these adaptational processes are only part of the complex of processes leading to species formation, among which geographical isolation is probably also important.
Additional information
supplementary materials -
Schijven, D., Postema, M., Fukunaga, M., Matsumoto, J., Miura, K., De Zwarte, S. M., Van Haren, N. E. M., Cahn, W., Hulshoff Pol, H. E., Kahn, R. S., Ayesa-Arriola, R., Ortiz-García de la Foz, V., Tordesillas-Gutierrez, D., Vázquez-Bourgon, J., Crespo-Facorro, B., Alnæs, D., Dahl, A., Westlye, L. T., Agartz, I., Andreassen, O. A., Jönsson, E. G., Kochunov, P., Bruggemann, J. M., Catts, S. V., Michie, P. T., Mowry, B. J., Quidé, Y., Rasser, P. E., Schall, U., Scott, R. J., Carr, V. J., Green, M. J., Henskens, F. A., Loughland, C. M., Pantelis, C., Weickert, C. S., Weickert, T. W., De Haan, L., Brosch, K., Pfarr, J.-K., Ringwald, K. G., Stein, F., Jansen, A., Kircher, T. T., Nenadić, I., Krämer, B., Gruber, O., Satterthwaite, T. D., Bustillo, J., Mathalon, D. H., Preda, A., Calhoun, V. D., Ford, J. M., Potkin, S. G., Chen, J., Tan, Y., Wang, Z., Xiang, H., Fan, F., Bernardoni, F., Ehrlich, S., Fuentes-Claramonte, P., Garcia-Leon, M. A., Guerrero-Pedraza, A., Salvador, R., Sarró, S., Pomarol-Clotet, E., Ciullo, V., Piras, F., Vecchio, D., Banaj, N., Spalletta, G., Michielse, S., Van Amelsvoort, T., Dickie, E. W., Voineskos, A. N., Sim, K., Ciufolini, S., Dazzan, P., Murray, R. M., Kim, W.-S., Chung, Y.-C., Andreou, C., Schmidt, A., Borgwardt, S., McIntosh, A. M., Whalley, H. C., Lawrie, S. M., Du Plessis, S., Luckhoff, H. K., Scheffler, F., Emsley, R., Grotegerd, D., Lencer, R., Dannlowski, U., Edmond, J. T., Rootes-Murdy, K., Stephen, J. M., Mayer, A. R., Antonucci, L. A., Fazio, L., Pergola, G., Bertolino, A., Díaz-Caneja, C. M., Janssen, J., Lois, N. G., Arango, C., Tomyshev, A. S., Lebedeva, I., Cervenka, S., Sellgren, C. M., Georgiadis, F., Kirschner, M., Kaiser, S., Hajek, T., Skoch, A., Spaniel, F., Kim, M., Kwak, Y. B., Oh, S., Kwon, J. S., James, A., Bakker, G., Knöchel, C., Stäblein, M., Oertel, V., Uhlmann, A., Howells, F. M., Stein, D. J., Temmingh, H. S., Diaz-Zuluaga, A. M., Pineda-Zapata, J. A., López-Jaramillo, C., Homan, S., Ji, E., Surbeck, W., Homan, P., Fisher, S. E., Franke, B., Glahn, D. C., Gur, R. C., Hashimoto, R., Jahanshad, N., Luders, E., Medland, S. E., Thompson, P. M., Turner, J. A., Van Erp, T. G., & Francks, C. (2023). Large-scale analysis of structural brain asymmetries in schizophrenia via the ENIGMA consortium. Proceedings of the National Academy of Sciences of the United States of America, 120(14): e2213880120. doi:10.1073/pnas.2213880120.
Abstract
Left–right asymmetry is an important organizing feature of the healthy brain that may be altered in schizophrenia, but most studies have used relatively small samples and heterogeneous approaches, resulting in equivocal findings. We carried out the largest case–control study of structural brain asymmetries in schizophrenia, with MRI data from 5,080 affected individuals and 6,015 controls across 46 datasets, using a single image analysis protocol. Asymmetry indexes were calculated for global and regional cortical thickness, surface area, and subcortical volume measures. Differences of asymmetry were calculated between affected individuals and controls per dataset, and effect sizes were meta-analyzed across datasets. Small average case–control differences were observed for thickness asymmetries of the rostral anterior cingulate and the middle temporal gyrus, both driven by thinner left-hemispheric cortices in schizophrenia. Analyses of these asymmetries with respect to the use of antipsychotic medication and other clinical variables did not show any significant associations. Assessment of age- and sex-specific effects revealed a stronger average leftward asymmetry of pallidum volume between older cases and controls. Case–control differences in a multivariate context were assessed in a subset of the data (N = 2,029), which revealed that 7% of the variance across all structural asymmetries was explained by case–control status. Subtle case–control differences of brain macrostructural asymmetry may reflect differences at the molecular, cytoarchitectonic, or circuit levels that have functional relevance for the disorder. Reduced left middle temporal cortical thickness is consistent with altered left-hemisphere language network organization in schizophrenia. -
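As a rough illustration of the pipeline summarized in this abstract, the sketch below computes a per-subject asymmetry index, a per-dataset case–control effect size, and an inverse-variance pooled effect across datasets. The (L − R) / ((L + R) / 2) normalisation, the use of Cohen's d, and fixed-effects pooling are common conventions assumed here for illustration; they are not taken from the paper itself.

```python
import numpy as np

def asymmetry_index(left, right):
    """Per-subject asymmetry index: (L - R) normalised by the mean of L and R.
    The exact normalisation used in the study is an assumption here."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return (left - right) / ((left + right) / 2.0)

def cohens_d(cases, controls):
    """Case-control effect size (Cohen's d) for one dataset."""
    n1, n2 = len(cases), len(controls)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(cases, ddof=1) +
                         (n2 - 1) * np.var(controls, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(cases) - np.mean(controls)) / pooled_sd

def pooled_effect(effects, variances):
    """Inverse-variance (fixed-effects) pooling of per-dataset effect sizes."""
    w = 1.0 / np.asarray(variances, float)
    pooled = np.sum(w * np.asarray(effects, float)) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))
```

In such a scheme, asymmetry_index would be applied to each subject's left and right measures of a region, cohens_d computed between affected individuals and controls within each dataset, and the resulting effect sizes pooled across datasets.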
Schmale, R., Cristia, A., Seidl, A., & Johnson, E. K. (2010). Developmental changes in infants’ ability to cope with dialect variation in word recognition. Infancy, 15, 650-662. doi:10.1111/j.1532-7078.2010.00032.x.
Abstract
Toward the end of their first year of life, infants’ overly specified word representations are thought to give way to more abstract ones, which helps them to better cope with variation not relevant to word identity (e.g., voice and affect). This developmental change may help infants process the ambient language more efficiently, thus enabling rapid gains in vocabulary growth. One particular kind of variability that infants must accommodate is that of dialectal accent, because most children will encounter speakers from different regions and backgrounds. In this study, we explored developmental changes in infants’ ability to recognize words in continuous speech by familiarizing them with words spoken by a speaker of their own region (North Midland-American English) or a different region (Southern Ontario Canadian English), and testing them with passages spoken by a speaker of the opposite dialectal accent. Our results demonstrate that 12- but not 9-month-olds readily recognize words in the face of dialectal variation. -
Schuppler, B., Ernestus, M., Van Dommelen, W., & Koreman, J. (2010). Predicting human perception and ASR classification of word-final [t] by its acoustic sub-segmental properties. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2466-2469).
Abstract
This paper presents a study on the acoustic sub-segmental properties of word-final /t/ in conversational standard Dutch and how these properties contribute to whether humans and an ASR system classify the /t/ as acoustically present or absent. In general, humans and the ASR system use the same cues (presence of a constriction, a burst, and alveolar frication), but the ASR system is also less sensitive to fine cues (weak bursts, smoothly starting friction) than human listeners and is misled by the presence of glottal vibration. These data inform the further development of models of human and automatic speech processing. -
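As a minimal sketch of the cue-based classification idea described in this abstract, the toy example below predicts whether a word-final /t/ is judged acoustically present from sub-segmental cues (constriction, burst strength, alveolar frication, glottal vibration). The logistic-regression classifier, the feature encoding, and the invented data are illustrative assumptions, not the models or measurements used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature matrix: one row per /t/ token, with columns for the presence of a
# constriction, burst strength (0-1), alveolar frication, and glottal vibration.
# All values are invented for illustration only.
X = np.array([
    [1, 0.9, 1, 0],   # clear constriction, strong burst, frication
    [1, 0.2, 1, 0],   # weak burst, but frication present
    [0, 0.0, 0, 1],   # no burst or frication, glottal vibration
    [1, 0.7, 0, 0],
    [0, 0.1, 0, 1],
    [1, 0.8, 1, 1],
])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = /t/ classified as acoustically present

clf = LogisticRegression().fit(X, y)
print(clf.coef_)  # cue weights: which properties drive the present/absent decision
```

A weak-burst token such as [1, 0.2, 1, 0] illustrates the point made above: a classifier that weights several cues can still call the /t/ present, whereas a system insensitive to fine cues may not.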
Seijdel, N., Marshall, T. R., & Drijvers, L. (2023). Rapid invisible frequency tagging (RIFT): A promising technique to study neural and cognitive processing using naturalistic paradigms. Cerebral Cortex, 33(5), 1626-1629. doi:10.1093/cercor/bhac160.
Abstract
Frequency tagging has been successfully used to investigate selective stimulus processing in electroencephalography (EEG) or magnetoencephalography (MEG) studies. Recently, new projectors have been developed that allow for frequency tagging at higher frequencies (>60 Hz). This technique, rapid invisible frequency tagging (RIFT), provides two crucial advantages over low-frequency tagging: (i) it leaves low-frequency oscillations unperturbed, and thus open for investigation, and (ii) it can render the tagging invisible, resulting in more naturalistic paradigms and a lack of participant awareness. The development of this technique has far-reaching implications as oscillations involved in cognitive processes can be investigated, and potentially manipulated, in a more naturalistic manner. -
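To make the tagging principle concrete, the sketch below generates a stimulus luminance time course modulated sinusoidally at a frequency above the flicker-fusion threshold and recovers the corresponding spectral peak. The 1440 Hz refresh rate, 68 Hz tagging frequency, and sinusoidal modulation are illustrative assumptions, not parameters reported in the article.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper)
refresh_rate_hz = 1440   # refresh rate of a high-speed projector
tag_freq_hz = 68         # tagging frequency above the flicker-fusion threshold
duration_s = 2.0
modulation_depth = 0.5   # fraction of the luminance range used for the flicker

t = np.arange(0, duration_s, 1.0 / refresh_rate_hz)

# Sinusoidal luminance modulation around a mid-grey baseline; at such high
# frequencies the flicker is not consciously perceived, but it still drives a
# measurable MEG/EEG response at the tagging frequency.
luminance = 0.5 + 0.5 * modulation_depth * np.sin(2 * np.pi * tag_freq_hz * t)

# The stimulus spectrum peaks at the tagging frequency, which is what a
# frequency-tagging analysis then looks for in the neural signal.
spectrum = np.abs(np.fft.rfft(luminance - luminance.mean()))
freqs = np.fft.rfftfreq(luminance.size, d=1.0 / refresh_rate_hz)
print(f"Stimulus power peaks at {freqs[spectrum.argmax()]:.1f} Hz")
```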
Sekine, K. (2010). Change of perspective taking in preschool age: An analysis of spontaneous gestures. Tokyo: Kazama shobo.
-
Sekine, K., & Furuyama, N. (2010). Developmental change of discourse cohesion in speech and gestures among Japanese elementary school children. Rivista di psicolinguistica applicata, 10(3), 97-116. doi:10.1400/152613.
Abstract
This study investigates the development of bi-modal reference maintenance by focusing on how Japanese elementary school children introduce and track animate referents in their narratives. Sixty elementary school children participated in this study, 10 from each school year (from 7 to 12 years of age). They were instructed to remember a cartoon and retell the story to their parents. We found that although there were no differences in the speech indices among the different ages, the average scores for the gesture indices of the 12-year-olds were higher than those of the other age groups. In particular, the number of referential gestures increased sharply at age 12, and these children tended to use referential gestures not only for tracking referents but also for introducing characters. These results indicate that the ability to maintain reference to create coherent narratives increases at about age 12. -
Sekine, K. (2010). The role of gestures contributing to speech production in children. The Japanese Journal of Qualitative Psychology, 9, 115-132.
-
Sekine, K., & Kajikawa, T. (2023). Does the spatial distribution of a speaker's gaze and gesture impact on a listener's comprehension of discourse? In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527208.
Abstract
This study investigated the impact of a speaker's gaze direction on a listener's comprehension of discourse. Previous research suggests that hand gestures play a role in referent allocation, enabling listeners to better understand the discourse. The current study aims to determine whether the speaker's gaze direction has an effect on reference resolution similar to that of co-speech gestures. Thirty native Japanese speakers participated in the study and were assigned to one of three conditions: congruent, incongruent, or speech-only. Participants watched 36 videos of an actor narrating a story consisting of three sentences with two protagonists. The speaker consistently used hand gestures to allocate one protagonist to the lower right space and the other to the lower left space, while directing her gaze to the space of the target person (congruent), the space of the other person (incongruent), or no particular space (speech-only). Participants were required to verbally answer a question about the target protagonist involved in an accidental event as quickly as possible. Results indicate that participants in the congruent condition exhibited faster reaction times than those in the incongruent condition, although the difference was not significant. These findings suggest that the speaker's gaze direction is not enough to facilitate a listener's comprehension of discourse. -
Senft, G. (2010). Culture change - language change: Missionaries and moribund varieties of Kilivila. In G. Senft (Ed.), Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving, and revitalization (pp. 69-95). Canberra: Pacific Linguistics. -
Senft, G. (Ed.). (2010). Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving, and revitalization. Canberra: Pacific Linguistics.
Abstract
The contributions to this book concern the documentation, archiving and revitalization of endangered language materials. The anthology focuses mainly on endangered Oceanic languages, with articles on Vanuatu by Darrell Tryon and the Marquesas by Gabriele Cablitz, on situations of loss and gain by Ingjerd Hoem and on the Kilivila language of the Trobriands by the editor. Nick Thieberger, Peter Wittenburg and Paul Trilsbeek, and David Blundell and colleagues write about aspects of linguistic archiving. Under the rubric of revitalization, Margaret Florey and Michael Ewing write about Maluku, Jakelin Troy and Michael Walsh about Australian Aboriginal languages in southeastern Australia, whilst three articles, by Sophie Nock, Diana Johnson and Winifred Crombie concern the revitalization of Maori. -
Senft, G. (2010). Argonauten mit Außenbordmotoren - Feldforschung auf den Trobriand-Inseln (Papua-Neuguinea) seit 1982. Mitteilungen der Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte, 31, 115-130.
Abstract
Since 1982 I have been studying the language and culture of the Trobriand Islanders in Papua New Guinea. After what are by now 15 trips to the Trobriand Islands, amounting to almost four years of living and working in the village of Tauwema on the island of Kaile'una, I was invited by Markus Schindlbeck and Alix Hänsel to report on my field research to the members of the "Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte". I do so in the following. I first describe how I came to the Trobriand Islands and how I found my way around there, and then report on the kind of research I have carried out over all these years, the forms of language and culture change I have observed in the process, and the expectations I have, on the basis of my experience so far, for the future of the Trobriand Islanders and for their language and culture. -
Senft, G. (2010). [Review of the book Consequences of contact: Language ideologies and sociocultural transformations in Pacific societies ed. by Miki Makihara and Bambi B. Schieffelin]. Paideuma. Mitteilungen zur Kulturkunde, 56, 308-313.
-
Senft, G. (2010). Introduction. In G. Senft (Ed.), Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving, and revitalization (pp. 1-13). Canberra: Pacific Linguistics. -
Senft, G. (2010). The Trobriand Islanders' ways of speaking. Berlin: De Gruyter.
Abstract
The book documents the Trobriand Islanders' typology of genres. Rooted in the 'ethnography of speaking/anthropological linguistics' paradigm, the author highlights the relevance of genres for researching language, culture and cognition in social interaction and the importance of understanding them for achieving linguistic and cultural competence. Data presented is accessible via the internet.