Rossi, G. (2020). Other-repetition in conversation across languages: Bringing prosody into pragmatic typology. Language in Society, 49(4), 495-520. doi:10.1017/S0047404520000251.
Abstract
In this article, I introduce the aims and scope of a project examining other-repetition in natural conversation. This introduction provides the conceptual and methodological background for the five language-specific studies contained in this special issue, focussing on other-repetition in English, Finnish, French, Italian, and Swedish. Other-repetition is a recurrent conversational phenomenon in which a speaker repeats all or part of what another speaker has just said, typically in the next turn. Our project focusses particularly on other-repetitions that problematise what is being repeated and typically solicit a response. Previous research has shown that such repetitions can accomplish a range of conversational actions. But how do speakers of different languages distinguish these actions? In addressing this question, we put at centre stage the resources of prosody—the nonlexical acoustic-auditory features of speech—and bring its systematic analysis into the growing field of pragmatic typology—the comparative study of language use and conversational structure. -
Rossi, G. (2020). The prosody of other-repetition in Italian: A system of tunes. Language in Society, 49(4), 619-652. doi:10.1017/S0047404520000627.
Abstract
As part of the project reported on in this special issue, the present study provides an overview of the types of action accomplished by other-repetition in Italian, with particular reference to the variety of the language spoken in the northeastern province of Trento. The analysis surveys actions within the domain of initiating repair, actions that extend beyond initiating repair, and actions that are alternative to initiating repair. Pitch contour emerges as a central design feature of other-repetition in Italian, with six nuclear contours associated with distinct types of action, sequential trajectories, and response patterns. The study also documents the interplay of pitch contour with other prosodic features (pitch span and register) and visible behavior (head nods, eyebrow movements).
Additional information
Sound clips.zip -
Rowland, C. F., Theakston, A. L., Ambridge, B., & Twomey, K. E. (Eds.). (2020). Current Perspectives on Child Language Acquisition: How children use their environment to learn. Amsterdam: John Benjamins. doi:10.1075/tilar.27.
Abstract
In recent years the field has seen an increasing realisation that the full complexity of language acquisition demands theories that (a) explain how children integrate information from multiple sources in the environment, (b) build linguistic representations at a number of different levels, and (c) learn how to combine these representations in order to communicate effectively. These new findings have stimulated new theoretical perspectives that are more centered on explaining learning as a complex dynamic interaction between the child and her environment. This book is the first attempt to bring some of these new perspectives together in one place. It is a collection of essays written by a group of researchers who all take an approach centered on child-environment interaction, and all of whom have been influenced by the work of Elena Lieven, to whom this collection is dedicated. -
Rowland, C. F. (2020). Introduction. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen. Nijmegen: Max Planck Institute for Psycholinguistics. -
Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.
Abstract
The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S., & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F., & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations. -
Rubio-Fernández, P., & Jara-Ettinger, J. (2020). Incrementality and efficiency shape pragmatics across languages. Proceedings of the National Academy of Sciences, 117, 13399-13404. doi:10.1073/pnas.1922067117.
Abstract
To correctly interpret a message, people must attend to the context in which it was produced. Here we investigate how this process, known as pragmatic reasoning, is guided by two universal forces in human communication: incrementality and efficiency, with speakers of all languages interpreting language incrementally and making the most efficient use of the incoming information. Crucially, however, the interplay between these two forces results in speakers of different languages having different pragmatic information available at each point in processing, including inferences about speaker intentions. In particular, the position of adjectives relative to nouns (e.g., “black lamp” vs. “lamp black”) makes visual context information available in reverse orders. In an eye-tracking study comparing four unrelated languages that have been understudied with regard to language processing (Catalan, Hindi, Hungarian, and Wolof), we show that speakers of languages with an adjective–noun order integrate context by first identifying properties (e.g., color, material, or size), whereas speakers of languages with a noun–adjective order integrate context by first identifying kinds (e.g., lamps or chairs). Most notably, this difference allows listeners of adjective–noun descriptions to infer the speaker’s intention when using an adjective (e.g., “the black…” as implying “not the blue one”) and anticipate the target referent, whereas listeners of noun–adjective descriptions are subject to temporary ambiguity when deriving the same interpretation. We conclude that incrementality and efficiency guide pragmatic reasoning across languages, with different word orders having different pragmatic affordances. -
Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.
Abstract
Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction. -
De Ruiter, J. P. (2007). Some multimodal signals in humans. In I. Van de Sluis, M. Theune, E. Reiter, & E. Krahmer (Eds.), Proceedings of the Workshop on Multimodal Output Generation (MOG 2007) (pp. 141-148).
Abstract
In this paper, I will give an overview of some well-studied multimodal signals that humans produce while they communicate with other humans, and discuss the implications of those studies for HCI. I will first discuss a conceptual framework that allows us to distinguish between functional and sensory modalities. This distinction is important, as there are multiple functional modalities using the same sensory modality (e.g., facial expression and eye-gaze in the visual modality). A second theoretically important issue is redundancy. Some signals appear to be redundant with a signal in another modality, whereas others give new information or even appear to give conflicting information (see e.g., the work of Susan Goldin-Meadow on speech accompanying gestures). I will argue that multimodal signals are never truly redundant. First, many gestures that appear at first sight to express the same meaning as the accompanying speech generally provide extra (analog) information about manner, path, etc. Second, the simple fact that the same information is expressed in more than one modality is itself a communicative signal. Armed with this conceptual background, I will then proceed to give an overview of some multimodal signals that have been investigated in human-human research, and the level of understanding we have of the meaning of those signals. The latter issue is especially important for potential implementations of these signals in artificial agents. First, I will discuss pointing gestures. I will address the issue of the timing of pointing gestures relative to the speech it is supposed to support, the mutual dependency between pointing gestures and speech, and discuss the existence of alternative ways of pointing from other cultures. The most frequent form of pointing that does not involve the index finger is a cultural practice called lip-pointing, which employs two visual functional modalities, mouth-shape and eye-gaze, simultaneously for pointing.
Next, I will address the issue of eye-gaze. A classical study by Kendon (1967) claims that there is a systematic relationship between eye-gaze (at the interlocutor) and turn-taking states. Research at our institute has shown that this relationship is weaker than has often been assumed. If the dialogue setting contains a visible object that is relevant to the dialogue (e.g., a map), the rate of eye-gaze-at-other drops dramatically and its relationship to turn-taking disappears completely. The implications for machine-generated eye-gaze are discussed. Finally, I will explore a theoretical debate regarding spontaneous gestures. It has often been claimed that the class of gestures that is called iconic by McNeill (1992) are a “window into the mind”. That is, they are claimed to give the researcher (or even the interlocutor) a direct view into the speaker’s thought, without being obscured by the complex transformations that take place when transforming a thought into a verbal utterance. I will argue that this is an illusion. Gestures can be shown to be specifically designed such that the listener can be expected to interpret them. Although the transformations carried out to express a thought in gesture are indeed (partly) different from the corresponding transformations for speech, they are a) complex, and b) severely understudied. This obviously has consequences both for the gesture research agenda, and for the generation of iconic gestures by machines. -
De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.
Abstract
In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech. -
De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., & Toni, I. (2007). On the origins of intentions. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Sensorimotor foundations of higher cognition (pp. 593-610). Oxford: Oxford University Press. -
De Ruiter, J. P., & Enfield, N. J. (2007). The BIC model: A blueprint for the communicator. In C. Stephanidis (Ed.), Universal access in Human-Computer Interaction: Applications and services (pp. 251-258). Berlin: Springer. -
Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.
Abstract
Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation. -
Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.
Abstract
Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions. -
Scharenborg, O., Ernestus, M., & Wan, V. (2007). Segmentation of speech: Child's play? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1953-1956). Adelaide: Causal Productions.
Abstract
The difficulty of the task of segmenting a speech signal into its words is immediately clear when listening to a foreign language; it is much harder to segment the signal into its words, since the words of the language are unknown. Infants are faced with the same task when learning their first language. This study provides a better understanding of the task that infants face while learning their native language. We employed an automatic algorithm on the task of speech segmentation without prior knowledge of the labels of the phonemes. An analysis of the boundaries erroneously placed inside a phoneme showed that the algorithm consistently placed additional boundaries in phonemes in which acoustic changes occur. These acoustic changes may be as great as the transition from the closure to the burst of a plosive or as subtle as the formant transitions in low or back vowels. Moreover, we found that glottal vibration may attenuate the relevance of acoustic changes within obstruents. An interesting question for further research is how infants learn to overcome the natural tendency to segment these ‘dynamic’ phonemes. -
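The boundary-placement idea in this abstract can be illustrated with a minimal sketch. The code below is not the paper's algorithm (which operates on real acoustics without phoneme labels); it only shows the general principle of hypothesising a boundary wherever frame-to-frame acoustic change exceeds a threshold, with all feature values and the threshold invented for illustration:

```python
# Toy boundary detector: hypothesise a segment boundary wherever the
# Euclidean distance between consecutive feature frames exceeds a threshold.
def detect_boundaries(frames, threshold):
    """frames: list of feature vectors (one per time frame).
    Returns the indices at which a boundary is hypothesised."""
    boundaries = []
    for i in range(1, len(frames)):
        change = sum((a - b) ** 2 for a, b in zip(frames[i], frames[i - 1])) ** 0.5
        if change > threshold:
            boundaries.append(i)
    return boundaries

# A steady stretch followed by an abrupt change (e.g. a closure-to-burst
# transition) yields a single hypothesised boundary at the change point:
frames = [[1.0, 1.0]] * 5 + [[4.0, 4.0]] * 5
print(detect_boundaries(frames, threshold=1.0))  # → [5]
```

On such a toy signal the abstract's observation is easy to reproduce: a sizeable acoustic change inside a single phoneme (e.g. a formant transition) would likewise trigger a spurious extra boundary.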
Scharenborg, O., Ondel, L., Palaskar, S., Arthur, P., Ciannella, F., Du, M., Larsen, E., Merkx, D., Riad, R., Wang, L., Dupoux, E., Besacier, L., Black, A., Hasegawa-Johnson, M., Metze, F., Neubig, G., Stüker, S., Godard, P., & Müller, M. (2020). Speech technology for unwritten languages. IEEE/ACM Transactions on Audio, Speech and Language Processing, 28, 964-975. doi:10.1109/TASLP.2020.2973896.
Abstract
Speech technology plays an important role in our everyday life. Among others, speech is used for human-computer interaction, for instance for information retrieval and on-line shopping. In the case of an unwritten language, however, speech technology is unfortunately difficult to create, because it cannot be created by the standard combination of pre-trained speech-to-text and text-to-speech subsystems. The research presented in this article takes the first steps towards speech technology for unwritten languages. Specifically, the aim of this work was 1) to learn speech-to-meaning representations without using text as an intermediate representation, and 2) to test the sufficiency of the learned representations to regenerate speech or translated text, or to retrieve images that depict the meaning of an utterance in an unwritten language. The results suggest that building systems that go directly from speech-to-meaning and from meaning-to-speech, bypassing the need for text, is possible. -
Scharenborg, O., & Wan, V. (2007). Can unquantised articulatory feature continuums be modelled? In INTERSPEECH 2007 - 8th Annual Conference of the International Speech Communication Association (pp. 2473-2476). ISCA Archive.
Abstract
Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Although termed ‘articulatory’, previous definitions make certain assumptions that are invalid, for instance, that articulators ‘hop’ from one fixed position to the next. In this paper, we studied two methods, based on support vector classification (SVC) and regression (SVR), in which the articulation continuum is modelled without being restricted to using discrete AF value classes. A comparison with a baseline system trained on quantised values of the articulation continuum showed that both SVC and SVR outperform the baseline for two of the three investigated AFs, with improvements up to 5.6% absolute. -
Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.
Abstract
This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances; each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application. -
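The two-stage design described in this abstract can be caricatured in a few lines. This is a hedged sketch, not SpeM or the paper's recogniser: the lexicons are invented, and string similarity (Python's difflib) stands in for phone-graph matching:

```python
import difflib

# Invented lexicons for illustration: a small first-stage lexicon of
# 'major cities' and a larger fallback lexicon containing rare names.
first_stage = ["amsterdam", "rotterdam", "utrecht"]
fallback = ["amsterdam", "rotterdam", "utrecht",
            "appingedam", "abbekerk", "aalsmeer"]

def recognize(phone_string, n_best=3):
    # Pass 1: match against the small lexicon; failure to find a good
    # match plays the role of the phone-loop OOV detection.
    match = difflib.get_close_matches(phone_string, first_stage, n=1, cutoff=0.8)
    if match:
        return match[0]
    # Pass 2: build an N-best list of candidates from the fallback lexicon
    # and take the top hypothesis (the paper re-runs recognition over it).
    candidates = difflib.get_close_matches(phone_string, fallback, n=n_best, cutoff=0.0)
    return candidates[0]

print(recognize("amsterdam"))   # in-vocabulary: resolved in pass 1
print(recognize("apingedam"))   # noisy rare name: resolved in pass 2
```

Here the first call returns "amsterdam" directly, while the noisy input "apingedam" is rejected by the first stage and recovered as "appingedam" from the fallback lexicon, mirroring the paper's recovery of rare city names via an N-best list.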
Scharenborg, O., ten Bosch, L., & Boves, L. (2007). Early decision making in continuous speech. In M. Grimm, & K. Kroschel (Eds.), Robust speech recognition and understanding (pp. 333-350). I-Tech Education and Publishing. -
Scharenborg, O., ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.
Abstract
Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases. -
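The abstract's first type of predictor (absolute and relative activation values) can be sketched as a simple decision rule. The thresholds and activation values below are invented for illustration; real SpeM activations are derived from negative log likelihood scores rather than the normalised scores used here:

```python
# Hedged sketch of an 'early recognition' decision rule: declare the word
# recognised at the first phone position where its activation exceeds an
# absolute threshold AND beats the strongest competitor by a margin.
def early_recognition_point(target, competitor, threshold=0.7, margin=0.2):
    """target/competitor: per-phone activation scores.
    Returns the first qualifying phone index, or None."""
    for i, (a, c) in enumerate(zip(target, competitor)):
        if a >= threshold and a - c >= margin:
            return i
    return None

# The target word pulls ahead at its third phone (index 2), well before
# its five-phone realisation is complete:
target = [0.2, 0.5, 0.8, 0.9, 0.95]
rival = [0.3, 0.4, 0.5, 0.4, 0.3]
print(early_recognition_point(target, rival))  # → 2
```

Raising the threshold or the margin trades false acceptances for false rejections, mirroring the trade-off the abstract describes for the first predictor type.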
Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.
Abstract
The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences, however, there has lately been a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields. -
Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.
Abstract
The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated, while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that, in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations. -
Scheu, O., & Zinn, C. (2007). How did the e-learning session go? The student inspector. In Proceedings of the 13th International Conference on Artificial Intelligence and Education (AIED 2007). Amsterdam: IOS Press.
Abstract
Good teachers know their students, and exploit this knowledge to adapt or optimise their instruction. Traditional teachers know their students because they interact with them face-to-face in classroom or one-to-one tutoring sessions. In these settings, they can build student models, e.g., by exploiting the multi-faceted nature of human-human communication. In distance-learning contexts, teacher and student have to cope with the lack of such direct interaction, and this can have detrimental effects for both teacher and student. In a past study we analysed teacher requirements for tracking student actions in computer-mediated settings. Given the results of this study, we devised and implemented a tool that allows teachers to keep track of their learners' interaction in e-learning systems. We present the tool's functionality and user interfaces, and an evaluation of its usability. -
Schijven, D., Stevelink, R., McCormack, M., van Rheenen, W., Luykx, J. J., Koeleman, B. P., Veldink, J. H., Project MinE ALS GWAS Consortium, & International League Against Epilepsy Consortium on Complex Epilepsies (2020). Analysis of shared common genetic risk between amyotrophic lateral sclerosis and epilepsy. Neurobiology of Aging, 92, 153.e1-153.e5. doi:10.1016/j.neurobiolaging.2020.04.011.
Abstract
Because hyper-excitability has been shown to be a shared pathophysiological mechanism, we used the latest and largest genome-wide studies in amyotrophic lateral sclerosis (n = 36,052) and epilepsy (n = 38,349) to determine genetic overlap between these conditions. First, we showed no significant genetic correlation, also when binned on minor allele frequency. Second, we confirmed the absence of polygenic overlap using genomic risk score analysis. Finally, we did not identify pleiotropic variants in meta-analyses of the 2 diseases. Our findings indicate that amyotrophic lateral sclerosis and epilepsy do not share common genetic risk, showing that hyper-excitability in both disorders has distinct origins.
Additional information
1-s2.0-S0197458020301305-mmc1.docx -
Schijven, D., Veldink, J. H., & Luykx, J. J. (2020). Genetic cross-disorder analysis in psychiatry: from methodology to clinical utility. The British Journal of Psychiatry, 216(5), 246-249. doi:10.1192/bjp.2019.72.
Abstract
Genome-wide association studies have uncovered hundreds of loci associated with psychiatric disorders. Cross-disorder studies are among the prime ramifications of such research. Here, we discuss the methodology of the most widespread methods and their clinical utility with regard to diagnosis, prediction, disease aetiology and treatment in psychiatry. Declaration of interest: None. -
Schijven, D., Zinkstok, J. R., & Luykx, J. J. (2020). Van genetische bevindingen naar de klinische praktijk van de psychiater: Hoe genetica precisiepsychiatrie mogelijk kan maken [From genetic findings to the psychiatrist's clinical practice: How genetics may enable precision psychiatry]. Tijdschrift voor Psychiatrie, 62(9), 776-783.
-
Schoenmakers, G.-J. (2020). Freedom in the Dutch middle-field: Deriving discourse structure at the syntax-pragmatics interface. Glossa: a journal of general linguistics, 5(1): 114. doi:10.5334/gjgl.1307.
Abstract
This paper experimentally explores the optionality of Dutch scrambling structures with a definite object and an adverb. Most researchers argue that such structures are not freely interchangeable, but are subject to a strict discourse template. Existing analyses are based primarily on intuitions of the researchers, while experimental support is scarce. This paper reports on two experiments to gauge the existence of a strict discourse template. The discourse status of definite objects in scrambling clauses is first probed in a fill-in-the-blanks experiment and subsequently manipulated in a speeded judgment experiment. The results of these experiments indicate that scrambling is not as restricted as is commonly claimed. Although mismatches between surface order and pragmatic interpretation lead to a penalty in judgment rates and a rise in reaction times, they nonetheless occur in production and yield fully acceptable structures. Crucially, the penalties and delays emerge only in scrambling clauses with an adverb that is sensitive to focus placement. This paper argues that scrambling does not map onto discourse structure in the strict way proposed in most literature. Instead, a more complex syntax of deriving discourse relations is proposed which submits that the Dutch scrambling pattern results from two familiar processes which apply at the syntax-pragmatics interface: reconstruction and covert raising. -
Schulte im Walde, S., Melinger, A., Roth, M., & Weber, A. (2007). An empirical characterization of response types in German association norms. In Proceedings of the GLDV workshop on lexical-semantic and ontological resources.
-
Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.
Abstract
We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12. -
Seidlmayer, E., Voß, J., Melnychuk, T., Galke, L., Tochtermann, K., Schultz, C., & Förstner, K. U. (2020). ORCID for Wikidata. Data enrichment for scientometric applications. In L.-A. Kaffee, O. Tifrea-Marciuska, E. Simperl, & D. Vrandečić (Eds.), Proceedings of the 1st Wikidata Workshop (Wikidata 2020). Aachen, Germany: CEUR Workshop Proceedings.
Abstract
Due to its numerous bibliometric entries of scholarly articles and connected information, Wikidata can serve as an open and rich source for deep scientometric analyses. However, there are currently certain limitations: while 31.5% of all Wikidata entries represent scientific articles, only 8.9% are entries describing a person, and the number of entries describing researchers is accordingly even lower. Another issue is the frequent absence of established relations between the scholarly article item and the author item, although the author is already listed in Wikidata. To fill this gap and to improve the content of Wikidata in general, we established a workflow for matching authors and scholarly publications by integrating data from the ORCID (Open Researcher and Contributor ID) database. By this approach we were able to extend Wikidata by more than 12k author-publication relations, and the method can be transferred to other enrichments based on ORCID data. This extension is beneficial for Wikidata users performing bibliometric analyses or using such metadata for other purposes. -
Seijdel, N., Tsakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2020). Depth in convolutional neural networks solves scene segmentation. PLOS Computational Biology, 16: e1008022. doi:10.1371/journal.pcbi.1008022.
Abstract
Feed-forward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans however suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds they appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object- and background information. For more shallow networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network. -
Seijdel, N., Jahfari, S., Groen, I. I. A., & Scholte, H. S. (2020). Low-level image statistics in natural scenes influence perceptual decision-making. Scientific Reports, 10: 10573. doi:10.1038/s41598-020-67661-8.
Abstract
A fundamental component of interacting with our environment is gathering and interpretation of sensory information. When investigating how perceptual information influences decision-making, most researchers have relied on manipulated or unnatural information as perceptual input, resulting in findings that may not generalize to real-world scenes. Unlike simplified, artificial stimuli, real-world scenes contain low-level regularities that are informative about the structural complexity, which the brain could exploit. In this study, participants performed an animal detection task on low, medium or high complexity scenes as determined by two biologically plausible natural scene statistics, contrast energy (CE) or spatial coherence (SC). In experiment 1, stimuli were sampled such that CE and SC both influenced scene complexity. Diffusion modelling showed that the speed of information processing was affected by low-level scene complexity. Experiment 2a/b refined these observations by showing how isolated manipulation of SC resulted in weaker but comparable effects, with an additional change in response boundary, whereas manipulation of only CE had no effect. Overall, performance was best for scenes with intermediate complexity. Our systematic definition quantifies how natural scene complexity interacts with decision-making. We speculate that CE and SC serve as an indication to adjust perceptual decision-making based on the complexity of the input. -
Sekine, K., Schoechl, C., Mulder, K., Holler, J., Kelly, S., Furman, R., & Ozyurek, A. (2020). Evidence for children's online integration of simultaneous information from speech and iconic gestures: An ERP study. Language, Cognition and Neuroscience, 35(10), 1283-1294. doi:10.1080/23273798.2020.1737719.
Abstract
Children perceive iconic gestures along with the speech they hear. Previous studies have shown that children integrate information from both modalities. Yet it is not known whether children can integrate both types of information simultaneously as soon as they are available, as adults do, or process them separately initially and integrate them later. Using electrophysiological measures, we examined the online neurocognitive processing of gesture-speech integration in 6- to 7-year-old children. We focused on the N400 event-related potential (ERP) component, which is modulated by semantic integration load. Children watched video clips of matching or mismatching gesture-speech combinations, which varied the semantic integration load. The ERPs showed that the amplitude of the N400 was larger in the mismatching condition than in the matching condition. This finding provides the first neural evidence that by the age of 6 or 7, children integrate multimodal semantic information in an online fashion comparable to that of adults. -
Senft, G. (2007). Reference and 'référence dangereuse' to persons in Kilivila: An overview and a case study. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 309-337). Cambridge: Cambridge University Press.
Abstract
Based on the conversation analysts’ insights into the various forms of third person reference in English, this paper first presents the inventory of forms Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea, offers its speakers for making such references. To illustrate such references to third persons in talk-in-interaction in Kilivila, a case study on gossiping is presented in the second part of the paper. This case study shows that ambiguous anaphoric references to two first mentioned third persons turn out to not only exceed and even violate the frame of a clearly defined situational-intentional variety of Kilivila that is constituted by the genre “gossip”, but also that these references are extremely dangerous for speakers in the Trobriand Islanders’ society. I illustrate how this culturally dangerous situation escalates and how other participants of the group of gossiping men try to “repair” this violation of the frame of a culturally defined and metalinguistically labelled “way of speaking”. The paper ends with some general remarks on how the understanding of forms of person reference in a language is dependent on the culture specific context in which they are produced. -
Senft, G. (2007). The Nijmegen space games: Studying the interrelationship between language, culture and cognition. In J. Wassmann, & K. Stockhaus (Eds.), Person, space and memory in the contemporary Pacific: Experiencing new worlds (pp. 224-244). New York: Berghahn Books.
Abstract
One of the central aims of the "Cognitive Anthropology Research Group" (since 1998 the "Department of Language and Cognition of the MPI for Psycholinguistics") is to research the relationship between language, culture and cognition and the conceptualization of space in various languages and cultures. Ever since its foundation in 1991 the group has been developing methods to elicit cross-culturally and cross-linguistically comparable data for this research project. After a brief summary of the central considerations that served as guidelines for the developing of these elicitation devices, this paper first presents a broad selection of the "space games" developed and used for data elicitation in the groups' various fieldsites so far. The paper then discusses the advantages and shortcomings of these data elicitation devices. Finally, it is argued that methodologists developing such devices find themselves in a position somewhere between Scylla and Charybdis - at least, if they take the requirement seriously that the elicited data should be comparable not only cross-culturally but also cross-linguistically. -
Senft, G. (2020). “.. to grasp the native's point of view..” — A plea for a holistic documentation of the Trobriand Islanders' language, culture and cognition. Russian Journal of Linguistics, 24(1), 7-30. doi:10.22363/2687-0088-2020-24-1-7-30.
Abstract
In his famous introduction to his monograph “Argonauts of the Western Pacific”, Bronislaw Malinowski (1922: 24f.) points out that a “collection of ethnographic statements, characteristic narratives, typical utterances, items of folk-lore and magical formulae has to be given as a corpus inscriptionum, as documents of native mentality”. This is one of the prerequisites to “grasp the native's point of view, his relation to life, to realize his vision of his world”. Malinowski managed to document a “Corpus Inscriptionum Agriculturae Quriviniensis” in the second volume of “Coral Gardens and their Magic” (1935 Vol II: 79-342). But he himself did not manage to come up with a holistic corpus inscriptionum for the Trobriand Islanders. One of the main aims I have been pursuing in my research on the Trobriand Islanders' language, culture, and cognition has been to fill this ethnolinguistic niche. In this essay, I report what I had to do to carry out this complex and ambitious project, what forms and kinds of linguistic and cultural competence I had to acquire, and how I planned my data collection during 16 long- and short-term field trips to the Trobriand Islands between 1982 and 2012. The paper ends with a critical assessment of my Trobriand endeavor. -
Senft, G. (2020). Kampfschild - vayola. In T. Brüderlin, S. Schien, & S. Stoll (Eds.), Ausgepackt! 125 Jahre Geschichte[n] im Museum Natur und Mensch (pp. 58-59). Freiburg: Michael Imhof Verlag.
Additional information
Picture -
Senft, G. (2020). 32 Kampfschild - dance or war shield - vayola. In T. Brüderlin, & S. Stoll (Eds.), Ausgepackt! 125 Jahre Geschichte[n] im Museum Natur und Mensch. Texte zur Ausstellung, Städtische Museen Freiburg, vom 20. Juni 2020 bis 10. Januar 2021 (pp. 76-77). Freiburg: Städtische Museen. -
Senft, G. (2007). "Ich weiß nicht, was soll es bedeuten.." - Ethnolinguistische Winke zur Rolle von umfassenden Metadaten bei der (und für die) Arbeit mit Corpora. In W. Kallmeyer, & G. Zifonun (Eds.), Sprachkorpora - Datenmengen und Erkenntnisfortschritt (pp. 152-168). Berlin: Walter de Gruyter.
Abstract
[Translated from German] When working as a native speaker of German with corpora of spoken or written German, one rarely reflects on the wealth of culture-specific information codified in such texts, especially when the data are contemporary. In most cases one has no trouble at all with the background knowledge that is presupposed in the data and assumed to be common knowledge. If, however, one looks at corpus data documenting other languages, above all non-Indo-European ones, one quickly realizes how much culture-specific knowledge is needed to understand these data adequately. In my talk I illustrate this observation with an example from my corpus of Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea. Using a short excerpt from a roughly 26-minute documentation of what, and how, six Trobrianders gossip with one another, I show what a hearer or reader of such a short data excerpt must know not only to be able to follow the conversation at all, but also to understand what is going on and why a conversation that at first glance seems utterly ordinary suddenly gains enormous explosiveness and significance for a Trobriander. Against the background of this example, I conclude by pointing out how absolutely necessary it is, in all corpora, to make such culture-specific information explicit through so-called metadata when cataloguing and annotating data materials. -
Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
-
Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
-
Senft, G. (2007). Language, culture and cognition: Frames of spatial reference and why we need ontologies of space [Abstract]. In A. G. Cohn, C. Freksa, & B. Bebel (Eds.), Spatial cognition: Specialization and integration (pp. 12).
Abstract
One of the many results of the "Space" research project conducted at the MPI for Psycholinguistics is that there are three "Frames of spatial Reference" (FoRs), the relative, the intrinsic and the absolute FoR. Cross-linguistic research showed that speakers who prefer one FoR in verbal spatial references rely on a comparable coding system for memorizing spatial configurations and for making inferences with respect to these spatial configurations in non-verbal problem solving. Moreover, research results also revealed that in some languages these verbal FoRs also influence gestural behavior. These results document the close interrelationship between language, culture and cognition in the domain "Space". The proper description of these interrelationships in the spatial domain requires language and culture specific ontologies. -
Senft, G. (2007). Nominal classification. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of cognitive linguistics (pp. 676-696). Oxford: Oxford University Press.
Abstract
This handbook chapter summarizes some of the problems of nominal classification in language, presents and illustrates the various systems or techniques of nominal classification, and points out why nominal classification is one of the most interesting topics in Cognitive Linguistics. -
Senft, G., Majid, A., & Levinson, S. C. (2007). The language of taste. In A. Majid (Ed.), Field Manual Volume 10 (pp. 42-45). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492913. -
Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
-
Shao, Z., & Rommers, J. (2020). How a question context aids word production: Evidence from the picture–word interference paradigm. Quarterly Journal of Experimental Psychology, 73(2), 165-173. doi:10.1177/1747021819882911.
Abstract
Difficulties in saying the right word at the right time arise at least in part because multiple response candidates are simultaneously activated in the speaker’s mind. The word selection process has been simulated using the picture–word interference task, in which participants name pictures while ignoring a superimposed written distractor word. However, words are usually produced in context, in the service of achieving a communicative goal. Two experiments addressed the questions whether context influences word production, and if so, how. We embedded the picture–word interference task in a dialogue-like setting, in which participants heard a question and named a picture as an answer to the question while ignoring a superimposed distractor word. The conversational context was either constraining or nonconstraining towards the answer. Manipulating the relationship between the picture name and the distractor, we focused on two core processes of word production: retrieval of semantic representations (Experiment 1) and phonological encoding (Experiment 2). The results of both experiments showed that naming reaction times (RTs) were shorter when preceded by constraining contexts as compared with nonconstraining contexts. Critically, constraining contexts decreased the effect of semantically related distractors but not the effect of phonologically related distractors. This suggests that conversational contexts can help speakers with aspects of the meaning of to-be-produced words, but phonological encoding processes still need to be performed as usual. -
Sharoh, D. (2020). Advances in layer specific fMRI for the study of language, cognition and directed brain networks. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Sharpe, V., Weber, K., & Kuperberg, G. R. (2020). Impairments in probabilistic prediction and Bayesian learning can explain reduced neural semantic priming in schizophrenia. Schizophrenia Bulletin, 46(6), 1558-1566. doi:10.1093/schbul/sbaa069.
Abstract
It has been proposed that abnormalities in probabilistic prediction and dynamic belief updating explain the multiple features of schizophrenia. Here, we used electroencephalography (EEG) to ask whether these abnormalities can account for the well-established reduction in semantic priming observed in schizophrenia under nonautomatic conditions. We isolated predictive contributions to the neural semantic priming effect by manipulating the prime’s predictive validity and minimizing retroactive semantic matching mechanisms. We additionally examined the link between prediction and learning using a Bayesian model that probed dynamic belief updating as participants adapted to the increase in predictive validity. We found that patients were less likely than healthy controls to use the prime to predictively facilitate semantic processing on the target, resulting in a reduced N400 effect. Moreover, the trial-by-trial output of our Bayesian computational model explained between-group differences in trial-by-trial N400 amplitudes as participants transitioned from conditions of lower to higher predictive validity. These findings suggest that, compared with healthy controls, people with schizophrenia are less able to mobilize predictive mechanisms to facilitate processing at the earliest stages of accessing the meanings of incoming words. This deficit may be linked to a failure to adapt to changes in the broader environment. This reciprocal relationship between impairments in probabilistic prediction and Bayesian learning/adaptation may drive a vicious cycle that maintains cognitive disturbances in schizophrenia.
Additional information
supplementary material -
Shen, C., & Janse, E. (2020). Maximum speech performance and executive control in young adult speakers. Journal of Speech, Language, and Hearing Research, 63, 3611-3627. doi:10.1044/2020_JSLHR-19-00257.
Abstract
Purpose
This study investigated whether maximum speech performance, more specifically, the ability to rapidly alternate between similar syllables during speech production, is associated with executive control abilities in a nonclinical young adult population.
Method
Seventy-eight young adult participants completed two speech tasks, both operationalized as maximum performance tasks, to index their articulatory control: a diadochokinetic (DDK) task with nonword and real-word syllable sequences and a tongue-twister task. Additionally, participants completed three cognitive tasks, each covering one element of executive control (a Flanker interference task to index inhibitory control, a letter–number switching task to index cognitive switching, and an operation span task to index updating of working memory). Linear mixed-effects models were fitted to investigate how well maximum speech performance measures can be predicted by elements of executive control.
Results
Participants' cognitive switching ability was associated with their accuracy in both the DDK and tongue-twister speech tasks. Additionally, nonword DDK accuracy was more strongly associated with executive control than real-word DDK accuracy (which has to be interpreted with caution). None of the executive control abilities related to the maximum rates at which participants performed the two speech tasks.
Conclusion
These results underscore the association between maximum speech performance and executive control (cognitive switching in particular). -
Shin, J., Ma, S., Hofer, E., Patel, Y., Vosberg, D. E., Tilley, S., Roshchupkin, G. V., Sousa, A. M. M., Jian, X., Gottesman, R., Mosley, T. H., Fornage, M., Saba, Y., Pirpamer, L., Schmidt, R., Schmidt, H., Carrion Castillo, A., Crivello, F., Mazoyer, B., Bis, J. C., Li, S., Yang, Q., Luciano, M., Karama, S., Lewis, L., Bastin, M. E., Harris, M. A., Wardlaw, J. M., Deary, I. E., Scholz, M., Loeffler, M., Witte, A. V., Beyer, F., Villringer, A., Armstrong, N. F., Mather, K. A., Ames, D., Jiang, J., Kwok, J. B., Schofield, P. R., Thalamuthu, A., Trollor, J. N., Wright, M. J., Brodaty, H., Wen, W., Sachdev, P. S., Terzikhan, N., Evans, T. E., Adams, H. H. H. H., Ikram, M. A., Frenzel, S., Van der Auwera-Palitschka, S., Wittfeld, K., Bülow, R., Grabe, H. J., Tzourio, C., Mishra, A., Maingault, S., Debette, S., Gillespie, N. A., Franz, C. E., Kremen, W. S., Ding, L., Jahanshad, N., the ENIGMA Consortium, Sestan, N., Pausova, Z., Seshadri, S., Paus, T., & the neuroCHARGE Working Group (2020). Global and regional development of the human cerebral cortex: Molecular architecture and occupational aptitudes. Cerebral Cortex, 30(7), 4121-4139. doi:10.1093/cercor/bhaa035.
Abstract
We have carried out meta-analyses of genome-wide association studies (GWAS) (n = 23 784) of the first two principal components (PCs) that group together cortical regions with shared variance in their surface area. PC1 (global) captured variations of most regions, whereas PC2 (visual) was specific to the primary and secondary visual cortices. We identified a total of 18 (PC1) and 17 (PC2) independent loci, which were replicated in another 25 746 individuals. The loci of the global PC1 included those associated previously with intracranial volume and/or general cognitive function, such as MAPT and IGF2BP1. The loci of the visual PC2 included DAAM1, a key player in the planar-cell-polarity pathway. We then tested associations with occupational aptitudes and, as predicted, found that the global PC1 was associated with General Learning Ability, and the visual PC2 was associated with the Form Perception aptitude. These results suggest that interindividual variations in global and regional development of the human cerebral cortex (and its molecular architecture) cascade—albeit in a very limited manner—to behaviors as complex as the choice of one’s occupation. -
Sjerps, M. J., Decuyper, C., & Meyer, A. S. (2020). Initiation of utterance planning in response to pre-recorded and “live” utterances. Quarterly Journal of Experimental Psychology, 73(3), 357-374. doi:10.1177/1747021819881265.
Abstract
In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants’ timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants’ speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings. -
Slobin, D. I., & Bowerman, M. (2007). Interfaces between linguistic typology and child language research. Linguistic Typology, 11(1), 213-226. doi:10.1515/LINGTY.2007.015.
-
Slonimska, A., Ozyurek, A., & Capirci, O. (2020). The role of iconicity and simultaneity for efficient communication: The case of Italian Sign Language (LIS). Cognition, 200: 104246. doi:10.1016/j.cognition.2020.104246.
Abstract
A fundamental assumption about language is that, regardless of language modality, it faces the linearization problem, i.e., an event that occurs simultaneously in the world has to be split in language to be organized on a temporal scale. However, the visual modality of signed languages allows its users not only to express meaning in a linear manner but also to use iconicity and multiple articulators together to encode information simultaneously. Accordingly, in cases when it is necessary to encode informatively rich events, signers can take advantage of simultaneous encoding in order to represent information about different referents and their actions simultaneously. This in turn would lead to more iconic and direct representation. Up to now, there has been no experimental study focusing on simultaneous encoding of information in signed languages and its possible advantage for efficient communication. In the present study, we assessed how many information units can be encoded simultaneously in Italian Sign Language (LIS) and whether the amount of simultaneously encoded information varies based on the amount of information that is required to be expressed. Twenty-three deaf adults participated in a director-matcher game in which they described 30 images of events that varied in amount of information they contained. Results revealed that as the information that had to be encoded increased, signers also increased use of multiple articulators to encode different information (i.e., kinematic simultaneity) and density of simultaneously encoded information in their production. Present findings show how the fundamental properties of signed languages, i.e., iconicity and simultaneity, are used for the purpose of efficient information encoding in Italian Sign Language (LIS).
Additional information
Supplementary data -
Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.
Abstract
Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners. -
Snijders, T. M., Benders, T., & Fikkert, P. (2020). Infants segment words from songs - an EEG study. Brain Sciences, 10(1): 39. doi:10.3390/brainsci10010039.
Abstract
Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech. -
Snowdon, C. T., & Cronin, K. A. (2007). Cooperative breeders do cooperate. Behavioural Processes, 76, 138-141. doi:10.1016/j.beproc.2007.01.016.
Abstract
Bergmüller et al. (2007) make an important contribution to studies of cooperative breeding and provide a theoretical basis for linking the evolution of cooperative breeding with cooperative behavior. We have long been involved in empirical research on the only family of nonhuman primates to exhibit cooperative breeding, the Callitrichidae, which includes marmosets and tamarins, with studies in both field and captive contexts. In this paper we expand on three themes from Bergmüller et al. (2007) with empirical data. First we provide data in support of the importance of helpers and the specific benefits that helpers can gain in terms of fitness. Second, we suggest that mechanisms of rewarding helpers are more common and more effective in maintaining cooperative breeding than punishments. Third, we present a summary of our own research on cooperative behavior in cotton-top tamarins (Saguinus oedipus) where we find greater success in cooperative problem solving than has been reported for non-cooperatively breeding species. -
Sønderby, I. E., Gústafsson, Ó., Doan, N. T., Hibar, D. P., Martin-Brevet, S., Abdellaoui, A., Ames, D., Amunts, K., Andersson, M., Armstrong, N. J., Bernard, M., Blackburn, N., Blangero, J., Boomsma, D. I., Bralten, J., Brattbak, H.-R., Brodaty, H., Brouwer, R. M., Bülow, R., Calhoun, V., Caspers, S., Cavalleri, G., Chen, C.-H., Cichon, S., Ciufolini, S., Corvin, A., Crespo-Facorro, B., Curran, J. E., Dale, A. M., Dalvie, S., Dazzan, P., De Geus, E. J. C., De Zubicaray, G. I., De Zwarte, S. M. C., Delanty, N., Den Braber, A., Desrivières, S., Donohoe, G., Draganski, B., Ehrlich, S., Espeseth, T., Fisher, S. E., Franke, B., Frouin, V., Fukunaga, M., Gareau, T., Glahn, D. C., Grabe, H., Groenewold, N. A., Haavik, J., Håberg, A., Hashimoto, R., Hehir-Kwa, J. Y., Heinz, A., Hillegers, M. H. J., Hoffmann, P., Holleran, L., Hottenga, J.-J., Hulshoff, H. E., Ikeda, M., Jahanshad, N., Jernigan, T., Jockwitz, C., Johansson, S., Jonsdottir, G. A., Jönsson, E. G., Kahn, R., Kaufmann, T., Kelly, S., Kikuchi, M., Knowles, E. E. M., Kolskår, K. K., Kwok, J. B., Le Hellard, S., Leu, C., Liu, J., Lundervold, A. J., Lundervold, A., Martin, N. G., Mather, K., Mathias, S. R., McCormack, M., McMahon, K. L., McRae, A., Milaneschi, Y., Moreau, C., Morris, D., Mothersill, D., Mühleisen, T. W., Murray, R., Nordvik, J. E., Nyberg, L., Olde Loohuis, L. M., Ophoff, R., Paus, T., Pausova, Z., Penninx, B., Peralta, J. M., Pike, B., Prieto, C., Pudas, S., Quinlan, E., Quintana, D. S., Reinbold, C. S., Reis Marques, T., Reymond, A., Richard, G., Rodriguez-Herreros, B., Roiz-Santiañez, R., Rokicki, J., Rucker, J., Sachdev, P., Sanders, A.-M., Sando, S. B., Schmaal, L., Schofield, P. R., Schork, A.
J., Schumann, G., Shin, J., Shumskaya, E., Sisodiya, S., Steen, V. M., Stein, D. J., Steinberg, S., Strike, L., Teumer, A., Thalamuthu, A., Tordesillas-Gutierrez, D., Turner, J., Ueland, T., Uhlmann, A., Ulfarsson, M. O., Van 't Ent, D., Van der Meer, D., Van Haren, N. E. M., Vaskinn, A., Vassos, E., Walters, G. B., Wang, Y., Wen, W., Whelan, C. D., Wittfeld, K., Wright, M., Yamamori, H., Zayats, T., Agartz, I., Westlye, L. T., Jacquemont, S., Djurovic, S., Stefansson, H., Stefansson, K., Thompson, P., & Andreassen, O. A. (2020). Dose response of the 16p11.2 distal copy number variant on intracranial volume and basal ganglia. Molecular Psychiatry, 25, 584-602. doi:10.1038/s41380-018-0118-1.
Abstract
Carriers of large recurrent copy number variants (CNVs) have a higher risk of developing neurodevelopmental disorders. The 16p11.2 distal CNV predisposes carriers to e.g., autism spectrum disorder and schizophrenia. We compared subcortical brain volumes of 12 16p11.2 distal deletion and 12 duplication carriers to 6882 non-carriers from the large-scale brain Magnetic Resonance Imaging collaboration, ENIGMA-CNV. After stringent CNV calling procedures, and standardized FreeSurfer image analysis, we found negative dose-response associations with copy number on intracranial volume and on regional caudate, pallidum and putamen volumes (β = −0.71 to −1.37; P < 0.0005). In an independent sample, consistent results were obtained, with significant effects in the pallidum (β = −0.95, P = 0.0042). The two data sets combined showed significant negative dose-response for the accumbens, caudate, pallidum, putamen and ICV (P = 0.0032, 8.9 × 10−6, 1.7 × 10−9, 3.5 × 10−12 and 1.0 × 10−4, respectively). Full scale IQ was lower in both deletion and duplication carriers compared to non-carriers. This is the first brain MRI study of the impact of the 16p11.2 distal CNV, and we demonstrate a specific effect on subcortical brain structures, suggesting a neuropathological pattern underlying the neurodevelopmental syndromes. -
Speed, L. J., & Majid, A. (2020). Grounding language in the neglected senses of touch, taste, and smell. Cognitive Neuropsychology, 37(5-6), 363-392. doi:10.1080/02643294.2019.1623188.
Abstract
Grounded theories hold that sensorimotor activation is critical to language processing. Such theories have focused predominantly on the dominant senses of sight and hearing. Relatively fewer studies have assessed mental simulation within touch, taste, and smell, even though they are critically implicated in communication for important domains, such as health and wellbeing. We review work that sheds light on whether perceptual activation from these lesser-studied modalities contributes to meaning in language. We critically evaluate data from behavioural, imaging, and cross-cultural studies. We conclude that evidence for sensorimotor simulation in touch, taste, and smell is weak. Comprehending language related to these senses may instead rely on simulation of emotion, as well as crossmodal simulation of the “higher” senses of vision and audition. Overall, the data suggest the need for a refinement of embodiment theories, as not all sensory modalities provide equally strong evidence for mental simulation. -
Spiteri, E., Konopka, G., Coppola, G., Bomar, J., Oldham, M., Ou, J., Vernes, S. C., Fisher, S. E., Ren, B., & Geschwind, D. (2007). Identification of the transcriptional targets of FOXP2, a gene linked to speech and language, in developing human brain. American Journal of Human Genetics, 81(6), 1144-1157. doi:10.1086/522237.
Abstract
Mutations in FOXP2, a member of the forkhead family of transcription factor genes, are the only known cause of developmental speech and language disorders in humans. To date, there are no known targets of human FOXP2 in the nervous system. The identification of FOXP2 targets in the developing human brain, therefore, provides a unique tool with which to explore the development of human language and speech. Here, we define FOXP2 targets in human basal ganglia (BG) and inferior frontal cortex (IFC) by use of chromatin immunoprecipitation followed by microarray analysis (ChIP-chip) and validate the functional regulation of targets in vitro. ChIP-chip identified 285 FOXP2 targets in fetal human brain; statistically significant overlap of targets in BG and IFC indicates a core set of 34 transcriptional targets of FOXP2. We identified targets specific to IFC or BG that were not observed in lung, suggesting important regional and tissue differences in FOXP2 activity. Many target genes are known to play critical roles in specific aspects of central nervous system patterning or development, such as neurite outgrowth, as well as plasticity. Subsets of the FOXP2 transcriptional targets are either under positive selection in humans or differentially expressed between human and chimpanzee brain. This is the first ChIP-chip study to use human brain tissue, making the FOXP2-target genes identified in these studies important to understanding the pathways regulating speech and language in the developing human brain. These data provide the first insight into the functional network of genes directly regulated by FOXP2 in human brain and, through evolutionary comparisons, highlight genes likely to be involved in the development of human higher-order cognitive processes. -
Stevens, M. E. (2007). Perceptual adaptation to phonological differences between language varieties. PhD Thesis, University of Ghent, Ghent.
-
Stevens, M. A., McQueen, J. M., & Hartsuiker, R. J. (2007). No lexically-driven perceptual adjustments of the [x]-[h] boundary. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1897-1900). Dudweiler: Pirrot.
Abstract
Listeners can make perceptual adjustments to phoneme categories in response to a talker who consistently produces a specific phoneme ambiguously. We investigate here whether this type of perceptual learning is also used to adapt to regional accent differences. Listeners were exposed to words produced by a Flemish talker whose realization of [x] or [h] was ambiguous (producing [x] like [h] is a property of the West-Flanders regional accent). Before and after exposure they categorized a [x]-[h] continuum. For both Dutch and Flemish listeners there was no shift of the categorization boundary after exposure to ambiguous sounds in [x]- or [h]-biasing contexts. The absence of a lexically-driven learning effect for this contrast may be because [h] is strongly influenced by coarticulation. As [h] is not stable across contexts, it may be futile to adapt its representation when new realizations are heard. -
Stewart, A., Holler, J., & Kidd, E. (2007). Shallow processing of ambiguous pronouns: Evidence for delay. Quarterly Journal of Experimental Psychology, 60, 1680-1696. doi:10.1080/17470210601160807.
-
Stivers, T., & Majid, A. (2007). Questioning children: Interactional evidence of implicit bias in medical interviews. Social Psychology Quarterly, 70(4), 424-441.
Abstract
Social psychologists have shown experimentally that implicit race bias can influence an individual's behavior. Implicit bias has been suggested to be more subtle and less subject to cognitive control than more explicit forms of racial prejudice. Little is known about how implicit bias is manifest in naturally occurring social interaction. This study examines the factors associated with physicians selecting children rather than parents to answer questions in pediatric interviews about routine childhood illnesses. Analysis of the data using a Generalized Linear Latent and Mixed Model demonstrates a significant effect of parent race and education on whether physicians select children to answer questions. Black children and Latino children of low-education parents are less likely to be selected to answer questions than their same-aged white peers, irrespective of education. One way that implicit bias manifests itself in naturally occurring interaction may be through the process of speaker selection during questioning. -
Stivers, T., Enfield, N. J., & Levinson, S. C. (2007). Person reference in interaction. In N. J. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 1-20). Cambridge: Cambridge University Press. -
Stivers, T. (2007). Prescribing under pressure: Parent-physician conversations and antibiotics. Oxford: Oxford University Press.
Abstract
This book examines parent-physician conversations in detail, showing how parents put pressure on doctors in largely covert ways, for instance through specific communication practices for explaining why they have brought their child to the doctor or for answering a history-taking question. This book also shows how physicians yield to this seemingly subtle pressure, evidencing that apparently small differences in wording have important consequences for diagnosis and treatment recommendations. Following parents' use of these interactional practices, physicians are more likely to make concessions, alter their diagnosis, or alter their treatment recommendation. This book also shows how small changes in the way physicians present their findings and recommendations can decrease parent pressure for antibiotics. This book carefully documents the important and observable link between micro social interaction and macro public health domains. -
Stivers, T. (2007). Alternative recognitionals in person reference. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 73-96). Cambridge: Cambridge University Press. -
Sumer, B., & Ozyurek, A. (2020). No effects of modality in development of locative expressions of space in signing and speaking children. Journal of Child Language, 47(6), 1101-1131. doi:10.1017/S0305000919000928.
Abstract
Linguistic expressions of locative spatial relations in sign languages are mostly visually-motivated representations of space involving mapping of entities and spatial relations between them onto the hands and the signing space. These are also morphologically complex forms. It is debated whether modality-specific aspects of spatial expressions modulate spatial language development differently in signing compared to speaking children. In a picture description task, we compared the use of locative expressions for containment, support and occlusion relations by deaf children acquiring Turkish Sign Language and hearing children acquiring Turkish (3;5-9;11 years). Unlike previous reports suggesting a boosting effect of iconicity, and/or a hindering effect of morphological complexity of the locative forms in sign languages, our results show similar developmental patterns for signing and speaking children's acquisition of these forms. Our results suggest the primacy of cognitive development guiding the acquisition of locative expressions by speaking and signing children. -
Swingley, D., & Aslin, R. N. (2007). Lexical competition in young children's word learning. Cognitive Psychology, 54(2), 99-132.
Abstract
In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition—inhibitory interaction among words in speech comprehension—can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word. -
Swingley, D. (2007). Lexical exposure and word-form encoding in 1.5-year-olds. Developmental Psychology, 43(2), 454-464. doi:10.1037/0012-1649.43.2.454.
Abstract
In this study, 1.5-year-olds were taught a novel word. Some children were familiarized with the word's phonological form before learning the word's meaning. Fidelity of phonological encoding was tested in a picture-fixation task using correctly pronounced and mispronounced stimuli. Only children with additional exposure in familiarization showed reduced recognition performance given slight mispronunciations relative to correct pronunciations; children with fewer exposures did not. Mathematical modeling of vocabulary exposure indicated that children may hear thousands of words frequently enough for accurate encoding. The results provide evidence compatible with partial failure of phonological encoding at 19 months of age, demonstrate that this limitation in learning does not always hinder word recognition, and show the value of infants' word-form encoding in early lexical development. -
Takashima, A., Nieuwenhuis, I. L. C., Rijpkema, M., Petersson, K. M., Jensen, O., & Fernández, G. (2007). Memory trace stabilization leads to large-scale changes in the retrieval network: A functional MRI study on associative memory. Learning & Memory, 14, 472-479. doi:10.1101/lm.605607.
Abstract
Spaced learning with time to consolidate leads to more stable memory traces. However, little is known about the neural correlates of trace stabilization, especially in humans. The present fMRI study contrasted retrieval activity of two well-learned sets of face-location associations, one learned in a massed style and tested on the day of learning (i.e., labile condition) and another learned in a spaced scheme over the course of one week (i.e., stabilized condition). Both sets of associations were retrieved equally well, but the retrieval of stabilized associations was faster and accompanied by large-scale changes in the network supporting retrieval. Cued recall of stabilized as compared with labile associations was accompanied by increased activity in the precuneus, the ventromedial prefrontal cortex, the bilateral temporal pole, and left temporo-parietal junction. Conversely, memory representational areas such as the fusiform gyrus for faces and the posterior parietal cortex for locations did not change their activity with stabilization. The changes in activation in the precuneus, which also showed increased connectivity with the fusiform area, are likely to be related to the spatial nature of our task. The activation increase in the ventromedial prefrontal cortex, on the other hand, might reflect a general function in stabilized memory retrieval. This area might succeed the hippocampus in linking distributed neocortical representations. -
Takashima, A., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2020). Speaking in the brain: The interaction between words and syntax in sentence production. Journal of Cognitive Neuroscience, 32(8), 1466-1483. doi:10.1162/jocn_a_01563.
Abstract
This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences that differed in verb argument structures (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudoverbs). The experiment consisted of 30 mini-blocks of six sentences each. Each mini-block started with an example for the type of sentence to be produced in that block. On each trial in the mini-blocks, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to sentences with pseudoverbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus, analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left posterior middle temporal gyrus/angular gyrus area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs leads to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical-syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been mainly based on single-word production, comprehension, and language processing in aphasia. -
Tan, Y., & Hagoort, P. (2020). Catecholaminergic modulation of semantic processing in sentence comprehension. Cerebral Cortex, 30(12), 6426-6443. doi:10.1093/cercor/bhaa204.
Abstract
Catecholamine (CA) function has been widely implicated in cognitive functions that are tied to the prefrontal cortex and striatal areas. The present study investigated the effects of methylphenidate, which is a CA agonist, on the electroencephalogram (EEG) response related to semantic processing using a double-blind, placebo-controlled, randomized, crossover, within-subject design. Forty-eight healthy participants read semantically congruent or incongruent sentences after receiving 20-mg methylphenidate or a placebo while their brain activity was monitored with EEG. To probe whether the catecholaminergic modulation is task-dependent, in one condition participants had to focus on comprehending the sentences, while in the other condition, they only had to attend to the font size of the sentence. The results demonstrate that methylphenidate has a task-dependent effect on semantic processing. Compared to placebo, when semantic processing was task-irrelevant, methylphenidate enhanced the detection of semantic incongruence as indexed by a larger N400 amplitude in the incongruent sentences; when semantic processing was task-relevant, methylphenidate induced a larger N400 amplitude in the semantically congruent condition, which was followed by a larger late positive complex effect. These results suggest that CA-related neurotransmitters influence language processing, possibly through the projections between the prefrontal cortex and the striatum, which contain many CA receptors. -
Ten Oever, S., Meierdierks, T., Duecker, F., De Graaf, T., & Sack, A. (2020). Phase-coded oscillatory ordering promotes the separation of closely matched representations to optimize perceptual discrimination. iScience, 23(7): 101282. doi:10.1016/j.isci.2020.101282.
Abstract
Low-frequency oscillations are proposed to be involved in separating neuronal representations belonging to different items. Although item-specific neuronal activity was found to cluster on different oscillatory phases, the influence of this mechanism on perception is unknown. Here, we investigated the perceptual consequences of neuronal item separation through oscillatory clustering. In an electroencephalographic experiment, participants categorized sounds parametrically varying in pitch, relative to an arbitrary pitch boundary. Pre-stimulus theta and alpha phase biased near-boundary sound categorization to one category or the other. Phase also modulated whether evoked neuronal responses contributed stronger to the fit of the sound envelope of one or another category. Intriguingly, participants with stronger oscillatory clustering (phase strongly biasing sound categorization) in the theta, but not alpha, range had steeper perceptual psychometric slopes (sharper sound category discrimination). These results indicate that neuronal sorting by phase directly influences subsequent perception and has a positive impact on discrimination performance.
Additional information
Supplemental Information -
Ten Oever, S., De Weerd, P., & Sack, A. T. (2020). Phase-dependent amplification of working memory content and performance. Nature Communications, 11: 1832. doi:10.1038/s41467-020-15629-7.
Abstract
Successful working memory performance has been related to oscillatory mechanisms operating in low-frequency ranges. Yet, their mechanistic interaction with the distributed neural activity patterns representing the content of the memorized information remains unclear. Here, we record EEG during a working memory retention interval, while a task-irrelevant, high-intensity visual impulse stimulus is presented to boost the read-out of distributed neural activity related to the content held in working memory. Decoding of this activity with a linear classifier reveals significant modulations of classification accuracy by oscillatory phase in the theta/alpha ranges at the moment of impulse presentation. Additionally, behavioral accuracy is highest at the phases showing maximized decoding accuracy. At those phases, behavioral accuracy is higher in trials with the impulse compared to no-impulse trials. This constitutes the first evidence in humans that working memory information is maximized within limited phase ranges, and that phase-selective, sensory impulse stimulation can improve working memory. -
Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernández, G. (2007). Probing the neural correlates of associative memory formation: A parametrically analyzed event-related functional MRI study. Brain Research, 1142, 159-168. doi:10.1016/j.brainres.2007.01.040.
Abstract
The medial temporal lobe (MTL) is crucial for declarative memory formation, but the function of its subcomponents in associative memory formation remains controversial. Most functional imaging studies on this topic are based on a stepwise approach comparing a condition with and one without associative encoding. Extending this approach we applied additionally a parametric analysis by varying the amount of associative memory formation. We found a hippocampal subsequent memory effect of almost similar magnitude regardless of the amount of associations formed. By contrast, subsequent memory effects in rhinal and parahippocampal cortices were parametrically and positively modulated by the amount of associations formed. Our results indicate that the parahippocampal region supports associative memory formation as tested here and the hippocampus adds a general mnemonic operation. This pattern of results might suggest a new interpretation. Instead of having either a fixed division of labor between the hippocampus (associative memory formation) and the rhinal cortex (non-associative memory formation) or a functionally unitary MTL system, in which all substructures are contributing to memory formation in a similar way, we propose that the location where associations are formed within the MTL depends on the kind of associations bound: If visual single-dimension associations, as used here, can already be integrated within the parahippocampal region, the hippocampus might add a general purpose mnemonic operation only. In contrast, if associations have to be formed across widely distributed neocortical representations, the hippocampus may provide a binding operation in order to establish a coherent memory. -
Teng, X., Ma, M., Yang, J., Blohm, S., Cai, Q., & Tian, X. (2020). Constrained structure of ancient Chinese poetry facilitates speech content grouping. Current Biology, 30, 1299-1305. doi:10.1016/j.cub.2020.01.059.
Abstract
Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage [1, 2]; its poetic genres impose unique combinatory constraints on linguistic elements [3]. How does the constrained poetic structure facilitate speech segmentation when common linguistic [4, 5, 6, 7, 8] and statistical cues [5, 9] are unreliable to listeners in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while conducting magnetoencephalography (MEG) recording. We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. Unprecedentedly, we found a phase precession phenomenon indicating predictive processes of speech segmentation—the neural phase advanced faster after listeners acquired knowledge of incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language.
Additional information
Supplemental Information -
Ter Bekke, M., Drijvers, L., & Holler, J. (2020). The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech. In Proceedings of the 7th GESPIN - Gesture and Speech in Interaction Conference. Stockholm: KTH Royal Institute of Technology.
Abstract
In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g. gestures) to facilitate language processing. It has been suggested that one way through which this facilitation may happen is prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during natural conversation.
In a corpus of Dutch conversations, we annotated hand gestures that represent semantic information and occurred during questions, and the word(s) which corresponded most closely to the gesturally depicted meaning. Thus, we tested whether representational gestures temporally precede their lexical affiliates. Further, to see whether preceding gestures may indeed facilitate language processing, we asked whether the gesture-speech asynchrony predicts the response time to the question the gesture is part of.
Gestures and their strokes (the most meaningful movement component) indeed preceded the corresponding lexical information, thus demonstrating their predictive potential. However, while questions with gestures received faster responses than questions without, there was no evidence that questions with larger gesture-speech asynchronies received faster responses. These results suggest that gestures indeed have the potential to facilitate predictive language processing, but further analyses on larger datasets are needed to test for links between asynchrony and processing advantages. -
Ter Hark, S. E., Jamain, S., Schijven, D., Lin, B. D., Bakker, M. K., Boland-Auge, A., Deleuze, J.-F., Troudet, R., Malhotra, A. K., Gülöksüz, S., Vinkers, C. H., Ebdrup, B. H., Kahn, R. S., Leboyer, M., & Luykx, J. J. (2020). A new genetic locus for antipsychotic-induced weight gain: A genome-wide study of first-episode psychosis patients using amisulpride (from the OPTiMiSE cohort). Journal of Psychopharmacology, 34(5), 524-531. doi:10.1177/0269881120907972.
Abstract
Background: Antipsychotic-induced weight gain is a common and debilitating side effect of antipsychotics. Although genome-wide association studies of antipsychotic-induced weight gain have been performed, few genome-wide loci have been discovered. Moreover, these genome-wide association studies have included a wide variety of antipsychotic compounds. Aims: We aim to gain more insight in the genomic loci affecting antipsychotic-induced weight gain. Given the variable pharmacological properties of antipsychotics, we hypothesized that targeting a single antipsychotic compound would provide new clues about genomic loci affecting antipsychotic-induced weight gain. Methods: All subjects included for this genome-wide association study (n=339) were first-episode schizophrenia spectrum disorder patients treated with amisulpride and were minimally medicated (defined as antipsychotic use <2 weeks in the previous year and/or <6 weeks lifetime). Weight gain was defined as the increase in body mass index from before until approximately 1 month after amisulpride treatment. Results: Our genome-wide association analyses for antipsychotic-induced weight gain yielded one genome-wide significant hit (rs78310016; β=1.05; p=3.66 × 10−8; n=206) in a locus not previously associated with antipsychotic-induced weight gain or body mass index. Minor allele carriers had an odds ratio of 3.98 (p=1.0 × 10−3) for clinically meaningful antipsychotic-induced weight gain (≥7% of baseline weight). In silico analysis elucidated a chromatin interaction with 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1. In an attempt to replicate single-nucleotide polymorphisms previously associated with antipsychotic-induced weight gain, we found none were associated with amisulpride-induced weight gain. Conclusion: Our findings suggest the involvement of rs78310016 and possibly 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1 in antipsychotic-induced weight gain.
In line with the unique binding profile of this atypical antipsychotic, our findings furthermore hint that biological mechanisms underlying amisulpride-induced weight gain differ from antipsychotic-induced weight gain by other atypical antipsychotics.Additional information
Supplementary_Figures_and_Tables_Optimise_GWAS.pdf -
Terband, H., Rodd, J., & Maas, E. (2020). Testing hypotheses about the underlying deficit of Apraxia of Speech (AOS) through computational neural modelling with the DIVA model. International Journal of Speech-Language Pathology, 22(4), 475-486. doi:10.1080/17549507.2019.1669711.
Abstract
Purpose: A recent behavioural experiment featuring a noise masking paradigm suggests that Apraxia of Speech (AOS) reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts. The present study set out to validate the interpretation of AOS as a possible feedforward impairment using computational neural modelling with the DIVA (Directions Into Velocities of Articulators) model.
Method: In a series of computational simulations with the DIVA model featuring a noise-masking paradigm mimicking the behavioural experiment, we investigated the effect of a feedforward, feedback, feedforward + feedback, and an upper motor neuron dysarthria impairment on average vowel spacing and dispersion in the production of six /bVt/ speech targets.
Result: The simulation results indicate that the output of the model with the simulated feedforward deficit best resembled the group findings for the human speakers with AOS.
Conclusion: These results provide support to the interpretation of the human observations, corroborating the notion that AOS can be conceptualised as a deficit in feedforward control. -
Terporten, R. (2020). The power of context: How linguistic contextual information shapes brain dynamics during sentence processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Terrill, A. (2007). [Review of ‘Andrew Pawley, Robert Attenborough, Jack Golson, and Robin Hide, eds. 2005. Papuan pasts: Cultural, linguistic and biological histories of Papuan-speaking people’]. Oceanic Linguistics, 46(1), 313-321. doi:10.1353/ol.2007.0025.
-
Thompson, B., Raviv, L., & Kirby, S. (2020). Complexity can be maintained in small populations: A model of lexical variability in emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 440-442). Nijmegen: The Evolution of Language Conferences. -
Thompson, P. M., Jahanshad, N., Ching, C. R. K., Salminen, L. E., Thomopoulos, S. I., Bright, J., Baune, B. T., Bertolín, S., Bralten, J., Bruin, W. B., Bülow, R., Chen, J., Chye, Y., Dannlowski, U., De Kovel, C. G. F., Donohoe, G., Eyler, L. T., Faraone, S. V., Favre, P., Filippi, C. A., Frodl, T., Garijo, D., Gil, Y., Grabe, H. J., Grasby, K. L., Hajek, T., Han, L. K. M., Hatton, S. N., Hilbert, K., Ho, T. C., Holleran, L., Homuth, G., Hosten, N., Houenou, J., Ivanov, I., Jia, T., Kelly, S., Klein, M., Kwon, J. S., Laansma, M. A., Leerssen, J., Lueken, U., Nunes, A., O'Neill, J., Opel, N., Piras, F., Piras, F., Postema, M., Pozzi, E., Shatokhina, N., Soriano-Mas, C., Spalletta, G., Sun, D., Teumer, A., Tilot, A. K., Tozzi, L., Van der Merwe, C., Van Someren, E. J. W., Van Wingen, G. A., Völzke, H., Walton, E., Wang, L., Winkler, A. M., Wittfeld, K., Wright, M. J., Yun, J.-Y., Zhang, G., Zhang-James, Y., Adhikari, B. M., Agartz, I., Aghajani, M., Aleman, A., Althoff, R. R., Altmann, A., Andreassen, O. A., Baron, D. A., Bartnik-Olson, B. L., Bas-Hoogendam, J. M., Baskin-Sommers, A. R., Bearden, C. E., Berner, L. A., Boedhoe, P. S. W., Brouwer, R. M., Buitelaar, J. K., Caeyenberghs, K., Cecil, C. A. M., Cohen, R. A., Cole, J. H., Conrod, P. J., De Brito, S. A., De Zwarte, S. M. C., Dennis, E. L., Desrivieres, S., Dima, D., Ehrlich, S., Esopenko, C., Fairchild, G., Fisher, S. E., Fouche, J.-P., Francks, C., Frangou, S., Franke, B., Garavan, H. P., Glahn, D. C., Groenewold, N. A., Gurholt, T. P., Gutman, B. A., Hahn, T., Harding, I. H., Hernaus, D., Hibar, D. P., Hillary, F. G., Hoogman, M., Hulshoff Pol, H. E., Jalbrzikowski, M., Karkashadze, G. A., Klapwijk, E.
T., Knickmeyer, R. C., Kochunov, P., Koerte, I. K., Kong, X., Liew, S.-L., Lin, A. P., Logue, M. W., Luders, E., Macciardi, F., Mackey, S., Mayer, A. R., McDonald, C. R., McMahon, A. B., Medland, S. E., Modinos, G., Morey, R. A., Mueller, S. C., Mukherjee, P., Namazova-Baranova, L., Nir, T. M., Olsen, A., Paschou, P., Pine, D. S., Pizzagalli, F., Rentería, M. E., Rohrer, J. D., Sämann, P. G., Schmaal, L., Schumann, G., Shiroishi, M. S., Sisodiya, S. M., Smit, D. J. A., Sønderby, I. E., Stein, D. J., Stein, J. L., Tahmasian, M., Tate, D. F., Turner, J. A., Van den Heuvel, O. A., Van der Wee, N. J. A., Van der Werf, Y. D., Van Erp, T. G. M., Van Haren, N. E. M., Van Rooij, D., Van Velzen, L. S., Veer, I. M., Veltman, D. J., Villalon-Reina, J. E., Walter, H., Whelan, C. D., Wilde, E. A., Zarei, M., Zelman, V., & Enigma Consortium (2020). ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries. Translational Psychiatry, 10(1): 100. doi:10.1038/s41398-020-0705-1.
Abstract
This review summarizes the last decade of work by the ENIGMA (Enhancing NeuroImaging Genetics through Meta Analysis) Consortium, a global alliance of over 1400 scientists across 43 countries, studying the human brain in health and disease. Building on large-scale genetic studies that discovered the first robustly replicated genetic loci associated with brain metrics, ENIGMA has diversified into over 50 working groups (WGs), pooling worldwide data and expertise to answer fundamental questions in neuroscience, psychiatry, neurology, and genetics. Most ENIGMA WGs focus on specific psychiatric and neurological conditions; other WGs study normal variation due to sex and gender differences, or development and aging; still other WGs develop methodological pipelines and tools to facilitate harmonized analyses of “big data” (i.e., genetic and epigenetic data, multimodal MRI, and electroencephalography data). These international efforts have yielded the largest neuroimaging studies to date in schizophrenia, bipolar disorder, major depressive disorder, post-traumatic stress disorder, substance use disorders, obsessive-compulsive disorder, attention-deficit/hyperactivity disorder, autism spectrum disorders, epilepsy, and 22q11.2 deletion syndrome. More recent ENIGMA WGs have formed to study anxiety disorders, suicidal thoughts and behavior, sleep and insomnia, eating disorders, irritability, brain injury, antisocial personality and conduct disorder, and dissociative identity disorder. Here, we summarize the first decade of ENIGMA’s activities and ongoing projects, and describe the successes and challenges encountered along the way. 
We highlight the advantages of collaborative large-scale coordinated data analyses for testing reproducibility and robustness of findings, offering the opportunity to identify brain systems involved in clinical syndromes across diverse samples and associated genetic, environmental, demographic, cognitive, and psychosocial factors.Additional information
41398_2020_705_MOESM1_ESM.pdf -
Thompson, P. A., Bishop, D. V. M., Eising, E., Fisher, S. E., & Newbury, D. F. (2020). Generalized Structured Component Analysis in candidate gene association studies: Applications and limitations [version 2; peer review: 3 approved]. Wellcome Open Research, 4: 142. doi:10.12688/wellcomeopenres.15396.2.
Abstract
Background: Generalized Structured Component Analysis (GSCA) is a component-based alternative to traditional covariance-based structural equation modelling. This method has previously been applied to test for association between candidate genes and clinical phenotypes, contrasting with traditional genetic association analyses that adopt univariate testing of many individual single nucleotide polymorphisms (SNPs) with correction for multiple testing.
Methods: We first evaluate the ability of the GSCA method to replicate two previous findings from a genetics association study of developmental language disorders. We then present the results of a simulation study to test the validity of the GSCA method under more restrictive data conditions, using smaller sample sizes and larger numbers of SNPs than have previously been investigated. Finally, we compare GSCA performance against univariate association analysis conducted using PLINK v1.9.
Results: Results from simulations show that power to detect effects depends not just on sample size, but also on the ratio of SNPs with effect to number of SNPs tested within a gene. Inclusion of many SNPs in a model dilutes true effects.
Conclusions: We propose that GSCA is a useful method for replication studies, when candidate SNPs have been identified, but should not be used for exploratory analysis.Additional information
data via OSF -
Thorin, J. (2020). Can you hear what you cannot say? The interactions of speech perception and production during non-native phoneme learning. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Todorova, L., & Neville, D. A. (2020). Associative and identity words promote the speed of visual categorization: A hierarchical drift diffusion account. Frontiers in Psychology, 11: 955. doi:10.3389/fpsyg.2020.00955.
Abstract
Words can either boost or hinder the processing of visual information, which can lead to facilitation or interference of the behavioral response. We investigated the stage (response execution or target processing) of verbal interference/facilitation in the response priming paradigm with a gender categorization task. Participants in our study were asked to judge whether the presented stimulus was a female or male face that was briefly preceded by a gender word either congruent (prime: “man,” target: “man”), incongruent (prime: “woman,” target: “man”) or neutral (prime: “day,” target: “man”) with respect to the face stimulus. We investigated whether related word-picture pairs resulted in faster reaction times in comparison to the neutral word-picture pairs (facilitation) and whether unrelated word-picture pairs resulted in slower reaction times in comparison to neutral word-picture pairs (interference). We further examined whether these effects (if any) map onto response conflict or aspects of target processing. In addition, identity (“man,” “woman”) and associative (“tie,” “dress”) primes were introduced to investigate the cognitive mechanisms of semantic and Stroop-like effects in response priming (introduced respectively by associations and identity words). We analyzed responses and reaction times using the drift diffusion model to examine the effect of facilitation and/or interference as a function of the prime type. We found that, regardless of prime type, words introduce a facilitatory effect, which maps onto the processes of visual attention and response execution. -
Todorova, L., Neville, D. A., & Piai, V. (2020). Lexical-semantic and executive deficits revealed by computational modelling: A drift diffusion model perspective. Neuropsychologia, 146: 107560. doi:10.1016/j.neuropsychologia.2020.107560.
Abstract
Flexible language use requires coordinated functioning of two systems: conceptual representations and control. The interaction between the two systems can be observed when people are asked to match a word to a picture. Participants are slower and less accurate for related word-picture pairs (word: banana, picture: apple) relative to unrelated pairs (word: banjo, picture: apple). The mechanism underlying interference, however, is still unclear. We analyzed word-picture matching (WPM) performance of patients with stroke-induced lesions to the left-temporal (N = 5) or left-frontal cortex (N = 5) and matched controls (N = 12) using the drift diffusion model (DDM). In the DDM, the process of making a decision is described as the stochastic accumulation of evidence towards a response. The parameters of the DDM that characterize this process are decision threshold, drift rate, starting point and non-decision time, each of which bears cognitive interpretability. We compared the estimated model parameters from controls and patients to investigate the mechanisms of WPM interference. WPM performance in controls was explained by the amount of information needed to make a decision (decision threshold): a higher threshold was associated with related word-picture pairs relative to unrelated ones. No difference was found in the quality of the evidence (drift rate). This suggests an executive rather than semantic mechanism underlying WPM interference. Patients with temporal and with frontal lesions exhibited both an increased drift rate and an increased decision threshold for unrelated pairs relative to related ones. Left-frontal and temporal damage affected the computations required by WPM similarly, resulting in systematic deficits across lexical-semantic memory and executive functions. These results support a diverse but interactive role of lexical-semantic memory and semantic control mechanisms.Additional information
supplementary material -
Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at infant pointing. Child Development, 78, 705-722. doi:10.1111/j.1467-8624.2007.01025.x.
Abstract
The current article proposes a new theory of infant pointing involving multiple layers of intentionality and shared intentionality. In the context of this theory, evidence is presented for a rich interpretation of prelinguistic communication, that is, one that posits that when 12-month-old infants point for an adult they are in some sense trying to influence her mental states. Moreover, evidence is also presented for a deeply social view in which infant pointing is best understood—on many levels and in many ways—as depending on uniquely human skills and motivations for cooperation and shared intentionality (e.g., joint intentions and attention with others). Children's early linguistic skills are built on this already existing platform of prelinguistic communication. -
Tourtouri, E. N. (2020). Rational redundancy in situated communication. PhD Thesis, Saarland University, Saarbrücken.
Abstract
Contrary to the Gricean maxims of Quantity (Grice, 1975), it has been repeatedly shown that speakers often include redundant information in their utterances (over- specifications). Previous research on referential communication has long debated whether this redundancy is the result of speaker-internal or addressee-oriented processes, while it is also unclear whether referential redundancy hinders or facilitates comprehension. We present a bounded-rational account of referential redundancy, according to which any word in an utterance, even if it is redundant, can be beneficial to comprehension, to the extent that it facilitates the reduction of listeners’ uncertainty regarding the target referent in a co-present visual scene. Information-theoretic metrics, such as Shannon’s entropy (Shannon, 1948), were employed in order to quantify this uncertainty in bits of information, and gain an estimate of the cognitive effort related to referential processing. Under this account, speakers may, therefore, utilise redundant adjectives in order to reduce the visually-determined entropy (and thereby their listeners’ cognitive effort) more uniformly across their utterances. In a series of experiments, we examined both the comprehension and the production of over-specifications in complex visual contexts. Our findings are in line with the bounded-rational account. 
Specifically, we present evidence that: (a) in view of complex visual scenes, listeners’ processing and identification of the target referent may be facilitated by the use of redundant adjectives, as well as by a more uniform reduction of uncertainty across the utterance, and (b) that, while both speaker-internal and addressee-oriented processes are at play in the production of over-specifications, listeners’ processing concerns may also influence the encoding of redundant adjectives, at least for some speakers, who encode redundant adjectives more frequently when these adjectives contribute to a more uniform reduction of referential entropy. -
Trilsbeek, P., & Wittenburg, P. (2007). "Los acervos lingüísticos digitales y sus desafíos". In J. Haviland, & F. Farfán (Eds.), Bases de la documentación lingüística (pp. 359-385). Mexico: Instituto Nacional de Lenguas Indígenas.
Abstract
This chapter describes the challenges that modern digital language archives are faced with. One essential aspect of such an archive is to have a rich metadata catalog such that the archived resources can be easily discovered. The challenge of the archive is to obtain these rich metadata descriptions from the depositors without creating too much overhead for them. The rapid changes in storage technology, file formats and encoding standards make it difficult to build a long-lasting repository, therefore archives need to be set up in such a way that a straightforward and automated migration process to newer technology is possible whenever certain technology becomes obsolete. Other problems arise from the fact that there are many different groups of users of the archive, each of them with their own specific expectations and demands. Often conflicts exist between the requirements for different purposes of the archive, e.g. between long-term preservation of the data versus direct access to the resources via the web. The task of the archive is to come up with a technical solution that works well for most usage scenarios. -
Trujillo, J. P. (2020). Movement speaks for itself: The kinematic and neural dynamics of communicative action and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Trujillo, J. P., Simanova, I., Bekkering, H., & Ozyurek, A. (2020). The communicative advantage: How kinematic signaling supports semantic comprehension. Psychological Research, 84, 1897-1911. doi:10.1007/s00426-019-01198-y.
Abstract
Humans are unique in their ability to communicate information through representational gestures which visually simulate an action (e.g., moving hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. If and how this modulation influences addressees’ comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more- (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of the actors’ faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance in more- compared to less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. Results provide insights into mutual understanding processes as well as creating artificial communicative agents.Additional information
Supplementary material -
Trujillo, J. P., Simanova, I., Ozyurek, A., & Bekkering, H. (2020). Seeing the unexpected: How brains read communicative intent through kinematics. Cerebral Cortex, 30(3), 1056-1067. doi:10.1093/cercor/bhz148.
Abstract
Social interaction requires us to recognize subtle cues in behavior, such as kinematic differences in actions and gestures produced with different social intentions. Neuroscientific studies indicate that the putative mirror neuron system (pMNS) in the premotor cortex and mentalizing system (MS) in the medial prefrontal cortex support inferences about contextually unusual actions. However, little is known regarding the brain dynamics of these systems when viewing communicatively exaggerated kinematics. In an event-related functional magnetic resonance imaging experiment, 28 participants viewed stick-light videos of pantomime gestures, recorded in a previous study, which contained varying degrees of communicative exaggeration. Participants made either social or nonsocial classifications of the videos. Using participant responses and pantomime kinematics, we modeled the probability of each video being classified as communicative. Interregion connectivity and activity were modulated by kinematic exaggeration, depending on the task. In the Social Task, communicativeness of the gesture increased activation of several pMNS and MS regions and modulated top-down coupling from the MS to the pMNS, but engagement of the pMNS and MS was not found in the nonsocial task. Our results suggest that expectation violations can be a key cue for inferring communicative intention, extending previous findings from wholly unexpected actions to more subtle social signaling. -
Tsoukala, C., Frank, S. L., Van den Bosch, A., Kroff, J. V., & Broersma, M. (2020). Simulating Spanish-English code-switching: El modelo está generating code-switches. In E. Chersoni, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (pp. 20-29). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL).
Abstract
Multilingual speakers are able to switch from one language to the other (“code-switch”) between or within sentences. Because the underlying cognitive mechanisms are not well understood, in this study we use computational cognitive modeling to shed light on the process of code-switching. We employed the Bilingual Dual-path model, a Recurrent Neural Network of bilingual sentence production (Tsoukala et al., 2017) and simulated sentence production in simultaneous Spanish-English bilinguals. Our first goal was to investigate whether the model would code-switch without being exposed to code-switched training input. The model indeed produced code-switches even without any exposure to such input and the patterns of code-switches are in line with earlier linguistic work (Poplack, 1980). The second goal of this study was to investigate an auxiliary phrase asymmetry that exists in Spanish-English code-switched production. Using this cognitive model, we examined a possible cause for this asymmetry. To our knowledge, this is the first computational cognitive model that aims to simulate code-switched sentence production. -
Tsuji, S., Cristia, A., Frank, M. C., & Bergmann, C. (2020). Addressing publication bias in Meta-Analysis: Empirical findings from community-augmented meta-analyses of infant language development. Zeitschrift für Psychologie, 228(1), 50-61. doi:10.1027/2151-2604/a000393.
Abstract
Meta-analyses are an indispensable research synthesis tool for characterizing bodies of literature and advancing theories. One important open question concerns the inclusion of unpublished data into meta-analyses. Finding such studies can be effortful, but their exclusion potentially leads to consequential biases like overestimation of a literature’s mean effect. We address two questions about unpublished data using MetaLab, a collection of community-augmented meta-analyses focused on developmental psychology. First, we assess to what extent MetaLab datasets include gray literature, and by what search strategies they are unearthed. We find that an average of 11% of datapoints are from unpublished literature; standard search strategies like database searches, complemented with individualized approaches like including authors’ own data, contribute the majority of this literature. Second, we analyze the effect of including versus excluding unpublished literature on estimates of effect size and publication bias, and find this decision does not affect outcomes. We discuss lessons learned and implications.Additional information
Link to Dataset on PsychArchives -
Tufvesson, S. (2007). Expressives. In A. Majid (Ed.), Field Manual Volume 10 (pp. 53-58). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492919.
Additional information
2007_Expressives_audio_files.zip -
Tuinman, A., Mitterer, H., & Cutler, A. (2007). Speakers differentiate English intrusive and onset /r/, but L2 listeners do not. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1905-1908). Dudweiler: Pirrot.
Abstract
We investigated whether non-native listeners can exploit phonetic detail in recognizing potentially ambiguous utterances, as native listeners can [6, 7, 8, 9, 10]. Due to the phenomenon of intrusive /r/, the English phrase extra ice may sound like extra rice. A production study indicates that the intrusive /r/ can be distinguished from the onset /r/ in rice, as it is phonetically weaker. In two cross-modal identity priming studies, however, we found no conclusive evidence that Dutch learners of English are able to make use of this difference. Instead, auditory primes such as extra rice and extra ice with onset and intrusive /r/s activate both types of targets such as ice and rice. This supports the notion of spurious lexical activation in L2 perception. -
Tulling, M., Law, R., Cournane, A., & Pylkkänen, L. (2020). Neural correlates of modal displacement and discourse-updating under (un)certainty. eNeuro, 8(1): 0290-20.2020. doi:10.1523/ENEURO.0290-20.2020.
Abstract
A hallmark of human thought is the ability to think about not just the actual world, but also about alternative ways the world could be. One way to study this contrast is through language. Language has grammatical devices for expressing possibilities and necessities, such as the words might or must. With these devices, called “modal expressions,” we can study the actual vs. possible contrast in a highly controlled way. While factual utterances such as “There is a monster under my bed” update the here-and-now of a discourse model, a modal version of this sentence, “There might be a monster under my bed,” displaces from the here-and-now and merely postulates a possibility. We used magnetoencephalography (MEG) to test whether the processes of discourse updating and modal displacement dissociate in the brain. Factual and modal utterances were embedded in short narratives, and across two experiments, factual expressions increased the measured activity over modal expressions. However, the localization of the increase appeared to depend on perspective: signal localizing in right temporo-parietal areas increased when updating others’ beliefs, while frontal medial areas seem sensitive to updating one’s own beliefs. The presence of modal displacement did not elevate MEG signal strength in any of our analyses. In sum, this study identifies potential neural signatures of the process by which facts get added to our mental representation of the world.Additional information
Link to Preprint on BioRxiv -
Uhlmann, M. (2020). Neurobiological models of sentence processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Ullas, S., Formisano, E., Eisner, F., & Cutler, A. (2020). Interleaved lexical and audiovisual information can retune phoneme boundaries. Attention, Perception & Psychophysics, 82, 2018-2026. doi:10.3758/s13414-019-01961-8.
Abstract
To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories using perceptual learning. Such adjustments can draw on lexical information in surrounding speech, or on visual cues via speech-reading. In the present study, listeners proved they were able to flexibly adjust the boundary between two plosive/stop consonants, /p/-/t/, using both lexical and speech-reading information and given the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and resulting effects were comparable to results from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues were able to induce effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were relatively weaker compared to audiovisual recalibration, this discrepancy could reflect how lexical retuning may be more suitable for adapting to speakers than to environments. Nonetheless, the presence of the lexical retuning effects suggests that it may be invoked at a faster rate than previously seen. In general, this technique has further illuminated the robustness of adaptability in speech perception, and offers the potential to enable further comparisons across differing forms of perceptual learning. -
Ullas, S., Formisano, E., Eisner, F., & Cutler, A. (2020). Audiovisual and lexical cues do not additively enhance perceptual adaptation. Psychonomic Bulletin & Review, 27, 707-715. doi:10.3758/s13423-020-01728-5.
Abstract
When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, also known as recalibration or perceptual retuning. With retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories to adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated in large part separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who only received audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment were either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.Additional information
Data and materials