Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2015). Do we perceive others better than ourselves? A perceptual benefit for noise-vocoded speech produced by an average speaker. PLoS One, 10(7): e0129731. doi:10.1371/journal.pone.0129731.
Abstract
In different tasks involving action perception, performance has been found to be facilitated
when the presented stimuli were produced by the participants themselves rather than by
another participant. These results suggest that the same mental representations are
accessed during both production and perception. However, with regard to spoken word perception,
evidence also suggests that listeners’ representations for speech reflect the input
from their surrounding linguistic community rather than their own idiosyncratic productions.
Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners
to frame their interpretations of incoming speech signals with regard to speaker identity.
In order to determine whether word recognition evinces similar self-advantages as found in
action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech
representations accessed during perception of noise-vocoded speech are more reflective
of the input of the speech community, and hence that speech perception is not necessarily based on representations of one’s own speech. -
Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2017). Mapping the speech code: Cortical responses linking the perception and production of vowels. Frontiers in Human Neuroscience, 11: 161. doi:10.3389/fnhum.2017.00161.
Abstract
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation. -
Schuerman, W. L., Nagarajan, S., McQueen, J. M., & Houde, J. (2017). Sensorimotor adaptation affects perceptual compensation for coarticulation. The Journal of the Acoustical Society of America, 141(4), 2693-2704. doi:10.1121/1.4979791.
Abstract
A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this effect depends on actual production experience. This study investigates whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation. Specifically, it tests whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation is dependent on vocalic context. It was found that participants could be sorted into three groups based on whether they tended to oppose the direction of the shifted auditory feedback, to follow it, or a mixture of the two, and that these articulatory responses, not the shifted feedback the participants heard, correlated with changes in perception. These results indicate that sensorimotor adaptation to altered feedback can affect the perception of unaltered yet coarticulatorily-dependent speech sounds, suggesting a modulatory role of sensorimotor experience on speech perception. -
Schuerman, W. L. (2017). Sensorimotor experience in speech perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Schuller, B., Steidl, S., Batliner, A., Bergelson, E., Krajewski, J., Janott, C., Amatuni, A., Casillas, M., Seidl, A., Soderstrom, M., Warlaumont, A. S., Hidalgo, G., Schnieder, S., Heiser, C., Hohenhorst, W., Herzog, M., Schmitt, M., Qian, K., Zhang, Y., Trigeorgis, G., Tzirakis, P., & Zafeiriou, S. (2017). The INTERSPEECH 2017 computational paralinguistics challenge: Addressee, cold & snoring. In Proceedings of Interspeech 2017 (pp. 3442-3446). doi:10.21437/Interspeech.2017-43.
Abstract
The INTERSPEECH 2017 Computational Paralinguistics Challenge addresses three different problems for the first time in a research competition under well-defined conditions: In the Addressee sub-challenge, it has to be determined whether speech produced by an adult is directed towards another adult or towards a child; in the Cold sub-challenge, speech under cold has to be told apart from ‘healthy’ speech; and in the Snoring sub-challenge, four different types of snoring have to be classified. In this paper, we describe these sub-challenges, their conditions, and the baseline feature extraction and classifiers, which include data-learnt feature representations by end-to-end learning with convolutional and recurrent neural networks, and bag-of-audio-words for the first time in the challenge series. -
Sekine, K., Stam, G., Yoshioka, K., Tellier, M., & Capirci, O. (2015). Cross-linguistic views of gesture usage. Vigo International Journal of Applied Linguistics (VIAL), 12, 91-105.
Abstract
People have stereotypes about gesture usage. For instance, speakers in East Asia are not supposed to gesticulate, and it is believed that Italians gesticulate more than the British. Despite the prevalence of such views, studies that investigate these stereotypes are scarce. The present study examined people’s views on spontaneous gestures by collecting data from five different countries. A total of 363 undergraduate students from five countries (France, Italy, Japan, the Netherlands, and the USA) participated in this study. Data were collected through a two-part questionnaire. Part 1 asked participants to rate two characteristics of gesture, frequency and size, for 13 different languages. Part 2 asked them about their views on factors that might affect the production of gestures. The results showed that most participants in this study believe that Italian, Spanish, and American English speakers produce larger gestures more frequently than other language speakers. They also showed that each culture group, even within Europe, put weight on a slightly different aspect of gestures. -
Sekine, K., & Kita, S. (2015). Development of multimodal discourse comprehension: Cohesive use of space by gestures. Language, Cognition and Neuroscience, 30(10), 1245-1258. doi:10.1080/23273798.2015.1053814.
Abstract
This study examined how well 5-, 6-, and 10-year-olds and adults integrated information from spoken discourse with cohesive use of space in gesture, in comprehension. In Experiment 1, participants were presented with a combination of spoken discourse and a sequence of cohesive gestures, which consistently located each of the two protagonists in two distinct locations in gesture space. Participants were asked to select an interpretation of the final sentence that best matched the preceding spoken and gestural contexts. Adults and 10-year-olds performed better than 5-year-olds, who were at chance level. In Experiment 2, another group of 5-year-olds was presented with the same stimuli as in Experiment 1, except that the actor showed hand-held pictures instead of producing cohesive gestures. Unlike cohesive gestures, one set of pictures was self-explanatory and did not require integration with the concurrent speech to derive the referent. With these pictures, 5-year-olds performed nearly perfectly, and their performance with the identifiable pictures was significantly better than that with the unidentifiable pictures. These results suggest that young children fail to integrate spoken discourse and cohesive use of space in gestures because they cannot derive the referent of cohesive gestures from the local speech context. -
Sekine, K. (2017). Gestural hesitation reveals children’s competence on multimodal communication: Emergence of disguised adaptor. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 3113-3118). Austin, TX: Cognitive Science Society.
Abstract
Speakers sometimes modify their gestures during the process of production into adaptors such as hair touching or eye scratching. Such disguised adaptors are evidence that speakers can monitor their gestures. In this study, we investigated when and how disguised adaptors are first produced by children. Sixty elementary school children participated in this study (ten children in each age group; from 7 to 12 years old). They were instructed to watch a cartoon and retell it to their parents. The results showed that children did not produce disguised adaptors until the age of 8. Disguised adaptors accompany fluent speech until the children are 10 years old and accompany dysfluent speech until they reach 11 or 12 years of age. These results suggest that children start to monitor their gestures when they are 9 or 10 years old. Cognitive changes were considered as factors influencing the emergence of disguised adaptors. -
Sekine, K., Snowden, H., & Kita, S. (2015). The development of the ability to semantically integrate information in speech and iconic gesture in comprehension. Cognitive Science. doi:10.1111/cogs.12221.
Abstract
We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3- and 5-year-old children (Ramscar & Gitcho, 2007) regarding the ability to process two pieces of information simultaneously. In Experiment 1, 3-year-olds, 5-year-olds, and adults were presented with an iconic gesture, a spoken sentence, or a combination of the two on a computer screen, and they were instructed to select a photograph that best matched the message. The 3-year-olds did not integrate information in speech and gesture, but 5-year-olds and adults did. In Experiment 2, 3-year-old children were presented with the same speech and gesture as in Experiment 1, but produced live by an experimenter. When presented live, 3-year-olds could integrate speech and gesture. We concluded that development of the integration ability is a part of the broader developmental shift; however, live presentation facilitates the nascent integration ability in 3-year-olds. -
Sekine, K., & Kita, S. (2015). The parallel development of the form and meaning of two-handed gestures and linguistic information packaging within a clause in narrative. Open Linguistics, 1(1), 490-502. doi:10.1515/opli-2015-0015.
Abstract
We examined how two-handed gestures and speech with equivalent contents that are used in narrative develop during childhood. The participants were 40 native speakers of English consisting of four different age groups: 3-, 5-, 9-year-olds, and adults. A set of 10 video clips depicting motion events was used to elicit speech and gesture. There were two findings. First, two types of two-handed gestures showed different developmental changes: those with a single-handed stroke with a simultaneous hold increased with age, while those with a two-handed stroke decreased with age. Second, representational gesture and speech developed in parallel at the discourse level. More specifically, the ways in which information is packaged in a gesture and in a clause are similar for a given age group; that is, gesture and speech develop hand-in-hand. -
Sekine, K., & Kita, S. (2017). The listener automatically uses spatial story representations from the speaker's cohesive gestures when processing subsequent sentences without gestures. Acta Psychologica, 179, 89-95. doi:10.1016/j.actpsy.2017.07.009.
Abstract
This study examined spatial story representations created by speaker's cohesive gestures. Participants were presented with three-sentence discourse with two protagonists. In the first and second sentences, gestures consistently located the two protagonists in the gesture space: one to the right and the other to the left. The third sentence (without gestures) referred to one of the protagonists, and the participants responded with one of the two keys to indicate the relevant protagonist. The response keys were either spatially congruent or incongruent with the gesturally established locations for the two participants. Though the cohesive gestures did not provide any clue for the correct response, they influenced performance: the reaction time in the congruent condition was faster than that in the incongruent condition. Thus, cohesive gestures automatically establish spatial story representations and the spatial story representations remain activated in a subsequent sentence without any gesture. -
Senft, G. (2017). Absolute frames of spatial reference in Austronesian languages. Russian Journal of Linguistics, 21, 686-705. doi:10.22363/2312-9182-2017-21-4-686-705.
Abstract
This paper provides a brief survey of various absolute frames of spatial reference that can be observed in a number of Austronesian languages – with an emphasis on languages of the Oceanic subgroup. It is based on research into conceptions of space and systems of spatial reference that was initiated by the “space project” of the Cognitive Anthropology Research Group (now the Department of Language and Cognition) at the Max Planck Institute for Psycholinguistics and by my anthology “Referring to Space” (Senft 1997a; see Keller 2002: 250). The examples illustrating these different absolute frames of spatial reference reveal once more that earlier generalizations within the domain of “SPACE” were strongly biased by research on Indo-European languages; they also reveal how complex some of the absolute frames of spatial reference found in these languages are. The paper ends with a summary of Wegener’s (2002) preliminary typology of these absolute frames of spatial reference. -
Senft, G. (2017). Acquiring Kilivila Pragmatics - the Role of the Children's (Play-)Groups in the first 7 Years of their Lives on the Trobriand Islands in Papua New Guinea. Studies in Pragmatics, 19, 40-53.
Abstract
Trobriand children are breastfed until they can walk; then they are abruptly weaned, and the parents dramatically reduce the pervasive loving care that their children experienced before. The children have to find a place within the children’s groups in their villages. They learn to behave according to their community’s rules and regulations, which find their expression in forms of verbal and non-verbal behavior. They acquire their culture-specific pragmatics under the control of older members of their groups. The children’s “small republic” is the primary institution of verbal and cultural socialization. Attempts at parental education are confined to a minimum. -
Senft, G. (2017). "Control your emotions! If teasing provokes you, you've lost your face.." The Trobriand Islanders' control of their public display of emotions. In A. Storch (Ed.), Consensus and Dissent: Negotiating Emotion in the Public Space (pp. 59-80). Amsterdam: John Benjamins.
Abstract
Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea, has a rich inventory of terms - nouns, verbs, adjectives and idiomatic phrases and expressions - to precisely refer to, and to differentiate emotions and inner feelings. This paper describes how the Trobriand Islanders of Papua New Guinea deal with the public display of emotions. Forms of emotion control in public encounters are discussed and explained on the basis of ritual communication which pervades the Trobrianders' verbal and non-verbal behaviour. Especially highlighted is the Trobrianders' metalinguistic concept of "biga sopa" with its important role for emotion control in encounters that may run the risk of escalating from argument and conflict to aggression and violence. -
Senft, G. (2017). Imdeduya - Variants of a myth of love and hate from the Trobriand Islands of Papua New Guinea. Amsterdam: John Benjamins. doi:10.1075/clu.20.
Abstract
This volume presents five variants of the Imdeduya myth: two versions of the actual myth, a short story, a song, and John Kasaipwalova’s English poem “Sail the Midnight Sun”. This poem draws heavily on the Trobriand myth, which introduces the protagonists Imdeduya and Yolina and reports on Yolina’s intention to marry the girl so famous for her beauty, on his long journey to Imdeduya’s village, and on their tragic love story. The texts are compared with each other, with a final focus on the clash between orality and scripturality. Contrary to Kasaipwalova’s fixed poetic text, the oral Imdeduya versions reveal the variability characteristic of oral tradition. This variability opens up questions about the traditional stability and destabilization of oral literature, especially questions about the changing role of myth – and magic – in the Trobriand Islanders' society, which is becoming more and more integrated into the by now “literate” nation of Papua New Guinea. This e-book is available under the Creative Commons BY-NC-ND 4.0 license. -
Senft, G. (2017). Expressions for emotions - and inner feelings - in Kilivila, the language of the Trobriand Islanders: A descriptive and methodological critical essay. In N. Tersis, & P. Boyeldieu (Eds.), Le langage de l'emotion: Variations linguistiques et culturelles (pp. 349-376). Paris: Peeters.
Abstract
This paper reports on the results of my research on the lexical means Kilivila offers its speakers to refer to emotions and inner feelings. Data were elicited with 18 “Ekman’s faces” in which photos of the faces of one woman and two men illustrate the allegedly universal basic emotions (anger, disgust, fear, happiness, sadness, surprise) and with film stimuli staging standard emotions. The data are discussed on the basis of the following research questions:
* How “effable” are they, or do we observe ineffability – the difficulty of putting experiences into words – within the domain of emotions?
* Do consultants agree with one another in how they name emotions?
* Are facial expressions or situations better cues for labeling? -
Senft, G. (2015). Tales from the Trobriand Islands of Papua New Guinea: Psycholinguistic and anthropological linguistic analyses of tales told by Trobriand children and adults. Amsterdam: John Benjamins.
Abstract
This volume presents 22 tales from the Trobriand Islands told by children (boys between the age of 5 and 9 years) and adults. The monograph is motivated not only by the anthropological linguistic aim to present a broad and quite unique collection of tales with the thematic approach to illustrate which topics and themes constitute the content of the stories, but also by the psycholinguistic and textlinguistic questions of how children acquire linearization and other narrative strategies, how they develop them and how they use them to structure these texts in an adult-like way. The tales are presented in morpheme-interlinear transcriptions with first textlinguistic analyses and cultural background information necessary to fully understand them. A summarizing comparative analysis of the texts from a psycholinguistic, anthropological linguistic and philological point of view discusses the underlying schemata of the stories, the means narrators use to structure them, their structural complexity and their cultural specificity. The e-book is made available under a CC BY-NC-ND 4.0 license. -
Senft, G. (2017). The Coral Gardens are Losing Their Magic: The Social and Cultural Impact of Climate Change and Overpopulation for the Trobriand Islanders. In A. T. von Poser, & A. von Poser (Eds.), Facets of Fieldwork - Essay in Honor of Jürg Wassmann (pp. 57-68). Heidelberg: Universitätsverlag Winter.
Abstract
This paper deals with the dramatic environmental, social and cultural changes on the Trobriand Islands which I experienced during 16 long- and short-term fieldtrips from 1982 to 2012. I first report on the climate change I experienced there over the years and provide a survey about the demographic changes on the Trobriand Islands – highlighting the situation in Tauwema, my village of residence on Kaile’una Island. Finally I report on the social and cultural impact these dramatic changes have for the Trobriand Islanders and their culture. -
Senft, G. (2015). The Trobriand Islanders' concept of karewaga. In S. Lestrade, P. de Swart, & L. Hogeweg (Eds.), Addenda. Artikelen voor Ad Foolen (pp. 381-390). Nijmegen: Radboud University. -
Senft, G. (2017). Understanding Pragmatics (Japanese edition). Tokyo: Kaitaku-Sha.
-
Seuren, P. A. M. (2015). Prestructuralist and structuralist approaches to syntax. In T. Kiss, & A. Alexiadou (Eds.), Syntax--theory and analysis: An international handbook (pp. 134-157). Berlin: Mouton de Gruyter. -
Seuren, P. A. M. (2015). Taal is complexer dan je denkt - recursief [Language is more complex than you think - recursive]. In S. Lestrade, P. De Swart, & L. Hogeweg (Eds.), Addenda. Artikelen voor Ad Foolen (pp. 393-400). Nijmegen: Radboud University. -
Seuren, P. A. M. (2015). Unconscious elements in linguistic communication: Language and social reality. Empedocles: European Journal for the Philosophy of Communication, 6, 185-194. doi:10.1386/ejpc.6.2.185_1.
Abstract
The message of the present article is, first, that besides and below the strictly linguistic aspects of communication through language, of which speakers are in principle fully aware, a great deal of knowledge is imparted to listeners or readers without either party being in the least aware of this happening; this knowledge is carried not by the system of the language in question but by the form of the intended message. For example, listeners quickly register the social status, regional origin or emotional attitude of speakers, and they react to those kinds of ‘paralinguistic’ information, mostly totally unawares. When speaker and listener have a positive attitude with regard to each other, the reaction consists, among other things, in mutual alignment or accommodation of pronunciation features, lexical selections and style of speaking. When the mutual attitude is negative, the opposite happens: speakers accentuate their differences. Then, when this happens not between individual interlocutors but between groups of speakers, such accommodation or divergence phenomena may lead to language change. The main theoretical question raised, but not answered, in this article is how and at what point forms of behaviour, including linguistic behaviour, achieve the status of being ‘standard’ or ‘accepted’ in any given community, and what it means to say that they are ‘standard’ or ‘accepted’. It is argued that frequency of occurrence is not the main explanatory factor, and that a causal explanation is to be sought rather in the often unconscious attitudes of individuals, in particular their desire or need to be integrated members of a community or social group, thus ensuring their safety and asserting their group identity. The question thus belongs to the province of social psychology.
Qualms about analyses of this kind being ‘unscientific’ dissipate when it is realized that consciousness phenomena are part of the real world and must therefore be considered to be valid objects of scientific theory formation. Like so many other ill-understood elements in scientific theories, consciousness, though itself unexplained, can be given a place in causal chains of events. -
Shao, Z., Roelofs, A., Martin, R., & Meyer, A. S. (2015). Selective inhibition and naming performance in semantic blocking, picture-word interference, and color-word Stroop tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41, 1806-1820. doi:10.1037/a0039363.
Abstract
In two studies, we examined whether explicit distractors are necessary and sufficient to evoke selective inhibition in three naming tasks: the semantic blocking, picture-word interference, and color-word Stroop tasks. Delta plots were used to quantify the size of the interference effects as a function of reaction time (RT). Selective inhibition was operationalized as the decrease in the size of the interference effect as a function of naming RT. For all naming tasks, mean naming RTs were significantly longer in the interference condition than in a control condition. The slopes of the interference effects for the longest naming RTs correlated with the magnitude of the mean interference effect in both the semantic blocking task and the picture-word interference task, suggesting that selective inhibition was involved in reducing the interference from strong semantic competitors, whether invoked by a single explicit competitor or by strong implicit competitors in picture naming. However, there was no correlation between the slopes and the mean interference effect in the Stroop task, suggesting less importance of selective inhibition in this task despite the explicit distractors. Whereas the results of the semantic blocking task suggest that an explicit distractor is not necessary for triggering inhibition, the results of the Stroop task suggest that such a distractor is not sufficient for evoking inhibition either. -
Shitova, N., Roelofs, A., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2017). Control adjustments in speaking: Electrophysiology of the Gratton effect in picture naming. Cortex, 92, 289-303. doi:10.1016/j.cortex.2017.04.017.
Abstract
Accumulating evidence suggests that spoken word production requires different amounts of top-down control depending on the prevailing circumstances. For example, during Stroop-like tasks, the interference in response time (RT) is typically larger following congruent trials than following incongruent trials. This effect is called the Gratton effect, and has been taken to reflect top-down control adjustments based on the previous trial type. Such control adjustments have been studied extensively in Stroop and Eriksen flanker tasks (mostly using manual responses), but not in the picture-word interference (PWI) task, which is a workhorse of language production research. In one of the few studies of the Gratton effect in PWI, Van Maanen and Van Rijn (2010) examined the effect in picture naming RTs during dual-task performance. Based on PWI effect differences between dual-task conditions, they argued that the functional locus of the PWI effect differs between post-congruent trials (i.e., locus in perceptual and conceptual encoding) and post-incongruent trials (i.e., locus in word planning). However, the dual-task procedure may have contaminated the results. We therefore performed an EEG study on the Gratton effect in a regular PWI task. We observed a PWI effect in the RTs, in the N400 component of the event-related brain potentials, and in the midfrontal theta power, regardless of the previous trial type. Moreover, the RTs, N400, and theta power reflected the Gratton effect. These results provide evidence that the PWI effect arises at the word planning stage following both congruent and incongruent trials, while the amount of top-down control changes depending on the previous trial type. -
Shitova, N., Roelofs, A., Coughler, C., & Schriefers, H. (2017). P3 event-related brain potential reflects allocation and use of central processing capacity in language production. Neuropsychologia, 106, 138-145. doi:10.1016/j.neuropsychologia.2017.09.024.
Abstract
Allocation and use of central processing capacity have been associated with the P3 event-related brain potential amplitude in a large variety of non-linguistic tasks. However, little is known about the P3 in spoken language production. Moreover, the few studies that are available report opposing P3 effects when task complexity is manipulated. We investigated allocation and use of central processing capacity in a spoken phrase production task: Participants switched every second trial between describing pictures using noun phrases with one adjective (size only; simple condition, e.g., “the big desk”) or two adjectives (size and color; complex condition, e.g., “the big red desk”). Capacity allocation was manipulated by complexity, and capacity use by switching. Response time (RT) was longer for complex than for simple trials. Moreover, complexity and switching interacted: RTs were longer on switch than on repeat trials for simple phrases but shorter on switch than on repeat trials for complex phrases. P3 amplitude increased with complexity. Moreover, complexity and switching interacted: The complexity effect was larger on the switch trials than on the repeat trials. These results provide evidence that the allocation and use of central processing capacity in language production are differentially reflected in the P3 amplitude. -
Sicoli, M. A., Stivers, T., Enfield, N. J., & Levinson, S. C. (2015). Marked initial pitch in questions signals marked communicative function. Language and Speech, 58(2), 204-223. doi:10.1177/0023830914529247.
Abstract
In conversation, the initial pitch of an utterance can provide an early phonetic cue of the communicative function, the speech act, or the social action being implemented. We conducted quantitative acoustic measurements and statistical analyses of pitch in over 10,000 utterances, including 2512 questions, their responses, and about 5000 other utterances by 180 total speakers from a corpus of 70 natural conversations in 10 languages. We measured pitch at first prominence in a speaker’s utterance and discriminated utterances by language, speaker, gender, question form, and what social action is achieved by the speaker’s turn. Through applying multivariate logistic regression we found that initial pitch that significantly deviated from the speaker’s median pitch level was predictive of the social action of the question. In questions designed to solicit agreement with an evaluation rather than information, pitch predictably diverged from the speaker’s median, falling in the top 10% of the speaker’s range. This latter finding reveals a kind of iconicity in the relationship between prosody and social action in which a marked pitch correlates with a marked social action. Thus, we argue that speakers rely on pitch to provide an early signal for recipients that the question is not to be interpreted through its literal semantics but rather through an inference. -
Silva, S., Inácio, F., Folia, V., & Petersson, K. M. (2017). Eye movements in implicit artificial grammar learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1387-1402. doi:10.1037/xlm0000350.
Abstract
Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference- and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests. -
Silva, S., Petersson, K. M., & Castro, S. L. (2017). The effects of ordinal load on incidental temporal learning. Quarterly Journal of Experimental Psychology, 70(4), 664-674. doi:10.1080/17470218.2016.1146909.
Abstract
How can we grasp the temporal structure of events? A few studies have indicated that representations of temporal structure are acquired when there is an intention to learn, but not when learning is incidental. Response-to-stimulus intervals, uncorrelated temporal structures, unpredictable ordinal information, and lack of metrical organization have been pointed out as key obstacles to incidental temporal learning, but the literature includes piecemeal demonstrations of learning under all these circumstances. We suggest that the unacknowledged effects of ordinal load may help reconcile these conflicting findings, ordinal load referring to the cost of identifying the sequence of events (e.g., tones, locations) where a temporal pattern is embedded. In a first experiment, we manipulated ordinal load into simple and complex levels. Participants learned ordinal-simple sequences, despite their uncorrelated temporal structure and lack of metrical organization. They did not learn ordinal-complex sequences, even though there were no response-to-stimulus intervals nor unpredictable ordinal information. In a second experiment, we probed learning of ordinal-complex sequences with strong metrical organization, and again there was no learning. We conclude that ordinal load is a key obstacle to incidental temporal learning. Further analyses showed that the effect of ordinal load is to mask the expression of temporal knowledge, rather than to prevent learning. -
Silva, S., Folia, V., Hagoort, P., & Petersson, K. M. (2017). The P600 in Implicit Artificial Grammar Learning. Cognitive Science, 41(1), 137-157. doi:10.1111/cogs.12343.
Abstract
The suitability of the Artificial Grammar Learning (AGL) paradigm to capture relevant aspects of the acquisition of linguistic structures has been empirically tested in a number of EEG studies. Some have shown a syntax-related P600 component, but it has not been ruled out that the AGL P600 effect is a response to surface features (e.g., subsequence familiarity) rather than the underlying syntax structure. Therefore, in this study, we controlled for the surface characteristics of the test sequences (associative chunk strength) and recorded the EEG before (baseline preference classification) and after (preference and grammaticality classification) exposure to a grammar. A typical, centroparietal P600 effect was elicited by grammatical violations after exposure, suggesting that the AGL P600 effect signals a response to structural irregularities. Moreover, preference and grammaticality classification showed a qualitatively similar ERP profile, strengthening the idea that the implicit structural mere exposure paradigm in combination with preference classification is a suitable alternative to the traditional grammaticality classification test. -
Simanova, I., Van Gerven, M. A., Oostenveld, R., & Hagoort, P. (2015). Predicting the semantic category of internally generated words from neuromagnetic recordings. Journal of Cognitive Neuroscience, 27(1), 35-45. doi:10.1162/jocn_a_00690.
Abstract
In this study, we explore the possibility to predict the semantic category of words from brain signals in a free word generation task. Participants produced single words from different semantic categories in a modified semantic fluency task. A Bayesian logistic regression classifier was trained to predict the semantic category of words from single-trial MEG data. Significant classification accuracies were achieved using sensor-level MEG time series at the time interval of conceptual preparation. Semantic category prediction was also possible using source-reconstructed time series, based on minimum norm estimates of cortical activity. Brain regions that contributed most to classification on the source level were identified. These were the left inferior frontal gyrus, left middle frontal gyrus, and left posterior middle temporal gyrus. Additionally, the temporal dynamics of brain activity underlying the semantic preparation during word generation was explored. These results provide important insights about central aspects of language production. -
Simon, E., & Sjerps, M. J. (2017). Phonological category quality in the mental lexicon of child and adult learners. International Journal of Bilingualism, 21(4), 474-499. doi:10.1177/1367006915626589.
Abstract
Aims and objectives: The aim was to identify which criteria children use to decide on the category membership of native and non-native vowels, and to get insight into the organization of phonological representations in the bilingual mind. Methodology: The study consisted of two cross-language mispronunciation detection tasks in which L2 vowels were inserted into L1 words and vice versa. In Experiment 1, 10- to 12-year-old Dutch-speaking children were presented with Dutch words which were either pronounced with the target Dutch vowel or with an English vowel inserted in the Dutch consonantal frame. Experiment 2 was a mirror of the first, with English words which were pronounced “correctly” or which were “mispronounced” with a Dutch vowel. Data and analysis: Analyses focused on the extent to which child and adult listeners accepted substitutions of Dutch vowels by English ones, and vice versa. Findings: The results of Experiment 1 revealed that between the age of ten and twelve children have well-established phonological vowel categories in their native language. However, Experiment 2 showed that in their non-native language, children tended to accept mispronounced items which involve sounds from their native language. At the same time, though, they did not fully rely on their native phonemic inventory because the children accepted most of the correctly pronounced English items. Originality: While many studies have examined native and non-native perception by infants and adults, studies on first and second language perception of school-age children are rare. This study adds to the body of literature aimed at expanding our knowledge in this area. Implications: The study has implications for models of the organization of the bilingual mind: while proficient adult non-native listeners generally have clearly separated sets of phonological representations for their two languages, for non-proficient child learners the L1 phonology still exerts a strong influence on the L2 phonology. -
Simpson, N. H., Ceroni, F., Reader, R. H., Covill, L. E., Knight, J. C., the SLI Consortium, Hennessy, E. R., Bolton, P. F., Conti-Ramsden, G., O’Hare, A., Baird, G., Fisher, S. E., & Newbury, D. F. (2015). Genome-wide analysis identifies a role for common copy number variants in specific language impairment. European Journal of Human Genetics, 23, 1370-1377. doi:10.1038/ejhg.2014.296.
Abstract
An exploratory genome-wide copy number variant (CNV) study was performed in 127 independent cases with specific language impairment (SLI), their first-degree relatives (385 individuals) and 269 population controls. Language-impaired cases showed an increased CNV burden in terms of the average number of events (11.28 vs 10.01, empirical P=0.003), the total length of CNVs (717 vs 513 Kb, empirical P=0.0001), the average CNV size (63.75 vs 51.6 Kb, empirical P=0.0005) and the number of genes spanned (14.29 vs 10.34, empirical P=0.0007) when compared with population controls, suggesting that CNVs may contribute to SLI risk. A similar trend was observed in first-degree relatives regardless of affection status. The increased burden found in our study was not driven by large or de novo events, which have been described as causative in other neurodevelopmental disorders. Nevertheless, de novo CNVs might be important on a case-by-case basis, as indicated by identification of events affecting relevant genes, such as ACTR2 and CSNK1A1, and small events within known micro-deletion/-duplication syndrome regions, such as chr8p23.1. Pathway analysis of the genes present within the CNVs of the independent cases identified significant overrepresentation of acetylcholine binding, cyclic-nucleotide phosphodiesterase activity and MHC proteins as compared with controls. Taken together, our data suggest that the majority of the risk conferred by CNVs in SLI is via common, inherited events within a ‘common disorder–common variant’ model. Therefore the risk conferred by CNVs will depend upon the combination of events inherited (both CNVs and SNPs), the genetic background of the individual and the environmental factors. -
Sjerps, M. J., & Reinisch, E. (2015). Divide and conquer: How perceptual contrast sensitivity and perceptual learning cooperate in reducing input variation in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 41(3), 710-722. doi:10.1037/a0039028.
Abstract
Listeners have to overcome variability of the speech signal that can arise, for example, because of differences in room acoustics, differences in speakers’ vocal tract properties, or idiosyncrasies in pronunciation. Two mechanisms that are involved in resolving such variation are perceptually contrastive effects that arise from surrounding acoustic context and lexically guided perceptual learning. Although both processes have been studied in great detail, little attention has been paid to how they operate relative to each other in speech perception. The present study set out to address this issue. The carrier parts of exposure stimuli of a classical perceptual learning experiment were spectrally filtered such that the acoustically ambiguous final fricatives sounded relatively more like the lexically intended sound (Experiment 1) or the alternative (Experiment 2). Perceptual learning was found only in the latter case. The findings show that perceptual contrast effects precede lexically guided perceptual learning, at least in terms of temporal order, and potentially in terms of cognitive processing levels as well -
Sjerps, M. J., & Meyer, A. S. (2015). Variation in dual-task performance reveals late initiation of speech planning in turn-taking. Cognition, 136, 304-324. doi:10.1016/j.cognition.2014.10.008.
Abstract
The smooth transitions between turns in natural conversation suggest that speakers often begin to plan their utterances while listening to their interlocutor. The presented study investigates whether this is indeed the case and, if so, when utterance planning begins. Two hypotheses were contrasted: that speakers begin to plan their turn as soon as possible (in our experiments less than a second after the onset of the interlocutor’s turn), or that they do so close to the end of the interlocutor’s turn. Turn-taking was combined with a finger tapping task to measure variations in cognitive load. We assumed that the onset of speech planning in addition to listening would be accompanied by deterioration in tapping performance. Two picture description experiments were conducted. In both experiments there were three conditions: (1) Tapping and Speaking, where participants tapped a complex pattern while taking over turns from a pre-recorded speaker, (2) Tapping and Listening, where participants carried out the tapping task while overhearing two pre-recorded speakers, and (3) Speaking Only, where participants took over turns as in the Tapping and Speaking condition but without tapping. The experiments differed in the amount of tapping training the participants received at the beginning of the session. In Experiment 2, the participants’ eye-movements were recorded in addition to their speech and tapping. Analyses of the participants’ tapping performance and eye movements showed that they initiated the cognitively demanding aspects of speech planning only shortly before the end of the turn of the preceding speaker. We argue that this is a smart planning strategy, which may be the speakers’ default in many everyday situations. -
Skeide, M. A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2017). Learning to read alters cortico-subcortical crosstalk in the visual system of illiterates. Science Advances, 3(5): e1602612. doi:10.1126/sciadv.1602612.
Abstract
Learning to read is known to result in a reorganization of the developing cerebral cortex. In this longitudinal resting-state functional magnetic resonance imaging study in illiterate adults we show that only 6 months of literacy training can lead to neuroplastic changes in the mature brain. We observed that literacy-induced neuroplasticity is not confined to the cortex but increases the functional connectivity between the occipital lobe and subcortical areas in the midbrain and the thalamus. Individual rates of connectivity increase were significantly related to the individual decoding skill gains. These findings crucially complement current neurobiological concepts of normal and impaired literacy acquisition. -
Skirgard, H., Roberts, S. G., & Yencken, L. (2017). Why are some languages confused for others? Investigating data from the Great Language Game. PLoS One, 12(4): e0165934. doi:10.1371/journal.pone.0165934.
Abstract
In this paper we explore the results of a large-scale online game called ‘the Great Language Game’, in which people listen to an audio speech sample and make a forced-choice guess about the identity of the language from 2 or more alternatives. The data include 15 million guesses from 400 audio recordings of 78 languages. We investigate which languages are confused for which in the game, and if this correlates with the similarities that linguists identify between languages. This includes shared lexical items, similar sound inventories and established historical relationships. Our findings are, as expected, that players are more likely to confuse two languages that are objectively more similar. We also investigate factors that may affect players’ ability to accurately select the target language, such as how many people speak the language, how often the language is mentioned in written materials and the economic power of the target language community. We see that non-linguistic factors affect players’ ability to accurately identify the target. For example, languages with wider ‘global reach’ are more often identified correctly. This suggests that both linguistic and cultural knowledge influence the perception and recognition of languages and their similarity. -
Sleegers, K., Bettens, K., De Roeck, A., Van Cauwenberghe, C., Cuyvers, E., Verheijen, J., Struyfs, H., Van Dongen, J., Vermeulen, S., Engelborghs, S., Vandenbulcke, M., Vandenberghe, R., De Deyn, P., Van Broeckhoven, C., & BELNEU consortium (2015). A 22-single nucleotide polymorphism Alzheimer's disease risk score correlates with family history, onset age, and cerebrospinal fluid Aβ42. Alzheimer's & Dementia, 11(12), 1452-1460. doi:10.1016/j.jalz.2015.02.013.
Abstract
Introduction: The ability to identify individuals at increased genetic risk for Alzheimer's disease (AD) may streamline biomarker and drug trials and aid clinical and personal decision making. Methods: We evaluated the discriminative ability of a genetic risk score (GRS) covering 22 published genetic risk loci for AD in 1162 Flanders-Belgian AD patients and 1019 controls and assessed correlations with family history, onset age, and cerebrospinal fluid (CSF) biomarkers (Aβ1-42, T-Tau, P-Tau181P). Results: A GRS including all single nucleotide polymorphisms (SNPs) and age-specific APOE ε4 weights reached area under the curve (AUC) 0.70, which increased to AUC 0.78 for patients with familial predisposition. Risk of AD increased with GRS (odds ratio 2.32 per unit; 95% confidence interval 2.08-2.58; P < 1.0e-15). Onset age and CSF Aβ1-42 decreased with increasing GRS (P(onset age) = 9.0e-11; P(Aβ) = 8.9e-7). Discussion: The discriminative ability of this 22-SNP GRS is still limited, but these data illustrate that incorporation of age-specific weights improves discriminative ability. GRS-phenotype correlations highlight the feasibility of identifying individuals at highest susceptibility. -
Slonimska, A., & Roberts, S. G. (2017). A case for systematic sound symbolism in pragmatics: Universals in wh-words. Journal of Pragmatics, 116, 1-20. doi:10.1016/j.pragma.2017.04.004.
Abstract
This study investigates whether there is a universal tendency for content interrogative words (wh-words) within a language to sound similar in order to facilitate pragmatic inference in conversation. Gaps between turns in conversation are very short, meaning that listeners must begin planning their turn as soon as possible. While previous research has shown that paralinguistic features such as prosody and eye gaze provide cues to the pragmatic function of upcoming turns, we hypothesise that a systematic phonetic cue that marks interrogative words would also help early recognition of questions (allowing early preparation of answers), for instance wh-words sounding similar within a language. We analyzed 226 languages from 66 different language families by means of permutation tests. We found that initial segments of wh-words were more similar within a language than between languages, also when controlling for language family, geographic area (stratified permutation) and analyzability (compound phrases excluded). Random samples tests revealed that initial segments of wh-words were more similar than initial segments of randomly selected word sets and conceptually related word sets (e.g., body parts, actions, pronouns). Finally, we hypothesized that this cue would be more useful at the beginning of a turn, so the similarity of the initial segment of wh-words should be greater in languages that place them at the beginning of a clause. We gathered typological data on 110 languages, and found the predicted trend, although statistical significance was not attained. While there may be several mechanisms that bring about this pattern (e.g., common derivation), we suggest that the ultimate explanation of the similarity of interrogative words is to facilitate early speech-act recognition. Importantly, this hypothesis can be tested empirically, and the current results provide a sound basis for future experimental tests.
Additional information
http://www.sciencedirect.com/science/article/pii/S037821661630577X -
Slonimska, A., & Roberts, S. G. (2017). A case for systematic sound symbolism in pragmatics: The role of the first phoneme in question prediction in context. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1090-1095). Austin, TX: Cognitive Science Society.
Abstract
Turn-taking in conversation is a cognitively demanding process that proceeds rapidly due to interlocutors utilizing a range of cues to aid prediction. In the present study we set out to test recent claims that content question words (also called wh-words) sound similar within languages as an adaptation to help listeners predict that a question is about to be asked. We test whether upcoming questions can be predicted based on the first phoneme of a turn and the prior context. We analyze the Switchboard corpus of English by means of a decision tree to test whether /w/ and /h/ are good statistical cues of upcoming questions in conversation. Based on the results, we perform a controlled experiment to test whether people really use these cues to recognize questions. In both studies we show that both the initial phoneme and the sequential context help predict questions. This contributes converging evidence that elements of languages adapt to pragmatic pressures applied during conversation. -
Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Ostensive signals: Markers of communicative relevance of gesture during demonstration to adults and children. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 217-222). Nantes: University of Nantes.
Abstract
Speakers adapt their speech and gestures in various ways for their audience. We investigated further whether they use ostensive signals (eye gaze, ostensive speech (e.g. like this, this) or a combination of both) in relation to their gestures when talking to different addressees, i.e., to another adult or a child in a multimodal demonstration task. While adults used more eye gaze towards their gestures with other adults than with children, they were more likely to use combined ostensive signals for children than for adults. Thus speakers mark the communicative relevance of their gestures with different types of ostensive signals and by taking different types of addressees into account. -
Smeets, C. J. L. M., Jezierska, J., Watanabe, H., Duarri, A., Fokkens, M. R., Meijer, M., Zhou, Q., Yakovleva, T., Boddeke, E., den Dunnen, W., van Deursen, J., Bakalkin, G., Kampinga, H. H., van de Sluis, B., & Verbeek, D. S. (2015). Elevated mutant dynorphin A causes Purkinje cell loss and motor dysfunction in spinocerebellar ataxia type 23. Brain, 138(9), 2537-2552. doi:10.1093/brain/awv195.
Abstract
Spinocerebellar ataxia type 23 is caused by mutations in PDYN, which encodes the opioid neuropeptide precursor protein, prodynorphin. Prodynorphin is processed into the opioid peptides, α-neoendorphin, and dynorphins A and B, that normally exhibit opioid-receptor mediated actions in pain signalling and addiction. Dynorphin A is likely a mutational hotspot for spinocerebellar ataxia type 23 mutations, and in vitro data suggested that dynorphin A mutations lead to persistently elevated mutant peptide levels that are cytotoxic and may thus play a crucial role in the pathogenesis of spinocerebellar ataxia type 23. To further test this and study spinocerebellar ataxia type 23 in more detail, we generated a mouse carrying the spinocerebellar ataxia type 23 mutation R212W in PDYN. Analysis of peptide levels using a radioimmunoassay shows that these PDYNR212W mice display markedly elevated levels of mutant dynorphin A, which are associated with climber fibre retraction and Purkinje cell loss, visualized with immunohistochemical stainings. The PDYNR212W mice reproduced many of the clinical features of spinocerebellar ataxia type 23, with gait deficits starting at 3 months of age revealed by footprint pattern analysis, and progressive loss of motor coordination and balance at the age of 12 months demonstrated by declining performances on the accelerating Rotarod. The pathologically elevated mutant dynorphin A levels in the cerebellum coincided with transcriptionally dysregulated ionotropic and metabotropic glutamate receptors and glutamate transporters, and altered neuronal excitability. In conclusion, the PDYNR212W mouse is the first animal model of spinocerebellar ataxia type 23 and our work indicates that the elevated mutant dynorphin A peptide levels are likely responsible for the initiation and progression of the disease, affecting glutamatergic signalling, neuronal excitability, and motor performance. Our novel mouse model defines a critical role for opioid neuropeptides in spinocerebellar ataxia, and suggests that restoring the elevated mutant neuropeptide levels can be explored as a therapeutic intervention. -
Smith, A. C., Monaghan, P., & Huettig, F. (2017). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Journal of Memory and Language, 93, 276-303. doi:10.1016/j.jml.2016.08.005.
Abstract
Ambiguity in natural language is ubiquitous, yet spoken communication is effective due to integration of information carried in the speech signal with information available in the surrounding multimodal landscape. Language mediated visual attention requires visual and linguistic information integration and has thus been used to examine properties of the architecture supporting multimodal processing during spoken language comprehension. In this paper we test predictions generated by alternative models of this multimodal system. A model (TRACE) in which multimodal information is combined at the point of the lexical representations of words generated predictions of a stronger effect of phonological rhyme relative to semantic and visual information on gaze behaviour, whereas a model in which sub-lexical information can interact across modalities (MIM) predicted a greater influence of visual and semantic information, compared to phonological rhyme. Two visual world experiments designed to test these predictions offer support for sub-lexical multimodal interaction during online language processing.
Additional information
http://www.sciencedirect.com/science/article/pii/S0749596X16301425 -
Smith, A. C. (2015). Modelling multimodal language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Smorenburg, L., Rodd, J., & Chen, A. (2015). The effect of explicit training on the prosodic production of L2 sarcasm by Dutch learners of English. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow, UK: University of Glasgow.
Abstract
Previous research [9] suggests that Dutch learners of (British) English are not able to express sarcasm prosodically in their L2. The present study investigates whether explicit training on the prosodic markers of sarcasm in English can improve learners’ realisation of sarcasm. Sarcastic speech was elicited in short simulated telephone conversations between Dutch advanced learners of English and a native British English-speaking ‘friend’ in two sessions, fourteen days apart. Between the two sessions, participants were trained by means of (1) a presentation, (2) directed independent practice, and (3) evaluation of participants’ production and individual feedback in small groups. L1 British English-speaking raters subsequently evaluated the degree of sarcastic sounding in the participants’ responses on a five-point scale. It was found that significantly higher sarcasm ratings were given to L2 learners’ production obtained after the training than that obtained before the training; explicit training on prosody has a positive effect on learners’ production of sarcasm.
Additional information
http://www.icphs2015.info/pdfs/Papers/ICPHS0959.pdf -
Sollis, E., Deriziotis, P., Saitsu, H., Miyake, N., Matsumoto, N., Hoffer, M. J. V., Ruivenkamp, C. A., Alders, M., Okamoto, N., Bijlsma, E. K., Plomp, A. S., & Fisher, S. E. (2017). Equivalent missense variant in the FOXP2 and FOXP1 transcription factors causes distinct neurodevelopmental disorders. Human Mutation, 38(11), 1542-1554. doi:10.1002/humu.23303.
Abstract
The closely related paralogues FOXP2 and FOXP1 encode transcription factors with shared functions in the development of many tissues, including the brain. However, while mutations in FOXP2 lead to a speech/language disorder characterized by childhood apraxia of speech (CAS), the clinical profile of FOXP1 variants includes a broader neurodevelopmental phenotype with global developmental delay, intellectual disability and speech/language impairment. Using clinical whole-exome sequencing, we report an identical de novo missense FOXP1 variant identified in three unrelated patients. The variant, p.R514H, is located in the forkhead-box DNA-binding domain and is equivalent to the well-studied p.R553H FOXP2 variant that co-segregates with CAS in a large UK family. We present here for the first time a direct comparison of the molecular and clinical consequences of the same mutation affecting the equivalent residue in FOXP1 and FOXP2. Detailed functional characterization of the two variants in cell model systems revealed very similar molecular consequences, including aberrant subcellular localization, disruption of transcription factor activity and deleterious effects on protein interactions. Nonetheless, clinical manifestations were broader and more severe in the three cases carrying the p.R514H FOXP1 variant than in individuals with the p.R553H variant related to CAS, highlighting divergent roles of FOXP2 and FOXP1 in neurodevelopment.
Additional information
humu23303-sup-0001-SuppMat.pdf -
Sonnweber, R., Ravignani, A., & Fitch, W. T. (2015). Non-adjacent visual dependency learning in chimpanzees. Animal Cognition, 18(3), 733-745. doi:10.1007/s10071-015-0840-x.
Abstract
Humans have a strong proclivity for structuring and patterning stimuli: Whether in space or time, we tend to mentally order stimuli in our environment and organize them into units with specific types of relationships. A crucial prerequisite for such organization is the cognitive ability to discern and process regularities among multiple stimuli. To investigate the evolutionary roots of this cognitive capacity, we tested chimpanzees—which, along with bonobos, are our closest living relatives—for simple, variable distance dependency processing in visual patterns. We trained chimpanzees to identify pairs of shapes either linked by an arbitrary learned association (arbitrary associative dependency) or a shared feature (same shape, feature-based dependency), and to recognize strings where items related to either of these ways occupied the first (leftmost) and the last (rightmost) item of the stimulus. We then probed the degree to which subjects generalized this pattern to new colors, shapes, and numbers of interspersed items. We found that chimpanzees can learn and generalize both types of dependency rules, indicating that the ability to encode both feature-based and arbitrary associative regularities over variable distances in the visual domain is not a human prerogative. Our results strongly suggest that these core components of human structural processing were already present in our last common ancestor with chimpanzees.
Additional information
supplementary material -
Sonnweber, R. S., Ravignani, A., Stobbe, N., Schiestl, G., Wallner, B., & Fitch, W. T. (2015). Rank‐dependent grooming patterns and cortisol alleviation in Barbary macaques. American Journal of Primatology, 77(6), 688-700. doi:10.1002/ajp.22391.
Abstract
Flexibly adapting social behavior to social and environmental challenges helps to alleviate glucocorticoid (GC) levels, which may have positive fitness implications for an individual. For primates, the predominant social behavior is grooming. Giving grooming to others is particularly efficient in terms of GC mitigation. However, grooming is confined by certain limitations such as time constraints or restricted access to other group members. For instance, dominance hierarchies may impact grooming partner availability in primate societies. Consequently specific grooming patterns emerge. In despotic species focusing grooming activity on preferred social partners significantly ameliorates GC levels in females of all ranks. In this study we investigated grooming patterns and GC management in Barbary macaques, a comparably relaxed species. We monitored changes in grooming behavior and cortisol (C) for females of different ranks. Our results show that the C‐amelioration associated with different grooming patterns had a gradual connection with dominance hierarchy: while higher‐ranking individuals showed lowest urinary C measures when they focused their grooming on selected partners within their social network, lower‐ranking individuals expressed lowest C levels when dispersing their grooming activity evenly across their social partners. We argue that the relatively relaxed social style of Barbary macaque societies allows individuals to flexibly adapt grooming patterns, which is associated with rank‐specific GC management. Am. J. Primatol. 77:688–700, 2015 -
De Sousa, H., Langella, F., & Enfield, N. J. (2015). Temperature terms in Lao, Southern Zhuang, Southern Pinghua and Cantonese. In M. Koptjevskaja-Tamm (Ed.), The linguistics of temperature (pp. 594-638). Amsterdam: Benjamins. -
Soutschek, A., Burke, C. J., Beharelle, A. R., Schreiber, R., Weber, S. C., Karipidis, I. I., Ten Velden, J., Weber, B., Haker, H., Kalenscher, T., & Tobler, P. N. (2017). The dopaminergic reward system underpins gender differences in social preferences. Nature Human Behaviour, 1, 819-827. doi:10.1038/s41562-017-0226-y.
Abstract
Women are known to have stronger prosocial preferences than men, but it remains an open question as to how these behavioural differences arise from differences in brain functioning. Here, we provide a neurobiological account for the hypothesized gender difference. In a pharmacological study and an independent neuroimaging study, we tested the hypothesis that the neural reward system encodes the value of sharing money with others more strongly in women than in men. In the pharmacological study, we reduced receptor type-specific actions of dopamine, a neurotransmitter related to reward processing, which resulted in more selfish decisions in women and more prosocial decisions in men. Converging findings from an independent neuroimaging study revealed gender-related activity in neural reward circuits during prosocial decisions. Thus, the neural reward system appears to be more sensitive to prosocial rewards in women than in men, providing a neurobiological account for why women often behave more prosocially than men.
Additional information
Supplementary Information -
Spaeth, J. M., Hunter, C. S., Bonatakis, L., Guo, M., French, C. A., Slack, I., Hara, M., Fisher, S. E., Ferrer, J., Morrisey, E. E., Stanger, B. Z., & Stein, R. (2015). The FOXP1, FOXP2 and FOXP4 transcription factors are required for islet alpha cell proliferation and function in mice. Diabetologia, 58, 1836-1844. doi:10.1007/s00125-015-3635-3.
Abstract
Aims/hypothesis: Several forkhead box (FOX) transcription factor family members have important roles in controlling pancreatic cell fates and maintaining beta cell mass and function, including FOXA1, FOXA2 and FOXM1. In this study we have examined the importance of FOXP1, FOXP2 and FOXP4 of the FOXP subfamily in islet cell development and function.
Methods: Mice harbouring floxed alleles for Foxp1, Foxp2 and Foxp4 were crossed with pan-endocrine Pax6-Cre transgenic mice to generate single and compound Foxp mutant mice. Mice were monitored for changes in glucose tolerance by IPGTT, serum insulin and glucagon levels by radioimmunoassay, and endocrine cell development and proliferation by immunohistochemistry. Gene expression and glucose-stimulated hormone secretion experiments were performed with isolated islets.
Results: Only the triple-compound Foxp1/2/4 conditional knockout (cKO) mutant had an overt islet phenotype, manifested physiologically by hypoglycaemia and hypoglucagonaemia. This resulted from the reduction in glucagon-secreting alpha cell mass and function. The proliferation of alpha cells was profoundly reduced in Foxp1/2/4 cKO islets through the effects on mediators of replication (i.e. decreased Ccna2, Ccnb1 and Ccnd2 activators, and increased Cdkn1a inhibitor). Adult islet Foxp1/2/4 cKO beta cells secrete insulin normally while the remaining alpha cells have impaired glucagon secretion.
Conclusions/interpretation: Collectively, these findings reveal an important role for the FOXP1, 2, and 4 proteins in governing postnatal alpha cell expansion and function. -
Speed, L. J., & Majid, A. (2017). Dutch modality exclusivity norms: Simulating perceptual modality in space. Behavior Research Methods, 49(6), 2204-2218. doi:10.3758/s13428-017-0852-3.
Abstract
Perceptual information is important for the meaning of nouns. We present modality exclusivity norms for 485 Dutch nouns rated on visual, auditory, haptic, gustatory, and olfactory associations. We found these nouns are highly multimodal. They were rated most dominant in vision, and least in olfaction. A factor analysis identified two main dimensions: one loaded strongly on olfaction and gustation (reflecting joint involvement in flavor), and a second loaded strongly on vision and touch (reflecting joint involvement in manipulable objects). In a second study, we validated the ratings with similarity judgments. As expected, words from the same dominant modality were rated more similar than words from different dominant modalities; but – more importantly – this effect was enhanced when word pairs had high modality strength ratings. We further demonstrated the utility of our ratings by investigating whether perceptual modalities are differentially experienced in space, in a third study. Nouns were categorized into their dominant modality and used in a lexical decision experiment where the spatial position of words was either in proximal or distal space. We found words dominant in olfaction were processed faster in proximal than distal space compared to the other modalities, suggesting olfactory information is mentally simulated as “close” to the body. Finally, we collected ratings of emotion (valence, dominance, and arousal) to assess its role in perceptual space simulation, but the valence did not explain the data. So, words are processed differently depending on their perceptual associations, and strength of association is captured by modality exclusivity ratings.
Additional information
13428_2017_852_MOESM1_ESM.xlsx -
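The dominance and exclusivity measures behind such norms can be sketched as follows. This is a minimal illustration in the style of standard modality-exclusivity ratings (range of the mean modality ratings divided by their sum; the dominant modality is the highest-rated one); the example word and rating values are invented, not taken from the Dutch norms.

```python
# Toy sketch of modality-exclusivity scoring; ratings are hypothetical.

def dominant_modality(ratings):
    """Return the modality with the highest mean rating."""
    return max(ratings, key=ratings.get)

def exclusivity(ratings):
    """Range of the modality ratings divided by their sum (0 = fully
    multimodal, 1 = rated in a single modality only)."""
    vals = list(ratings.values())
    return (max(vals) - min(vals)) / sum(vals)

# Invented ratings on a 0-5 scale for an example word.
lemon = {"visual": 3.9, "auditory": 0.6, "haptic": 2.8,
         "gustatory": 4.5, "olfactory": 4.1}

print(dominant_modality(lemon))       # gustatory
print(round(exclusivity(lemon), 2))
```

A low exclusivity score like this one marks the noun as highly multimodal, which is the pattern the norms report for most Dutch nouns.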
Stanojevic, M., & Alhama, R. G. (2017). Neural discontinuous constituency parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 1666-1676). Association for Computational Linguistics.
Abstract
One of the most pressing issues in discontinuous constituency transition-based parsing is that the relevant information for parsing decisions could be located in any part of the stack or the buffer. In this paper, we propose a solution to this problem by replacing the structured perceptron model with a recursive neural model that computes a global representation of the configuration, therefore allowing even the most remote parts of the configuration to influence the parsing decisions. We also provide a detailed analysis of how this representation should be built out of sub-representations of its core elements (words, trees and stack). Additionally, we investigate how different types of swap oracles influence the results. Our model is the first neural discontinuous constituency parser, and it outperforms all the previously published models on three out of four datasets while on the fourth it obtains second place by a tiny difference.
Additional information
http://aclweb.org/anthology/D17-1174 -
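The idea of building one vector for a whole configuration out of sub-representations can be sketched with a toy recursive composition. This is an illustration of the general technique only, not the authors' architecture; the words, dimensions, and weights are made up.

```python
# Toy recursive composition: every subtree gets a vector by combining
# its children's vectors, so distant parts of a parser configuration
# can still influence the final representation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1  # toy composition weights

def compose(left, right):
    """Combine two child vectors into one parent vector."""
    return np.tanh(W @ np.concatenate([left, right]))

def tree_vector(tree, embeddings):
    """tree is a word (leaf) or a (left, right) pair; returns a DIM vector."""
    if isinstance(tree, str):
        return embeddings[tree]
    left, right = tree
    return compose(tree_vector(left, embeddings), tree_vector(right, embeddings))

emb = {w: rng.standard_normal(DIM) for w in ["a", "very", "good", "parser"]}
config = (("a", ("very", "good")), "parser")  # a toy nested structure
v = tree_vector(config, emb)
print(v.shape)  # (4,)
```

In the paper's setting, a representation of this kind feeds the classifier that scores the next transition, in place of the hand-built features of a structured perceptron.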
Stergiakouli, E., Martin, J., Hamshere, M. L., Heron, J., St Pourcain, B., Timpson, N. J., Thapar, A., & Smith, G. D. (2017). Association between polygenic risk scores for attention-deficit hyperactivity disorder and educational and cognitive outcomes in the general population. International Journal of Epidemiology, 46(2), 421-428. doi:10.1093/ije/dyw216.
Abstract
Background: Children with a diagnosis of attention-deficit hyperactivity disorder (ADHD) have lower cognitive ability and are at risk of adverse educational outcomes; ADHD genetic risks have been found to predict childhood cognitive ability and other neurodevelopmental traits in the general population; thus genetic risks might plausibly also contribute to cognitive ability later in development and to educational underachievement.
Methods: We generated ADHD polygenic risk scores in the Avon Longitudinal Study of Parents and Children participants (maximum N: 6928 children and 7280 mothers) based on the results of a discovery clinical sample, a genome-wide association study of 727 cases with ADHD diagnosis and 5081 controls. We tested if ADHD polygenic risk scores were associated with educational outcomes and IQ in adolescents and their mothers.
Results: High ADHD polygenic scores in adolescents were associated with worse educational outcomes at Key Stage 3 [national tests conducted at age 13–14 years; β = −1.4 (−2.0 to −0.8), P = 2.3 × 10−6), at General Certificate of Secondary Education exams at age 15–16 years (β = −4.0 (−6.1 to −1.9), P = 1.8 × 10−4], reduced odds of sitting Key Stage 5 examinations at age 16–18 years [odds ratio (OR) = 0.90 (0.88 to 0.97), P = 0.001] and lower IQ scores at age 15.5 [β = −0.8 (−1.2 to −0.4), P = 2.4 × 10−4]. Moreover, maternal ADHD polygenic scores were associated with lower maternal educational achievement [β = −0.09 (−0.10 to −0.06), P = 0.005] and lower maternal IQ [β = −0.6 (−1.2 to −0.1), P = 0.03].
Conclusions: ADHD diagnosis risk alleles impact on functional outcomes in two generations (mother and child) and likely have intergenerational environmental effects. -
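Polygenic risk scores of the kind used in this line of work are, in the standard construction, weighted sums of risk-allele counts with weights taken from a discovery GWAS. A minimal sketch, with invented variant IDs and effect sizes (not values from the study):

```python
# Hypothetical sketch of a polygenic score: sum over variants of
# (discovery-GWAS effect size) x (individual's risk-allele count).
# Variant names and numbers below are illustrative only.

def polygenic_score(genotypes, effect_sizes):
    """genotypes: dict variant -> risk-allele count (0, 1 or 2);
    effect_sizes: dict variant -> per-allele effect from the discovery GWAS."""
    return sum(effect_sizes[v] * genotypes.get(v, 0) for v in effect_sizes)

person = {"rs0001": 2, "rs0002": 0, "rs0003": 1}
weights = {"rs0001": 0.10, "rs0002": -0.05, "rs0003": 0.02}
print(round(polygenic_score(person, weights), 2))  # 2*0.10 + 0 + 1*0.02 = 0.22
```

Scores built this way are then entered as predictors in regressions on the outcome measures, which is how the β estimates reported above arise.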
Stergiakouli, E., Martin, J., Hamshere, M. L., Langley, K., Evans, D. M., St Pourcain, B., Timpson, N. J., Owen, M. J., O'Donovan, M., Thapar, A., & Davey Smith, G. (2015). Shared Genetic Influences Between Attention-Deficit/Hyperactivity Disorder (ADHD) Traits in Children and Clinical ADHD. Journal of the American Academy of Child and Adolescent Psychiatry, 54(4), 322-327. doi:10.1016/j.jaac.2015.01.010.
-
Stergiakouli, E., Smith, G. D., Martin, J., Skuse, D. H., Viechtbauer, W., Ring, S. M., Ronald, A., Evans, D. E., Fisher, S. E., Thapar, A., & St Pourcain, B. (2017). Shared genetic influences between dimensional ASD and ADHD symptoms during child and adolescent development. Molecular Autism, 8: 18. doi:10.1186/s13229-017-0131-2.
Abstract
Background: Shared genetic influences between attention-deficit/hyperactivity disorder (ADHD) symptoms and autism spectrum disorder (ASD) symptoms have been reported. Cross-trait genetic relationships are, however, subject to dynamic changes during development. We investigated the continuity of genetic overlap between ASD and ADHD symptoms in a general population sample during childhood and adolescence. We also studied uni- and cross-dimensional trait-disorder links with respect to genetic ADHD and ASD risk.
Methods: Social-communication difficulties (N ≤ 5551, Social and Communication Disorders Checklist, SCDC) and combined hyperactive-impulsive/inattentive ADHD symptoms (N ≤ 5678, Strengths and Difficulties Questionnaire, SDQ-ADHD) were repeatedly measured in a UK birth cohort (ALSPAC, age 7 to 17 years). Genome-wide summary statistics on clinical ASD (5305 cases; 5305 pseudo-controls) and ADHD (4163 cases; 12,040 controls/pseudo-controls) were available from the Psychiatric Genomics Consortium. Genetic trait variances and genetic overlap between phenotypes were estimated using genome-wide data.
Results: In the general population, genetic influences for SCDC and SDQ-ADHD scores were shared throughout development. Genetic correlations across traits reached a similar strength and magnitude (cross-trait rg ≤ 1, pmin = 3 × 10−4) as those between repeated measures of the same trait (within-trait rg ≤ 0.94, pmin = 7 × 10−4). Shared genetic influences between traits, especially during later adolescence, may implicate variants in K-RAS signalling upregulated genes (p-meta = 6.4 × 10−4). Uni-dimensionally, each population-based trait mapped to the expected behavioural continuum: risk-increasing alleles for clinical ADHD were persistently associated with SDQ-ADHD scores throughout development (marginal regression R2 = 0.084%). An age-specific genetic overlap between clinical ASD and social-communication difficulties during childhood was also shown, as per previous reports. Cross-dimensionally, however, neither SCDC nor SDQ-ADHD scores were linked to genetic risk for disorder.
Conclusions: In the general population, genetic aetiologies between social-communication difficulties and ADHD symptoms are shared throughout child and adolescent development and may implicate similar biological pathways that co-vary during development. Within both the ASD and the ADHD dimension, population-based traits are also linked to clinical disorder, although much larger clinical discovery samples are required to reliably detect cross-dimensional trait-disorder relationships. -
Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2017). Second language attainment and first language attrition: The case of VOT in immersed Dutch–German late bilinguals. Second Language Research, 33(4), 483-518. doi:10.1177/0267658317704261.
Abstract
Speech of late bilinguals has frequently been described in terms of cross-linguistic influence (CLI) from the native language (L1) to the second language (L2), but CLI from the L2 to the L1 has received relatively little attention. This article addresses L2 attainment and L1 attrition in voicing systems through measures of voice onset time (VOT) in two groups of Dutch–German late bilinguals in the Netherlands. One group comprises native speakers of Dutch and the other group comprises native speakers of German, and the two groups further differ in their degree of L2 immersion. The L1-German–L2-Dutch bilinguals (N = 23) are exposed to their L2 at home and outside the home, and the L1-Dutch–L2-German bilinguals (N = 18) are only exposed to their L2 at home. We tested L2 attainment by comparing the bilinguals’ L2 to the other bilinguals’ L1, and L1 attrition by comparing the bilinguals’ L1 to Dutch monolinguals (N = 29) and German monolinguals (N = 27). Our findings indicate that complete L2 immersion may be advantageous in L2 acquisition, but at the same time it may cause L1 phonetic attrition. We discuss how the results match the predictions made by Flege’s Speech Learning Model and explore how far bilinguals’ success in acquiring L2 VOT and maintaining L1 VOT depends on the immersion context, articulatory constraints and the risk of sounding foreign accented. -
Ye, Z., Stolk, A., Toni, I., & Hagoort, P. (2017). Oxytocin modulates semantic integration in speech comprehension. Journal of Cognitive Neuroscience, 29, 267-276. doi:10.1162/jocn_a_01044.
Abstract
Listeners interpret utterances by integrating information from multiple sources including word level semantics and world knowledge. When the semantics of an expression is inconsistent with his or her knowledge about the world, the listener may have to search through the conceptual space for alternative possible world scenarios that can make the expression more acceptable. Such cognitive exploration requires considerable computational resources and might depend on motivational factors. This study explores whether and how oxytocin, a neuropeptide known to influence social motivation by reducing social anxiety and enhancing affiliative tendencies, can modulate the integration of world knowledge and sentence meanings. The study used a between-participant double-blind randomized placebo-controlled design. Semantic integration, indexed with magnetoencephalography through the N400m marker, was quantified while 45 healthy male participants listened to sentences that were either congruent or incongruent with facts of the world, after receiving intranasally delivered oxytocin or placebo. Compared with congruent sentences, world knowledge incongruent sentences elicited a stronger N400m signal from the left inferior frontal and anterior temporal regions and medial pFC (the N400m effect) in the placebo group. Oxytocin administration significantly attenuated the N400m effect at both sensor and cortical source levels throughout the experiment, in a state-like manner. Additional electrophysiological markers suggest that the absence of the N400m effect in the oxytocin group is unlikely due to the lack of early sensory or semantic processing or a general downregulation of attention. These findings suggest that oxytocin drives listeners to resolve challenges of semantic integration, possibly by promoting the cognitive exploration of alternative possible world scenarios. -
Sumer, B., Perniss, P. M., & Ozyurek, A. (2017). A first study on the development of spatial viewpoint in sign language acquisition: The case of Turkish Sign Language. In F. N. Ketrez, A. C. Kuntay, S. Ozcalıskan, & A. Ozyurek (Eds.), Social Environment and Cognition in Language Development: Studies in Honor of Ayhan Aksu-Koc (pp. 223-240). Amsterdam: John Benjamins. doi:10.1075/tilar.21.14sum.
Abstract
The current study examines, for the first time, the viewpoint preferences of signing children in expressing spatial relations that require imposing a viewpoint (left-right, front-behind). We elicited spatial descriptions from deaf children (4–9 years of age) acquiring Turkish Sign Language (TİD) natively from their deaf parents and from adult native signers of TİD. Adults produced these spatial descriptions from their own viewpoint and from that of their addressee depending on whether the objects were located on the lateral or the sagittal axis. TİD-acquiring children, on the other hand, described all spatial configurations from their own viewpoint. Differences were also found between children and adults in the type of linguistic devices and how they are used to express such spatial relations. -
Sumer, B. (2015). Acquisition of spatial language by signing and speaking children: A comparison of Turkish Sign Language (TID) and Turkish. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Sumer, B., Grabitz, C., & Küntay, A. (2017). Early produced signs are iconic: Evidence from Turkish Sign Language. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 3273-3278). Austin, TX: Cognitive Science Society.
Abstract
Motivated form-meaning mappings are pervasive in sign languages, and iconicity has recently been shown to facilitate sign learning from early on. This study investigated the role of iconicity for language acquisition in Turkish Sign Language (TID). Participants were 43 signing children (aged 10 to 45 months) of deaf parents. Sign production ability was recorded using the adapted version of MacArthur Bates Communicative Developmental Inventory (CDI) consisting of 500 items for TID. Iconicity and familiarity ratings for a subset of 104 signs were available. Our results revealed that the iconicity of a sign was positively correlated with the percentage of children producing a sign and that iconicity significantly predicted the percentage of children producing a sign, independent of familiarity or phonological complexity. Our results are consistent with previous findings on sign language acquisition and provide further support for the facilitating effect of iconic form-meaning mappings in sign learning.
Additional information
https://mindmodeling.org/cogsci2017/papers/0619/paper0619.pdf -
Sweegers, C. C. G., Takashima, A., Fernández, G., & Talamini, L. M. (2015). Neural mechanisms supporting the extraction of general knowledge across episodic memories. NeuroImage, 87, 138-146. doi:10.1016/j.neuroimage.2013.10.063.
Abstract
General knowledge acquisition entails the extraction of statistical regularities from the environment. At high levels of complexity, this may involve the extraction, and consolidation, of associative regularities across event memories. The underlying neural mechanisms would likely involve a hippocampo-neocortical dialog, as proposed previously for system-level consolidation. To test these hypotheses, we assessed possible differences in consolidation between associative memories containing cross-episodic regularities and unique associative memories. Subjects learned face–location associations, half of which responded to complex regularities regarding the combination of facial features and locations, whereas the other half did not. Importantly, regularities could only be extracted over hippocampus-encoded, associative aspects of the items. Memory was assessed both immediately after encoding and 48 h later, under fMRI acquisition. Our results suggest that processes related to system-level reorganization occur preferentially for regular associations across episodes. Moreover, the build-up of general knowledge regarding regular associations appears to involve the coordinated activity of the hippocampus and mediofrontal regions. The putative cross-talk between these two regions might support a mechanism for regularity extraction. These findings suggest that the consolidation of cross-episodic regularities may be a key mechanism underlying general knowledge acquisition. -
Tachmazidou, I., Süveges, D., Min, J. L., Ritchie, G. R. S., Steinberg, J., Walter, K., Iotchkova, V., Schwartzentruber, J., Huang, J., Memari, Y., McCarthy, S., Crawford, A. A., Bombieri, C., Cocca, M., Farmaki, A.-E., Gaunt, T. R., Jousilahti, P., Kooijman, M. N., Lehne, B., Malerba, G., Männistö, S., Matchan, A., Medina-Gomez, C., Metrustry, S. J., Nag, A., Ntalla, I., Paternoster, L., Rayner, N. W., Sala, C., Scott, W. R., Shihab, H. A., Southam, L., St Pourcain, B., Traglia, M., Trajanoska, K., Zaza, G., Zhang, W., Artigas, M. S., Bansal, N., Benn, M., Chen, Z., Danecek, P., Lin, W.-Y., Locke, A., Luan, J., Manning, A. K., Mulas, A., Sidore, C., Tybjaerg-Hansen, A., Varbo, A., Zoledziewska, M., Finan, C., Hatzikotoulas, K., Hendricks, A. E., Kemp, J. P., Moayyeri, A., Panoutsopoulou, K., Szpak, M., Wilson, S. G., Boehnke, M., Cucca, F., Di Angelantonio, E., Langenberg, C., Lindgren, C., McCarthy, M. I., Morris, A. P., Nordestgaard, B. G., Scott, R. A., Tobin, M. D., Wareham, N. J., Burton, P., Chambers, J. C., Smith, G. D., Dedoussis, G., Felix, J. F., Franco, O. H., Gambaro, G., Gasparini, P., Hammond, C. J., Hofman, A., Jaddoe, V. W. V., Kleber, M., Kooner, J. S., Perola, M., Relton, C., Ring, S. M., Rivadeneira, F., Salomaa, V., Spector, T. D., Stegle, O., Toniolo, D., Uitterlinden, A. G., Barroso, I., Greenwood, C. M. T., Perry, J. R. B., Walker, B. R., Butterworth, A. S., Xue, Y., Durbin, R., Small, K. S., Soranzo, N., Timpson, N. J., & Zeggini, E. (2017). Whole-Genome Sequencing coupled to imputation discovers genetic signals for anthropometric traits. The American Journal of Human Genetics, 100(6), 865-884. doi:10.1016/j.ajhg.2017.04.014.
Abstract
Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader allelic architecture of 12 anthropometric traits associated with height, body mass, and fat distribution in up to 267,616 individuals. We report 106 genome-wide significant signals that have not been previously identified, including 9 low-frequency variants pointing to functional candidates. Of the 106 signals, 6 are in genomic regions that have not been implicated with related traits before, 28 are independent signals at previously reported regions, and 72 represent previously reported signals for a different anthropometric trait. 71% of signals reside within genes and fine mapping resolves 23 signals to one or two likely causal variants. We confirm genetic overlap between human monogenic and polygenic anthropometric traits and find signal enrichment in cis expression QTLs in relevant tissues. Our results highlight the potential of WGS strategies to enhance biologically relevant discoveries across the frequency spectrum.
Additional information
http://www.sciencedirect.com/science/article/pii/S0002929717301593#appd002 -
Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2017). Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words. Brain and Language, 167, 44-60. doi:10.1016/j.bandl.2016.05.009.
Abstract
When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems. -
Takashima, A., & Bakker, I. (2017). Memory consolidation. In H.-J. Schmid (Ed.), Entrenchment and the Psychology of Language Learning: How We Reorganize and Adapt Linguistic Knowledge (pp. 177-200). Berlin: De Gruyter Mouton. -
Tamaoka, K., Makioka, S., Sanders, S., & Verdonschot, R. G. (2017). www.kanjidatabase.com: A new interactive online database for psychological and linguistic research on Japanese kanji and their compound words. Psychological Research, 81(3), 696-708. doi:10.1007/s00426-016-0764-3.
Abstract
Most experimental research making use of the Japanese language has involved the 1945 officially standardized kanji (Japanese logographic characters) in the Joyo kanji list (originally announced by the Japanese government in 1981). However, this list was extensively modified in 2010: five kanji were removed and 196 kanji were added; the latest revision of the list now has a total of 2136 kanji. Using an up-to-date corpus consisting of 11 years' worth of articles printed in the Mainichi Newspaper (2000-2010), we have constructed two novel databases that can be used in psychological research using the Japanese language: (1) a database containing a wide variety of properties on the latest 2136 Joyo kanji, and (2) a novel database containing 27,950 two-kanji compound words (or jukugo). Based on these two databases, we have created an interactive website (www.kanjidatabase.com) to retrieve and store linguistic information to be used in psychological and linguistic experiments. The present paper reports the most important characteristics for the new databases, as well as their value for experimental psychological and linguistic research. -
Tan, Y., Martin, R. C., & Van Dyke, J. A. (2017). Semantic and syntactic interference in sentence comprehension: A comparison of working memory models. Frontiers in Psychology, 8: 198. doi:10.3389/fpsyg.2017.00198.
Abstract
This study investigated the nature of the underlying working memory system supporting sentence processing through examining individual differences in sensitivity to retrieval interference effects during sentence comprehension. Interference effects occur when readers incorrectly retrieve sentence constituents which are similar to those required during integrative processes. We examined interference arising from a partial match between distracting constituents and syntactic and semantic cues, and related these interference effects to performance on working memory, short-term memory (STM), vocabulary, and executive function tasks. For online sentence comprehension, as measured by self-paced reading, the magnitude of individuals' syntactic interference effects was predicted by general WM capacity and the relation remained significant when partialling out vocabulary, indicating that the effects were not due to verbal knowledge. For offline sentence comprehension, as measured by responses to comprehension questions, both general WM capacity and vocabulary knowledge interacted with semantic interference for comprehension accuracy, suggesting that both general WM capacity and the quality of semantic representations played a role in determining how well interference was resolved offline. For comprehension question reaction times, a measure of semantic STM capacity interacted with semantic but not syntactic interference. However, a measure of phonological capacity (digit span) and a general measure of resistance to response interference (Stroop effect) did not predict individuals' interference resolution abilities in either online or offline sentence comprehension. The results are discussed in relation to the multiple capacities account of working memory (e.g., Martin and Romani, 1994; Martin and He, 2004), and the cue-based retrieval parsing approach (e.g., Lewis et al., 2006; Van Dyke et al., 2014). 
While neither approach was fully supported, a possible means of reconciling the two approaches and directions for future research are proposed. -
Tanner, J. E., & Perlman, M. (2017). Moving beyond ‘meaning’: Gorillas combine gestures into sequences for creative display. Language & Communication, 54, 56-72. doi:10.1016/j.langcom.2016.10.006.
Abstract
The great apes produce gestures intentionally and flexibly, and sometimes they combine their gestures into sequences, producing two or more gestures in close succession. We reevaluate previous findings related to ape gesture sequences and present qualitative analysis of videotaped gorilla interaction. We present evidence that gorillas produce at least two different kinds of gesture sequences: some sequences are largely composed of gestures that depict motion in an iconic manner, typically requesting particular action by the partner; others are multimodal and contain gestures – often percussive in nature – that are performed in situations of play or display. Display sequences seem to primarily exhibit the performer’s emotional state and physical fitness but have no immediate functional goal. Analysis reveals that some gorilla play and display sequences can be 1) organized hierarchically into longer bouts and repetitions; 2) innovative and individualized, incorporating objects and environmental features; and 3) highly interactive between partners. It is illuminating to look beyond ‘meaning’ in the conventional linguistic sense and look at the possibility that characteristics of music and dance, as well as those of language, are included in the gesturing of apes. -
Tarenskeen, S., Broersma, M., & Geurts, B. (2015). Overspecification of color, pattern, and size: Salience, absoluteness, and consistency. Frontiers in Psychology, 6: 1703. doi:10.3389/fpsyg.2015.01703.
Abstract
The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Using a between-participants design, however, we find similar rates of pattern and size overspecification, which are both lower than the rate of color overspecification. This indicates that although many speakers are more likely to include color than pattern (probably because color is more salient), they may also treat pattern like color due to a tendency toward consistency. We find no increase in size overspecification when the salience of size is increased, suggesting that speakers are more likely to include absolute than relative attributes. However, we do find an increase in size overspecification when mentioning the attributes is triggered, which again shows that speakers tend to refer in a consistent manner, and that there are circumstances in which even size overspecification is frequently produced. -
Tekcan, A. I., Yilmaz, E., Kaya Kızılöz, B., Karadöller, D. Z., Mutafoğlu, M., & Erciyes, A. (2015). Retrieval and phenomenology of autobiographical memories in blind individuals. Memory, 23(3), 329-339. doi:10.1080/09658211.2014.886702.
Abstract
Although visual imagery is argued to be an essential component of autobiographical memory, there have been surprisingly few studies on autobiographical memory processes in blind individuals, who have had no or limited visual input. The purpose of the present study was to investigate how blindness affects retrieval and phenomenology of autobiographical memories. We asked 48 congenital/early blind and 48 sighted participants to recall autobiographical memories in response to six cue words, and to fill out the Autobiographical Memory Questionnaire measuring a number of variables including imagery, belief and recollective experience associated with each memory. Blind participants retrieved fewer memories and reported higher auditory imagery at retrieval than sighted participants. Moreover, within the blind group, participants with total blindness reported higher auditory imagery than those with some light perception. Blind participants also assigned higher importance, belief and recollection ratings to their memories than sighted participants. Importantly, these group differences remained the same for recent as well as childhood memories. -
Ten Bosch, L., Boves, L., & Ernestus, M. (2015). DIANA, an end-to-end computational model of human word comprehension. In Scottish Consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.
Abstract
This paper presents DIANA, a new computational model of human speech processing. It is the first model that simulates the complete processing chain from the on-line processing of an acoustic signal to the execution of a response, including reaction times. Moreover, it assumes minimal modularity. DIANA consists of three components. The activation component computes a probabilistic match between the input acoustic signal and representations in DIANA’s lexicon, resulting in a list of word hypotheses changing over time as the input unfolds. The decision component operates on this list and selects a word as soon as sufficient evidence is available. Finally, the execution component accounts for the time to execute a behavioral action. We show that DIANA simulates the average participant in a word recognition experiment well. -
Ten Bosch, L., Boves, L., Tucker, B., & Ernestus, M. (2015). DIANA: Towards computational modeling reaction times in lexical decision in North American English. In Proceedings of Interspeech 2015: The 16th Annual Conference of the International Speech Communication Association (pp. 1576-1580).
Abstract
DIANA is an end-to-end computational model of speech processing, which takes as input the speech signal, and provides as output the orthographic transcription of the stimulus, a word/non-word judgment and the associated estimated reaction time. So far, the model has only been tested for Dutch. In this paper, we extend DIANA such that it can also process North American English. The model is tested by having it simulate human participants in a large scale North American English lexical decision experiment. The simulations show that DIANA can adequately approximate the reaction times of an average participant (r = 0.45). In addition, they indicate that DIANA does not yet adequately model the cognitive processes that take place after stimulus offset. -
Ten Oever, S., Van Atteveldt, N., & Sack, A. T. (2015). Increased stimulus expectancy triggers low-frequency phase reset during restricted vigilance. Journal of Cognitive Neuroscience, 27(9), 1811-1822. doi:10.1162/jocn_a_00820.
Abstract
Temporal cues can be used to selectively attend to relevant information during abundant sensory stimulation. However, such cues differ vastly in the accuracy of their temporal estimates, ranging from very predictable to very unpredictable. When cues are strongly predictable, attention may facilitate selective processing by aligning relevant incoming information to high neuronal excitability phases of ongoing low-frequency oscillations. However, top-down effects on ongoing oscillations when temporal cues have some predictability, but also contain temporal uncertainties, are unknown. Here, we experimentally created such a situation of mixed predictability and uncertainty: A target could occur within a limited time window after cue but was always unpredictable in exact timing. Crucially to assess top-down effects in such a mixed situation, we manipulated target probability. High target likelihood, compared with low likelihood, enhanced delta oscillations more strongly as measured by evoked power and intertrial coherence. Moreover, delta phase modulated detection rates for probable targets. The delta frequency range corresponds with half-a-period to the target occurrence window and therefore suggests that low-frequency phase reset is engaged to produce a long window of high excitability when event timing is uncertain within a restricted temporal window. -
Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., Mehta, A. D., Megevand, P., Groppe, D. M., & Zion-Golumbic, E. (2017). Low-frequency cortical oscillations entrain to subthreshold rhythmic auditory stimuli. The Journal of Neuroscience, 37(19), 4903-4912. doi:10.1523/JNEUROSCI.3658-16.2017.
Abstract
Many environmental stimuli contain temporal regularities, a feature that can help predict forthcoming input. Phase locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (ten Oever et al., 2014). It is not known whether this "inaudible" rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography and electrocorticography in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this prethreshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in intertrial coherence, only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness. -
Ten Oever, S., & Sack, A. T. (2015). Oscillatory phase shapes syllable perception. Proceedings of the National Academy of Sciences of the United States of America, 112(52), 15833-15837. doi:10.1073/pnas.1517519112.
Abstract
The role of oscillatory phase for perceptual and cognitive processes is being increasingly acknowledged. To date, little is known about the direct role of phase in categorical perception. Here we show in two separate experiments that the identification of ambiguous syllables that can either be perceived as /da/ or /ga/ is biased by the underlying oscillatory phase as measured with EEG and sensory entrainment to rhythmic stimuli. The measured phase difference in which perception is biased toward /da/ or /ga/ exactly matched the different temporal onset delays in natural audiovisual speech between mouth movements and speech sounds, which last 80 ms longer for /ga/ than for /da/. These results indicate the functional relationship between prestimulus phase and syllable identification, and signify that the origin of this phase relationship could lie in exposure and subsequent learning of unique audiovisual temporal onset differences. -
Ten Bosch, L., Boves, L., & Ernestus, M. (2017). The recognition of compounds: A computational account. In Proceedings of Interspeech 2017 (pp. 1158-1162). doi:10.21437/Interspeech.2017-1048.
Abstract
This paper investigates the processes in comprehending spoken noun-noun compounds, using data from the BALDEY database. BALDEY contains lexicality judgments and reaction times (RTs) for Dutch stimuli for which also linguistic information is included. Two different approaches are combined. The first is based on regression by Dynamic Survival Analysis, which models decisions and RTs as a consequence of the fact that a cumulative density function exceeds some threshold. The parameters of that function are estimated from the observed RT data. The second approach is based on DIANA, a process-oriented computational model of human word comprehension, which simulates the comprehension process with the acoustic stimulus as input. DIANA gives the identity and the number of the word candidates that are activated at each 10 ms time step.
Both approaches show how the processes involved in comprehending compounds change during a stimulus. Survival Analysis shows that the impact of word duration varies during the course of a stimulus. The density of word and non-word hypotheses in DIANA shows a corresponding pattern with different regimes. We show how the approaches complement each other, and discuss additional ways in which data and process models can be combined. -
Terband, H., Rodd, J., & Maas, E. (2015). Simulations of feedforward and feedback control in apraxia of speech (AOS): Effects of noise masking on vowel production in the DIVA model. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015).
Abstract
Apraxia of Speech (AOS) is a motor speech disorder whose precise nature is still poorly understood. A recent behavioural experiment featuring a noise masking paradigm suggests that AOS reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts [10]. In the present study, we set out to validate the interpretation of AOS as a feedforward impairment by means of a series of computational simulations with the DIVA model [6, 7] mimicking the behavioural experiment. Simulation results showed a larger reduction in vowel spacing and a smaller vowel dispersion in the masking condition compared to the no-masking condition for the simulated feedforward deficit, whereas the other groups showed an opposite pattern. These results mimic the patterns observed in the human data, corroborating the notion that AOS can be conceptualized as a deficit in feedforward control.
Additional information
http://www.icphs2015.info/pdfs/Papers/ICPHS0406.pdf -
Thielen, J.-W., Takashima, A., Rutters, F., Tendolkar, I., & Fernandez, G. (2015). Transient relay function of midline thalamic nuclei during long-term memory consolidation in humans. Learning & Memory, 22, 527-531. doi:10.1101/lm.038372.115.
Abstract
To test the hypothesis that thalamic midline nuclei play a transient role in memory consolidation, we reanalyzed a prospective functional MRI study, contrasting recent and progressively more remote memory retrieval. We revealed a transient thalamic connectivity increase with the hippocampus, the medial prefrontal cortex (mPFC), and a parahippocampal area, which decreased with time. In turn, mPFC-parahippocampal connectivity increased progressively. These findings support a model in which thalamic midline nuclei serve as a hub linking hippocampus, mPFC, and posterior representational areas during memory retrieval at an early (2 h) stage of consolidation, extending classical systems consolidation models by attributing a transient role to midline thalamic nuclei. -
Thompson, P. M., Andreassen, O. A., Arias-Vasquez, A., Bearden, C. E., Boedhoe, P. S., Brouwer, R. M., Buckner, R. L., Buitelaar, J. K., Bulaeva, K. B., Cannon, D. M., Cohen, R. A., Conrod, P. J., Dale, A. M., Deary, I. J., Dennis, E. L., De Reus, M. A., Desrivieres, S., Dima, D., Donohoe, G., Fisher, S. E., Fouche, J.-P., Francks, C., Frangou, S., Franke, B., Ganjgahi, H., Garavan, H., Glahn, D. C., Grabe, H. J., Guadalupe, T., Gutman, B. A., Hashimoto, R., Hibar, D. P., Holland, D., Hoogman, M., Pol, H. E. H., Hosten, N., Jahanshad, N., Kelly, S., Kochunov, P., Kremen, W. S., Lee, P. H., Mackey, S., Martin, N. G., Mazoyer, B., McDonald, C., Medland, S. E., Morey, R. A., Nichols, T. E., Paus, T., Pausova, Z., Schmaal, L., Schumann, G., Shen, L., Sisodiya, S. M., Smit, D. J., Smoller, J. W., Stein, D. J., Stein, J. L., Toro, R., Turner, J. A., Van den Heuvel, M., Van den Heuvel, O. A., Van Erp, T. G., Van Rooij, D., Veltman, D. J., Walter, H., Wang, Y., Wardlaw, J. M., Whelan, C. D., Wright, M. J., & Ye, J. (2017). ENIGMA and the Individual: Predicting Factors that Affect the Brain in 35 Countries Worldwide. NeuroImage, 145, 389-408. doi:10.1016/j.neuroimage.2015.11.057.
-
Thompson, J. R., Minelli, C., Bowden, J., Del Greco, F. M., Gill, D., Jones, E. M., Shapland, C. Y., & Sheehan, N. A. (2017). Mendelian randomization incorporating uncertainty about pleiotropy. Statistics in Medicine, 36(29), 4627-4645. doi:10.1002/sim.7442.
Abstract
Mendelian randomization (MR) requires strong assumptions about the genetic instruments, of which the most difficult to justify relate to pleiotropy. In a two-sample MR, different methods of analysis are available if we are able to assume, M1: no pleiotropy (fixed effects meta-analysis), M2: that there may be pleiotropy but that the average pleiotropic effect is zero (random effects meta-analysis), and M3: that the average pleiotropic effect is nonzero (MR-Egger). In the latter 2 cases, we also require that the size of the pleiotropy is independent of the size of the effect on the exposure. Selecting one of these models without good reason would run the risk of misrepresenting the evidence for causality. The most conservative strategy would be to use M3 in all analyses as this makes the weakest assumptions, but such an analysis gives much less precise estimates and so should be avoided whenever stronger assumptions are credible. We consider the situation of a two-sample design when we are unsure which of these 3 pleiotropy models is appropriate. The analysis is placed within a Bayesian framework and Bayesian model averaging is used. We demonstrate that even large samples of the scale used in genome-wide meta-analysis may be insufficient to distinguish the pleiotropy models based on the data alone. Our simulations show that Bayesian model averaging provides a reasonable trade-off between bias and precision. Bayesian model averaging is recommended whenever there is uncertainty about the nature of the pleiotropy.
Additional information
sim7442-sup-0001-Supplementary.pdf -
Thorgrimsson, G., Fawcett, C., & Liszkowski, U. (2015). 1- and 2-year-olds’ expectations about third-party communicative actions. Infant Behavior and Development, 39, 53-66. doi:10.1016/j.infbeh.2015.02.002.
Abstract
Infants expect people to direct actions toward objects, and they respond to actions directed to themselves, but do they have expectations about actions directed to third parties? In two experiments, we used eye tracking to investigate 1- and 2-year-olds’ expectations about communicative actions addressed to a third party. Experiment 1 presented infants with videos where an adult (the Emitter) either uttered a sentence or produced non-speech sounds. The Emitter was either face-to-face with another adult (the Recipient) or the two were back-to-back. The Recipient did not respond to any of the sounds. We found that 2-, but not 1-year-olds looked quicker and longer at the Recipient following speech than non-speech, suggesting that they expected her to respond to speech. These effects were specific to the face-to-face context. Experiment 2 presented 1-year-olds with similar face-to-face exchanges but modified to engage infants and minimize task demands. The infants looked quicker to the Recipient following speech than non-speech, suggesting that they expected a response to speech. The study suggests that by 1 year of age infants expect communicative actions to be directed at a third-party listener. -
Tilot, A. K., Frazier, T. W., & Eng, C. (2015). Balancing proliferation and connectivity in PTEN-associated Autism Spectrum Disorder. Neurotherapeutics, 13(3), 609-619. doi:10.1007/s13311-015-0356-8.
Abstract
Germline mutations in PTEN, which encodes a widely expressed phosphatase mapped to 10q23, underlie Cowden syndrome, characterized by macrocephaly and high risks of breast, thyroid, and other cancers. The phenotypic spectrum of PTEN mutations expanded to include autism with macrocephaly only 10 years ago. Neurological studies of patients with PTEN-associated autism spectrum disorder (ASD) show increases in cortical white matter and a distinctive cognitive profile, including delayed language development with poor working memory and processing speed. Once a germline PTEN mutation is found, and a diagnosis of phosphatase and tensin homolog (PTEN) hamartoma tumor syndrome made, the clinical outlook broadens to include higher lifetime risks for multiple cancers, beginning in childhood with thyroid cancer. First described as a tumor suppressor, PTEN is a major negative regulator of the phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin (mTOR) signaling pathway, controlling growth, protein synthesis, and proliferation. This canonical function combines with less well-understood mechanisms to influence synaptic plasticity and neuronal cytoarchitecture. Several excellent mouse models of Pten loss or dysfunction link these neural functions to autism-like behavioral abnormalities, such as altered sociability, repetitive behaviors, and phenotypes like anxiety that are often associated with ASD in humans. These models also show the promise of mTOR inhibitors as therapeutic agents capable of reversing phenotypes ranging from overgrowth to low social behavior. Based on these findings, therapeutic options for patients with PTEN hamartoma tumor syndrome and ASD are coming into view, even as new discoveries in PTEN biology add complexity to our understanding of this master regulator.
Additional information
13311_2015_356_MOESM1_ESM.pdf -
Todorovic, A., Schoffelen, J.-M., van Ede, F., Maris, E., & de Lange, F. P. (2015). Temporal expectation and attention jointly modulate auditory oscillatory activity in the beta band. PLoS One, 10(3): e0120288. doi:10.1371/journal.pone.0120288.
Abstract
The neural response to a stimulus is influenced by endogenous factors such as expectation and attention. Current research suggests that expectation and attention exert their effects in opposite directions, where expectation decreases neural activity in sensory areas, while attention increases it. However, expectation and attention are usually studied either in isolation or confounded with each other. A recent study suggests that expectation and attention may act jointly on sensory processing, by increasing the neural response to expected events when they are attended, but decreasing it when they are unattended. Here we test this hypothesis in an auditory temporal cueing paradigm using magnetoencephalography in humans. In our study participants attended to, or away from, tones that could arrive at expected or unexpected moments. We found a decrease in auditory beta band synchrony to expected (versus unexpected) tones if they were unattended, but no difference if they were attended. Modulations in beta power were already evident prior to the expected onset times of the tones. These findings suggest that expectation and attention jointly modulate sensory processing. -
Torreira, F., Bögels, S., & Levinson, S. C. (2015). Breathing for answering: The time course of response planning in conversation. Frontiers in Psychology, 6: 284. doi:10.3389/fpsyg.2015.00284.
Abstract
In this study, we investigate the timing of pre-answer inbreaths in order to shed light on the time course of response planning and execution in conversational turn-taking. Using acoustic and inductive plethysmography recordings of seven dyadic conversations in Dutch, we show that pre-answer inbreaths in conversation typically begin briefly after the end of questions. We also show that the presence of a pre-answer inbreath usually co-occurs with substantially delayed answers, with a modal latency of 576 ms vs. 100 ms for answers not preceded by an inbreath. Based on previously reported minimal latencies for internal intercostal activation and the production of speech sounds, we propose that vocal responses, either in the form of a pre-utterance inbreath or of speech proper when an inbreath is not produced, are typically launched in reaction to information present in the last portion of the interlocutor’s turn. We also show that short responses are usually made on residual breath, while longer responses are more often preceded by an inbreath. This relation of inbreaths to answer length suggests that by the time an inbreath is launched, typically during the last few hundred milliseconds of the question, the length of the answer is often prepared to some extent. Together, our findings are consistent with a two-stage model of response planning in conversational turn-taking: early planning of content often carried out in overlap with the incoming turn, and late launching of articulation based on the identification of turn-final cues. -
Torreira, F. (2015). Melodic alternations in Spanish. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) (pp. 946.1-5). Glasgow, UK: The University of Glasgow. Retrieved from http://www.icphs2015.info/pdfs/Papers/ICPHS0946.pdf.
Abstract
This article describes how the tonal elements of two common Spanish intonation contours –the falling statement and the low-rising-falling request– align with the segmental string in broad-focus utterances differing in number of prosodic words. Using an imitation-and-completion task, we show (i) that the last stressed syllable of the utterance, traditionally viewed as carrying the ‘nuclear’ accent, associates with either a high or a low tonal element depending on phrase length, (ii) that certain tonal elements can be realized or omitted depending on the availability of specific metrical positions in their intonational phrase, and (iii) that the high tonal element of the request contour associates with either a stressed syllable or an intonational phrase edge depending on phrase length. On the basis of these facts, and in contrast to previous descriptions of Spanish intonation relying on obligatory and constant nuclear contours (e.g., L* L% for all neutral statements), we argue for a less constrained intonational morphology involving tonal units linked to the segmental string via contour-specific principles. -
Torreira, F., & Valtersson, E. (2015). Phonetic and visual cues to questionhood in French conversation. Phonetica, 72, 20-42. doi:10.1159/000381723.
Abstract
We investigate the extent to which French polar questions and continuation statements, two types of utterances with similar morphosyntactic and intonational forms but different pragmatic functions, can be distinguished in conversational data based on phonetic and visual bodily information. We show that the two utterance types can be distinguished well over chance level by automatic classification models including several phonetic and visual cues. We also show that a considerable amount of relevant phonetic and visual information is present before the last portion of the utterances, potentially assisting early speech act recognition by addressees. These findings indicate that bottom-up phonetic and visual cues may play an important role during the production and recognition of speech acts alongside top-down contextual information. -
Tourtouri, E. N., Delogu, F., & Crocker, M. W. (2015). ERP indices of situated reference in visual contexts. In D. Noelle, R. Dale, A. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 2422-2427). Austin: Cognitive Science Society.
Abstract
Violations of the maxims of Quantity occur when utterances provide more (over-specified) or less (under-specified) information than strictly required for referent identification. While behavioural data suggest that under-specified expressions lead to comprehension difficulty and communicative failure, there is no consensus as to whether over-specified expressions are also detrimental to comprehension. In this study we shed light on this debate, providing neurophysiological evidence supporting the view that extra information facilitates comprehension. We further present novel evidence that referential failure due to under-specification is qualitatively different from explicit cases of referential failure, when no matching referential candidate is available in the context. -
Travis, C. E., Cacoullos, R. T., & Kidd, E. (2017). Cross-language priming: A view from bilingual speech. Bilingualism: Language and Cognition, 20(2), 283-298. doi:10.1017/S1366728915000127.
Abstract
In the current paper we report on a study of priming of variable Spanish 1sg subject expression in spontaneous Spanish–English bilingual speech (based on the New Mexico Spanish–English Bilingual corpus, Torres Cacoullos & Travis, in preparation). We show both within- and cross-language Coreferential Subject Priming; however, cross-language priming from English to Spanish is weaker and shorter lived than within-language Spanish-to-Spanish priming, a finding that appears not to be attributable to lexical boost. Instead, interactions with subject continuity and verb type show that the strength of priming depends on co-occurring contextual features and particular [pronoun + verb] constructions, from the more lexically specific to the more schematically general. Quantitative patterns in speech thus offer insights unavailable from experimental work into the scope and locus of priming effects, suggesting that priming in bilingual discourse can serve to gauge degrees of strength of within- and cross-language associations between usage-based constructions. -
Trenite, D., Volkers, L., Strengman, E., Schippers, H. M., Perquin, W., de Haan, G. J., Gkountidi, A. O., van't Slot, R., de Graaf, S. F., Jocic-Jakubi, B., Capovilla, G., Covanis, A., Parisi, P., Veggiotti, P., Brinciotti, M., Incorpora, G., Piccioli, M., Cantonetti, L., Berkovic, S. F., Scheffer, I. E., Brilstra, E. H., Sonsma, A. C. M., Bader, A. J., De Kovel, C. G. F., & Koeleman, B. P. C. (2015). Clinical and genetic analysis of a family with two rare reflex epilepsies. Seizure-European Journal of Epilepsy, 29, 90-96. doi:10.1016/j.seizure.2015.03.020.
Abstract
Purpose: To determine clinical phenotypes, evolution and genetic background of a large family with a combination of two unusual forms of reflex epilepsies. Method: Phenotyping was performed in eighteen family members (10 F, 8 M) including standardized EEG recordings with intermittent photic stimulation (IPS). Genetic analyses (linkage scans, whole exome sequencing (WES) and functional studies) were performed using photoparoxysmal EEG responses (PPRs) as affection status. Results: The proband suffered from speaking-induced jaw jerks and increasing limb jerks evoked by flickering sunlight from about 50 years of age. Three of her family members had the same phenotype. Generalized PPRs were found in seven members (six above 50 years of age) with myoclonus during the PPR. Evolution was typical: sensitivity to lights with migraine-like complaints around adolescence, followed by jerks evoked by lights and spontaneously with dropping of objects, and strong increase of light sensitivity and onset of talking-induced jaw jerks around 50 years. Linkage analysis showed suggestive evidence for linkage to four genomic regions. All photosensitive family members shared a heterozygous R129C mutation in the SCNM1 gene that regulates splicing of voltage gated ion channels. Mutation screening of 134 unrelated PPR patients and 95 healthy controls did not replicate these findings. Conclusion: This family presents a combination of two rare reflex epilepsies. Genetic analysis favors four genomic regions and points to a shared SCNM1 mutation that was not replicated in a general cohort of photosensitive subjects. Further genetic studies in families with a similar combination of features are warranted. -
Trilsbeek, P., Broeder, D., Elbers, W., & Moreira, A. (2015). A sustainable archiving software solution for The Language Archive. In Proceedings of the 4th International Conference on Language Documentation and Conservation (ICLDC).
Additional information
http://hdl.handle.net/10125/25288 -
Troncoso Ruiz, A., & Elordieta, G. (2017). Prosodic accommodation and salience: The nuclear contours of Andalusian Spanish speakers in Asturias. Loquens, 4(2): e403. doi:10.3989/loquens.2017.043.
Abstract
This study investigates the convergent accommodating behaviour of Andalusian speakers (Southern Spain) relocated in Asturias (Northern Spain), a context of dialect contact, in terms of intonation. We aim to address three research questions: (1) is there evidence for accommodation? (2) Do social factors determine accommodation? And (3) does salience predict which prosodic features are more likely to be adopted by relocated speakers? We compiled a corpus of spontaneous speech including an experimental group of Andalusian speakers in Asturias and two control groups of Asturian and Andalusian people. The relocated Andalusians were interviewed by a speaker of Andalusian Spanish and a speaker of Amestáu (a hybrid variety between Asturian and Spanish), and their intonation patterns were compared to the ones found in the control populations. During the interviews, we also gathered data about how integrated these relocated speakers were in Asturias. We found that all participants show a tendency towards convergent accommodation to the Amestáu interlocutor, producing late falling pitch contours in nuclear position in declaratives and final falling contours in absolute interrogatives. The most integrated speakers in the Asturian community are the ones showing more features of the varieties spoken in the area. Finally, the most salient features to an Andalusian ear—the presence of final falls in Asturian, Asturian Spanish and Amestáu absolute interrogatives as opposed to final rises in Andalusian and Standard Peninsular Spanish—were the ones showing the highest percentages of adoption in relocated speakers. We conclude, then, that the most salient prosodic features are acquired more easily by the most integrated relocated speakers. -
Trujillo, J. P., Gerrits, N. J. H. M., Vriend, C., Berendse, H. W., van den Heuvel, O. A., & van der Werf, Y. (2015). Impaired planning in Parkinson's disease is reflected by reduced brain activation and connectivity. Human Brain Mapping, 36(9), 3703-3715. doi:10.1002/hbm.22873.
-
Trujillo, J. P., Gerrits, N. J. H. M., Veltman, D. J., Berendse, H. W., van der Werf, Y. D., & van den Heuvel, O. A. (2015). Reduced neural connectivity but increased task-related activity during working memory in de novo Parkinson patients. Human Brain Mapping, 36(4), 1554-1566. doi:10.1002/hbm.22723.
Abstract
Objective: Patients with Parkinson's disease (PD) often suffer from impairments in executive functions, such as working memory deficits. It is widely held that dopamine depletion in the striatum contributes to these impairments through decreased activity and connectivity between task-related brain networks. We investigated this hypothesis by studying task-related network activity and connectivity within a sample of de novo patients with PD, versus healthy controls, during a visuospatial working memory task. Methods: Sixteen de novo PD patients and 35 matched healthy controls performed a visuospatial n-back task while we measured their behavioral performance and neural activity using functional magnetic resonance imaging. We constructed regions-of-interest in the bilateral inferior parietal cortex (IPC), bilateral dorsolateral prefrontal cortex (DLPFC), and bilateral caudate nucleus to investigate group differences in task-related activity. We studied network connectivity by assessing the functional connectivity of the bilateral DLPFC and by assessing effective connectivity within the frontoparietal and the frontostriatal networks. Results: PD patients, compared with controls, showed trend-significantly decreased task accuracy, significantly increased task-related activity in the left DLPFC, and a trend-significant increase in activity of the right DLPFC, left caudate nucleus, and left IPC. Furthermore, we found reduced functional connectivity of the DLPFC with other task-related regions, such as the inferior and superior frontal gyri, in the PD group, and group differences in effective connectivity within the frontoparietal network. Interpretation: These findings suggest that the increase in working memory-related brain activity in PD patients is compensatory to maintain behavioral performance in the presence of network deficits. -
Tsoukala, C., Frank, S. L., & Broersma, M. (2017). “He's pregnant”: Simulating the confusing case of gender pronoun errors in L2 English. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society (CogSci 2017) (pp. 3392-3397). Austin, TX, USA: Cognitive Science Society.
Abstract
Even advanced Spanish speakers of second language English tend to confuse the pronouns ‘he’ and ‘she’, often without even noticing their mistake (Lahoz, 1991). A study by Antón-Méndez (2010) has indicated that a possible reason for this error is the fact that Spanish is a pro-drop language. In order to test this hypothesis, we used an extension of Dual-path (Chang, 2002), a computational cognitive model of sentence production, to simulate two models of bilingual speech production of second language English. One model had Spanish (ES) as a native language, whereas the other learned a Spanish-like language that used the pronoun at all times (non-pro-drop Spanish, NPD_ES). When tested on L2 English sentences, the bilingual pro-drop Spanish model produced significantly more gender pronoun errors, confirming that pronoun dropping could indeed be responsible for the gender confusion in natural language use as well.
Additional information
https://mindmodeling.org/cogsci2017/papers/0639/ -
Tsuji, S., Mazuka, R., Cristia, A., & Fikkert, P. (2015). Even at 4 months, a labial is a good enough coronal, but not vice versa. Cognition, 134, 252-256. doi:10.1016/j.cognition.2014.10.009.
Abstract
Numerous studies have revealed an asymmetry tied to the perception of coronal place of articulation: participants accept a labial mispronunciation of a coronal target, but not vice versa. Whether or not this asymmetry is based on language-general properties or arises from language-specific experience has been a matter of debate. The current study suggests a bias of the first type by documenting an early, cross-linguistic asymmetry related to coronal place of articulation. Japanese and Dutch 4- and 6-month-old infants showed evidence of discrimination if they were habituated to a labial and then tested on a coronal sequence, but not vice versa. This finding has important implications for both phonological theories and infant speech perception research.
Additional information
Tsuji_etal_suppl_2014.xlsx -
Tsuji, S., Fikkert, P., Minagawa, Y., Dupoux, E., Filippin, L., Versteegh, M., Hagoort, P., & Cristia, A. (2017). The more, the better? Behavioral and neural correlates of frequent and infrequent vowel exposure. Developmental Psychobiology, 59, 603-612. doi:10.1002/dev.21534.
Abstract
A central assumption in the perceptual attunement literature holds that exposure to a speech sound contrast leads to improvement in native speech sound processing. However, whether the amount of exposure matters for this process has not been put to a direct test. We elucidated indicators of frequency-dependent perceptual attunement by comparing 5–8-month-old Dutch infants’ discrimination of tokens containing a highly frequent [hɪt-he:t] and a highly infrequent [hʏt-hø:t] native vowel contrast as well as a non-native [hɛt-hæt] vowel contrast in a behavioral visual habituation paradigm (Experiment 1). Infants discriminated both native contrasts similarly well, but did not discriminate the non-native contrast. We sought further evidence for subtle differences in the processing of the two native contrasts using near-infrared spectroscopy and a within-participant design (Experiment 2). The neuroimaging data did not provide additional evidence that responses to native contrasts are modulated by frequency of exposure. These results suggest that even large differences in exposure to a native contrast may not directly translate to behavioral and neural indicators of perceptual attunement, raising the possibility that frequency of exposure does not influence improvements in discriminating native contrasts.
Additional information
dev21534-sup-0001-SuppInfo-S1.docx -
Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2017). Broca’s region: A causal role in implicit processing of grammars with crossed non-adjacent dependencies. Cognition, 164, 188-198. doi:10.1016/j.cognition.2017.03.010.
Abstract
Non-adjacent dependencies are challenging for the language learning machinery and are acquired later than adjacent dependencies. In this transcranial magnetic stimulation (TMS) study, we show that participants successfully discriminated between grammatical and non-grammatical sequences after having implicitly acquired an artificial language with crossed non-adjacent dependencies. Subsequent to transcranial magnetic stimulation of Broca’s region, discrimination was impaired compared to when a language-irrelevant control region (vertex) was stimulated. These results support the view that Broca’s region is engaged in structured sequence processing and extend previous functional neuroimaging results on artificial grammar learning (AGL) in two directions: first, the results establish that Broca’s region is a causal component in the processing of non-adjacent dependencies, and second, they show that implicit processing of non-adjacent dependencies engages Broca’s region. Since patients with lesions in Broca’s region do not always show grammatical processing difficulties, the result that Broca’s region is causally linked to processing of non-adjacent dependencies is a step towards clarification of the exact nature of syntactic deficits caused by lesions or perturbation to Broca’s region. Our findings are consistent with previous results and support a role for Broca’s region in general structured sequence processing, rather than a specific role for the processing of hierarchically organized sentence structure. -
Udden, J., Snijders, T. M., Fisher, S. E., & Hagoort, P. (2017). A common variant of the CNTNAP2 gene is associated with structural variation in the left superior occipital gyrus. Brain and Language, 172, 16-21. doi:10.1016/j.bandl.2016.02.003.
Abstract
The CNTNAP2 gene encodes a cell-adhesion molecule that influences the properties of neural networks and the morphology and density of neurons and glial cells. Previous studies have shown association of CNTNAP2 variants with language-related phenotypes in health and disease. Here, we report associations of a common CNTNAP2 polymorphism (rs7794745) with variation in grey matter in a region in the dorsal visual stream. We tried to replicate an earlier study on 314 subjects by Tan and colleagues (2010), but now in a substantially larger group of more than 1700 subjects. Carriers of the T allele showed reduced grey matter volume in the left superior occipital gyrus, while we did not replicate associations with grey matter volume in other regions identified by Tan et al. (2010). Our work illustrates the importance of independent replication in neuroimaging genetic studies of language-related candidate genes. -
Udden, J., & Schoffelen, J.-M. (2015). Mother of all Unification Studies (MOUS). In A. E. Konopka (Ed.), Research Report 2013 | 2014 (pp. 21-22). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2236748.