-
Çetinçelik, M., Jordan Barros, A., Rowland, C. F., & Snijders, T. M. (2024). Infants’ neural tracking of multimodal speech and links with language development. Poster presented at the IMPRS Conference 2024, Nijmegen, the Netherlands.
-
Jordan-Barros, A., Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2024). The role of visual speech cues on infants’ neural tracking of speech. Poster presented at the 24th International Congress of Infant Studies (ICIS 2024), Glasgow, Scotland.
-
Van der Klis, A., Çetinçelik, M., Menn, K., Snijders, T. M., & Junge, C. (2024). Neural tracking of nursery rhymes: Development and relations with vocabulary outcomes. Poster presented at the 6th Workshop on Infant Language Development (WILD 2024), Lisbon, Portugal.
-
Jordan-Barros, A., Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2023). Do visual speech cues facilitate infants’ neural tracking of speech? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Snijders, T. M., Çetinçelik, M., Jordan-Barros, A., & Rowland, C. F. (2023). The effect of visual mouth cues on neural tracking of speech in 10-month-old infants. Poster presented at the 21st International Multisensory Research Forum (IMRF 2023), Brussels, Belgium.
-
Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2022). The effects of the speaker’s eye gaze on infants’ speech processing and word segmentation. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Jordan, A., Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2022). The role of audio-visual cues on infants’ cortical speech tracking and word recognition. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Jordan, A., Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2022). The role of audio-visual cues on infants’ cortical speech tracking and word segmentation. Poster presented at the 7th International Conference on Infant and Early Child Development (LCICD 2022), Lancaster, UK.
-
Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2021). The effects of speaker’s eye gaze on infants’ speech processing and word segmentation. Poster presented at the 6th Lancaster Conference on Infant and Early Child Development (LCICD 2021), online.
-
Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2021). The effects of speaker’s eye gaze on infants’ speech processing and word segmentation. Poster presented at the Budapest CEU Conference on Cognitive Development (BCCCD21), online.
-
Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2021). The effects of speaker’s eye gaze on infants’ speech processing and word segmentation. Poster presented at the 13th Annual Meeting of the Society for the Neurobiology of Language (SNL 2021), online.
-
Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2020). The effects of eye gaze on infants’ language learning: A systematic review. Poster presented at the Virtual International Congress of Infant Studies (vICIS 2020), Glasgow, UK.
-
Menn, K., Ward, E., Brauckmann, R., Van den Boomen, C., Kemner, C., Buitelaar, J., Hunnius, S., & Snijders, T. M. (2020). Relating neural entrainment to speech to later development of language and autism symptoms in infants with high likelihood of ASD. Poster presented at the Virtual International Congress of Infant Studies (vICIS 2020), Glasgow, UK.
-
Snijders, T. M. (2020). Individual variability in infants’ cortical tracking of speech rhythm relates to their word segmentation performance. Poster presented at the Twelfth Annual (Virtual) Meeting of the Society for the Neurobiology of Language (SNL 2020).
-
Snijders, T. M. (2020). Infants’ cortical tracking of speech rhythm at 7.5 months relates to their word segmentation performance at 9 months. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLap 2020), Potsdam, Germany.
-
Visser, F. M. H. G., Rommers, L., Arana, S., Kösem, A., & Snijders, T. M. (2020). Rhythm-based word segmentation and its relation to speech-brain coherence in Dutch 9-month-olds. Poster presented at Many Paths to Language (MPaL 2020), online.
-
Vissers, F. M. H. G., Rommers, L., Arana, S., Kösem, A., & Snijders, T. M. (2020). Rhythm-based word segmentation and its relation to speech-brain coherence in 9-month-old infants. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLap 2020), Potsdam, Germany.
-
Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2019). Segmenting clauses in song and speech – absence of evidence for easier segmentation of song. Poster presented at the 4th Workshop on Infant Language Development (WILD 2019), Potsdam, Germany.
-
Van den Boomen, C., Fahrenfort, J. J., Snijders, T. M., & Kemner, C. (2019). Slow segmentation of faces in Autism Spectrum Disorder. Poster presented at the 19th Annual Meeting of the Vision Sciences Society (VSS), St. Pete Beach, Florida, USA.
-
Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2017). Infants' sensitivity to rhyme in songs. Poster presented at Many Paths to Language (MPaL), Nijmegen, The Netherlands.
-
Snijders, T. M., Benders, T., Junge, C., Haegens, S., & Fikkert, P. (2017). Babies and beeps – relating infants’ sensitivity to rhythm to their speech segmentation ability. Poster presented at the 3rd Workshop on Infant Language Development (WILD 2017), Bilbao, Spain.
-
Snijders, T. M., Benders, T., Junge, C., Haegens, S., & Fikkert, P. (2017). Relating infants’ sensitivity to rhythm at 7.5 months to their speech segmentation ability at 9 months. Poster presented at Many Paths to Language (MPaL), Nijmegen, The Netherlands.
-
Snijders, T. M., Schmits, I., & Haegens, S. (2017). Babies and beeps - entrainment to rhythm and temporal prediction in 7.5-month-old infants. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
-
Arana, S., Rommers, L., Hagoort, P., Snijders, T. M., & Kösem, A. (2016). The role of entrained oscillations during foreign language listening. Poster presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC), Nijmegen, The Netherlands.
-
Snijders, T. M., Benders, T., & Fikkert, P. (2016). Segmentation of words from song in 10-month-old infants. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Infant-directed songs are rhythmic and have exaggerated intonation. These properties promote word segmentation from speech (Jusczyk et al 1999, Johnson & Jusczyk 2001, Mannel & Friederici 2013). Does that mean that infants are particularly good at segmenting words from songs? We measured EEG while we exposed forty 10-month-old Dutch infants to songs and stories, in each of which a word was repeated across phrases. Segmentation of the repeated word was inferred from the ERP familiarity effect (Kooijman et al 2005, Junge et al 2014), comparing the last two presentations of the repeated word to the first two. Contrary to earlier work investigating speech only (Junge et al 2014), there was no significant ERP familiarity effect within the speech condition in our data, suggesting our infants did not segment the words from speech. However, in the song condition we identified a positive shift in the ERP, 300-900 ms after onset of the repeated word, over left frontal electrodes (p<.05 corrected for multiple comparisons). This suggests that the infants are able to segment words from song. Our failure to identify segmentation from speech may be because our speech material was less child-directed than that of Junge and colleagues (see Floccia et al 2016). Our results suggest that the brain of 10-month-old infants uses the rhythmic and melodic properties of song to detect salient events and to segment words from the continuous auditory input.
-
Hahn, L. E., Benders, T., & Snijders, T. M. (2015). Infants’ sensitivity to rhyme in songs. Poster presented at the 2nd Workshop on Infant Language Development (WILD 2015), Stockholm, Sweden.
-
Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The neural integration of pointing gesture and speech in a visual context: An fMRI study. Poster presented at the 7th Annual Society for the Neurobiology of Language Conference (SNL 2015), Chicago, USA.
Additional information
http://www.neurolang.org/programs/SNL2015_Abstracts.pdf
-
Udden, J., Snijders, T. M., Fisher, S. E., & Hagoort, P. (2015). A common variant of the CNTNAP2 gene is associated with structural variation in the dorsal visual stream and language-related regions of the right hemisphere. Poster presented at the 7th Annual Society for the Neurobiology of Language Conference (SNL 2015), Chicago, USA.
-
Dimitrova, D. V., Snijders, T. M., & Hagoort, P. (2014). Neurobiological attention mechanisms of syntactic and prosodic focusing in spoken language. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
In spoken utterances, important or new information is often linguistically marked, for instance by prosody or syntax. Such highlighting prevents listeners from skipping over relevant information. Linguistic cues like pitch accents lead to more elaborate processing of important information (Wang et al., 2011). In a recent fMRI study, Kristensen et al. (2013) have shown that the neurobiological signature of pitch accents is linked to the domain-general attention network. This network includes the superior and inferior parietal cortex. It is an open question whether non-prosodic markers of focus (i.e., the important/new information) function similarly on the neurobiological level, that is, by recruiting the domain-general attention network. This study addresses this question by testing a syntactic marker of focus. The present fMRI study investigates the processing of it-clefts, which highlight important information syntactically, and compares it to the processing of pitch accents, which highlight information prosodically. We further test whether both linguistic focusing devices recruit domain-general attention mechanisms. In the language task, participants listened to short stories like “In the beginning of February the final exam period was approaching. The student did not read the lecture notes”. In the last sentence of each story, the new information was focused either by a pitch accent as in “He borrowed the BOOK from the library” or by an it-cleft like “It was the book that he borrowed from the library”. Pitch accents were pronounced without exaggerated acoustic emphasis. Two control conditions were included: (i) sentences with fronted focus like “The book he borrowed from the library”, to account for word order differences between sentences with clefts and accents, and (ii) sentences without prosodic emphasis like “He borrowed the book from the library”. In the attentional localizer task (adopted from Kristensen et al., 2013), participants listened to tones in a dichotic listening paradigm. A cue tone was presented in one ear and participants responded to a target tone presented either in the same or the other ear. In line with Kristensen et al. (2013), we found that in the localizer task cue tones activated the right inferior parietal cortex and the precuneus, and we found additional activations in the right superior temporal gyrus. In the language task, sentences with it-clefts elicited larger activations in the left and right superior temporal gyrus compared to control sentences with fronted focus. For the contrast between sentences with pitch accent vs. without pitch accent, we observed activation in the inferior parietal lobe; this activation did not, however, survive correction for multiple comparisons. In sum, our findings show that syntactic focusing constructions like it-clefts recruit the superior temporal gyri, similarly to cue tones in the localizer task. Highlighting focus by pitch accent activated the parietal cortex in areas overlapping with those reported by Kristensen et al. and with those we found for cue tones in the localizer task. Our study provides novel evidence that prosodic and syntactic focusing devices likely have distinct neurobiological signatures in spoken language comprehension.
Additional information
http://www.neurolang.org/programs/SNL2014_Program_with_Abstracts.pdf
-
Fonteijn, H. M., Acheson, D. J., Petersson, K. M., Segaert, K., Snijders, T. M., Udden, J., Willems, R. M., & Hagoort, P. (2014). Overlap and segregation in activation for syntax and semantics: a meta-analysis of 13 fMRI studies. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.