Segmentation of words from song in 10-month-old infants
Infant-directed songs are rhythmic and have exaggerated intonation. These properties promote word segmentation from speech (Jusczyk et al. 1999, Johnson & Jusczyk 2001, Männel & Friederici 2013). Does that mean that infants are particularly good at segmenting words from songs? We measured EEG while we exposed forty 10-month-old Dutch infants to songs and stories, in each of which a word was repeated across phrases. Segmentation of the repeated word was inferred from the ERP familiarity effect (Kooijman et al. 2005, Junge et al. 2014), comparing the last two presentations of the repeated word to the first two. Contrary to earlier work investigating speech only (Junge et al. 2014), we found no significant ERP familiarity effect in the speech condition, suggesting that our infants did not segment the words from speech. In the song condition, however, we identified a positive shift in the ERP, 300-900 ms after onset of the repeated word, over left frontal electrodes (p < .05, corrected for multiple comparisons). This suggests that the infants were able to segment words from song. Our failure to identify segmentation from speech might be due to our speech materials being less child-directed than those used by Junge and colleagues (see Floccia et al. 2016). Our results suggest that the brains of 10-month-old infants use the rhythmic and melodic properties of song to detect salient events and to segment words from the continuous auditory input.
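The abstract does not name the analysis software, so the following is only a minimal sketch, assuming MNE-Python and simulated data, of the kind of paired, cluster-corrected ERP comparison described (last two vs. first two presentations of the repeated word); variable names such as erp_first_two and erp_last_two are hypothetical.

# Illustrative sketch, not the authors' pipeline: a paired comparison of ERP
# amplitude between early and late presentations of the repeated word, with
# cluster-based correction for multiple comparisons over time.
import numpy as np
from mne.stats import permutation_cluster_1samp_test

n_infants, n_times = 40, 600          # e.g. 600 samples spanning -200..1000 ms
rng = np.random.default_rng(0)

# Hypothetical per-infant ERPs at a left frontal channel, averaged over the
# first two vs. last two presentations of the repeated word (infants x time).
erp_first_two = rng.normal(size=(n_infants, n_times))
erp_last_two = rng.normal(size=(n_infants, n_times))

# Familiarity effect = late minus early; a one-sample cluster permutation test
# against zero implements the paired comparison (cf. the 300-900 ms window
# reported in the abstract).
diff = erp_last_two - erp_first_two
t_obs, clusters, cluster_p, _ = permutation_cluster_1samp_test(
    diff, n_permutations=1000, tail=0, seed=0)

for cluster, p in zip(clusters, cluster_p):
    if p < 0.05:
        print("Significant cluster over samples", cluster, "p =", p)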
Publication type: Poster
Publication date: 2016