MLD presented at the 14th Iconicity in Language & Literature conference

30 May 2024
Poster of the conference in Catania
This year, members of the MLD department gave three podium presentations on various topics in iconicity in language.

Anita Slonimska, Alessio di Renzo, Mounika Kanakanti, Emanuela Camisi, Olga Capirci, & Asli Ozyurek 

Iconicity as a Communicative Strategy in Sign Languages: Quantitative vs. Qualitative Modulations of Constructed action for children and adults 

Two women sit at a table in front of a projection screen displaying a conclusion slide about iconicity in communication. The woman on the right, A. Slonimska, is speaking. There are snacks and water bottles on the table.

Research shows that speakers not only adapt their speech but also increase the rate of iconic gestures to enhance message clarity for children. Although sign languages allow signers to exploit iconicity to a far greater extent than is possible in spoken languages, little is known about whether signers modulate highly iconic strategies for children. In the present study, the authors used automatic video pose-estimation software to analyze descriptions in Italian Sign Language and to investigate not only quantitative but also qualitative modulations (size and duration) of iconicity in communication with 12-year-old children.

Alessia Giulimondi

Simultaneity in iconic two-handed gestures: a communicative strategy for children

A conference room presentation shows a slide titled "Iconicity as a Fundamental Property of Language," with an image of hands forming a heart shape. Several attendees are visible in the foreground.

In this study, we provide first insights into how adults modulate the number and type of information represented in their gestures when addressing children. First, our results show that speakers use two-handed gestures to represent more units of information (semantic features) for children than for adults. Furthermore, the results suggest that while two-handed gestures are used to represent spatial relationships for both adult and child addressees, the simultaneous addition of an imagistic component serves as a strategy to increase informativeness when designing communication for children. This research expands our understanding of simultaneity in two-handed co-speech gestures as a communicative strategy, supporting the hypothesis that iconicity benefits from simultaneity to increase communicative efficiency.

Ezgi Mamus

Gestures reveal how visual experience shapes concepts in blind and sighted individuals

In the picture, you can see Ezgi Mamus standing to the right of a large digital screen displaying a presentation with the title "How do people describe the world without sight?". Ezgi, wearing a red sweater and black jeans, is gesturing with her hands as she speaks.

To what extent experience influences conceptual representations is an ongoing debate. This pre-registered study tested whether visual experience affects how single concepts are mapped onto gestures rather than only onto words. Recent gesture theories claim that gestures arise from sensorimotor simulations, reflecting gesturers' experience with objects. If visuospatial and motor cues drive gesture, then visual experience may cause differences in gesture strategies. Thirty congenitally blind and 30 sighted Turkish speakers produced silent gestures for concepts from three semantic categories that rely to different extents on motor (manipulable objects) or visual (non-manipulable objects and animals) experience. We had 60 concepts in total: 20 per semantic category. We coded the strategies (acting, representing, drawing, and personification) for each gesture following Ortega and Özyürek (2020). As an ancillary measure of conceptual knowledge, participants listed features for the same concepts. As expected, blind individuals were less likely than sighted individuals to produce a gesture for non-manipulable objects and animals, but not for manipulable objects; see Table 1 for the descriptive statistics. Compared to sighted individuals, their gestures relied less on strategies depicting visuospatial features, i.e., tracing an object (drawing) and embodying a non-human entity (personification); see Figure 1. In the language-based task, however, the two groups differed only in the number of perceptual features listed for animals, and not for the other categories. Our results suggest that gesture may be driven directly by mappings of visuospatial and motoric representations onto the body that are not fully accessible through listing features of concepts. Thus, gesture can provide an additional window into conceptual representations, one that is not always evident in words.

Conference Program