Ezgi Mamus Presented at AMLaP Conference

07 September 2024
 A speaker presents findings on gesture differences between blind and sighted speakers, with a summary slide projected on screen.
On Saturday, 7 September, Ezgi Mamus presented a collaborative paper titled “Differences in the Gesture Kinematics of Blind and Sighted Speakers Reveal Insights into the Functions of Gestures” at the AMLaP (Architectures and Mechanisms for Language Processing) conference. The paper, co-authored with Mounika Kanakanti and Aslı Özyürek, examines the distinctive gesture patterns of blind, blindfolded, and sighted speakers, offering insight into how visual experience shapes gesture use.

People spontaneously gesture while speaking and even while thinking; thus, the role of gestures extends beyond communication. However, it is not straightforward to determine whether a gesture is produced for its self-oriented cognitive function (e.g., helping speakers manipulate their spatial-motoric representations for speaking) or for its communicative function, nor whether the kinematic features of gestures are shaped by their function in speech. Individuals who are blind from birth provide a unique case for testing this, as the communicative function of gesture might be less relevant for them than for sighted individuals. Indeed, earlier studies showed that blind speakers produce fewer spontaneous gestures than sighted speakers when describing events. The present work aims to go beyond quantitative measures and explore whether gesture kinematics are influenced by visual experience, in order to gain insight into the functions of gesture in both blind and sighted individuals. Our findings show that blind people’s gestures reflect a more precise mapping of spatial sounds onto space, while sighted people’s gestures are shaped by the communicative pressures of having seen, and/or knowing, how an addressee perceives their gestures.
