Chinmaya Mishra on Facial Cues in Human-Robot Interaction at IIIT Delhi

09 January 2025
Image: A man in a blue polo shirt and jeans presents in a lecture hall, standing near a whiteboard. Behind him, a slide titled "Automate Robot Expressions using LLMs" shows a flow diagram and a robotic face.
Chinmaya Mishra recently delivered an insightful talk at IIIT Delhi titled “The Perception and Generation of Facial Cues in Human-Robot Interaction (HRI)”, shedding light on the critical role of non-verbal behaviors in human-robot communication.

He emphasized that, with rapid advances in artificial intelligence and robotics, social robots are poised for broad social integration. Because these robots are specifically designed for human-like interaction, understanding and replicating essential non-verbal cues, such as facial expressions and gaze, is necessary to enhance their effectiveness, human-likeness, and acceptance.

Chinmaya highlighted two key aspects of HRI:
1. Automatic generation of non-verbal behaviors
2. Their influence on human-robot interaction

He discussed various studies that have sought to model the real-time gaze and affective behavior of social robots during HRI, while also evaluating the influence such behaviors have on the overall interaction. He elaborated on Gaze Control Systems (GCS), architectures developed to automate communicative gaze behaviors such as turn-taking, joint attention, and intimacy regulation. He also explored the use of Large Language Models (LLMs) to automate the affective behaviors of robots. Through these discussions, he addressed the question: Do robots need a face?

For more details, visit LinkedIn.
