Chinmaya Mishra on Facial Cues in Human-Robot Interaction at IIIT Delhi
He emphasized that, with rapid advances in artificial intelligence and robotics, social robots are poised to become deeply integrated into society. Because these robots are designed to engage in human-like interaction, understanding and replicating essential non-verbal cues, such as facial expressions and gaze, is necessary to enhance their effectiveness, human-likeness, and acceptance.
Chinmaya highlighted two key aspects of Human-Robot Interaction (HRI): (1) the automatic generation of non-verbal behaviors, and (2) the influence of such behaviors on the interaction itself.
He discussed various studies that have tried to model the real-time gaze and affective behavior of social robots during HRI, while also evaluating the influence such behaviors have on the overall interaction. He elaborated on Gaze Control Systems (GCSs): architectures developed to automate communicative gaze behaviors such as turn-taking, joint attention, and intimacy regulation. Additionally, he explored the use of Large Language Models (LLMs) to automate the affective behaviors of robots. Through these discussions, he addressed the question: Do robots need a face?
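To make the idea of a gaze control system concrete, below is a minimal rule-based sketch in Python of the three communicative functions mentioned above. It is purely illustrative: the class names, thresholds, and heuristics are assumptions for this example and do not reflect the actual architectures Mishra presented.

```python
import random
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class GazeTarget(Enum):
    PARTNER = auto()  # look at the interlocutor's face
    AVERT = auto()    # look away (intimacy regulation)
    OBJECT = auto()   # look at a referenced object (joint attention)


@dataclass
class InteractionState:
    robot_speaking: bool                    # is the robot holding the turn?
    partner_speaking: bool                  # is the human speaking?
    referenced_object: Optional[str] = None  # object mentioned in dialogue
    mutual_gaze_duration: float = 0.0        # seconds of sustained eye contact


def select_gaze(state: InteractionState,
                max_mutual_gaze: float = 3.0,
                avert_prob_while_speaking: float = 0.3) -> GazeTarget:
    """Toy gaze selection covering turn-taking, joint attention, and
    intimacy regulation. All thresholds are illustrative assumptions."""
    # Joint attention: if the dialogue references an object, look at it.
    if state.referenced_object is not None:
        return GazeTarget.OBJECT
    # Intimacy regulation: break overly long mutual gaze.
    if state.mutual_gaze_duration > max_mutual_gaze:
        return GazeTarget.AVERT
    # Turn-taking: listeners gaze at the speaker; a speaker averts
    # intermittently and returns gaze when yielding the turn.
    if state.partner_speaking:
        return GazeTarget.PARTNER
    if state.robot_speaking and random.random() < avert_prob_while_speaking:
        return GazeTarget.AVERT
    return GazeTarget.PARTNER


if __name__ == "__main__":
    state = InteractionState(robot_speaking=True, partner_speaking=False)
    print(select_gaze(state))  # e.g. GazeTarget.PARTNER or GazeTarget.AVERT
```

A real GCS would drive these decisions from perception (speech activity, face and object tracking) and send the selected target to the robot's head and eye controllers; the sketch only captures the decision layer.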
For more details, visit LinkedIn.