The complex and largely unstructured nature of real-world situations makes it challenging for conventional closed-world robot learning solutions to adapt to such interaction dynamics. These challenges become particularly pronounced in long-term interactions, where robots need to go beyond their past learning to continuously evolve with changing environment settings and personalize towards individual user behaviours. In contrast, open-world learning embraces the complexity and unpredictability of the real world, enabling robots to be "lifelong learners" that continuously acquire new knowledge and navigate novel challenges, making them more context-aware and able to engage users more intuitively. Adopting the theme of "open-world learning", the fourth edition of the "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" workshop seeks to bring together interdisciplinary perspectives on real-world applications in human-robot interaction (HRI), including education, rehabilitation, elderly care, service, and companionship. The goal of the workshop is to foster collaboration and understanding across diverse scientific communities through invited keynote presentations and in-depth discussions facilitated by contributed talks, a break-out session, and a debate.
Causal-HRI: Causal Learning for Human-Robot Interaction
Jiaee Cheong, Nikhil Churamani, Luke Guerdan, Tabitha Edith Lee, and 2 more authors
In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024
Real-world Human-Robot Interaction (HRI) requires robots to adeptly perceive and understand the dynamic human-centred environments in which they operate. Recent decades have seen remarkable advancements that have endowed robots with exceptional perception capabilities. The first workshop on "Causal-HRI: Causal Learning for Human-Robot Interaction" aims to bring research perspectives from Causal Discovery, Causal Inference and Causal Learning in general to real-world HRI applications. The objective of this workshop is to explore strategies that not only equip robots with capabilities to discover cause-and-effect relationships from observations, allowing them to generalise to unseen interaction settings, but also enable users to understand robot behaviours, moving beyond the ’black-box’ models used by these robots. This workshop aims to facilitate an exchange of views through invited keynote presentations, contributed talks, group discussions and poster sessions, encouraging collaborations across diverse scientific communities. The theme of HRI 2024, "HRI in the real world", will inform the overarching theme of this workshop, encouraging discussions on HRI theories, methods, designs and studies focused on leveraging Causal Learning for enhancing real-world HRI.
Federated Learning of Socially Appropriate Agent Behaviours in Simulated Home Environments
Saksham Checker, Nikhil Churamani, and Hatice Gunes
In Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI), 19th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2024
Given the challenges associated with the real-world deployment of Machine Learning (ML) models, especially towards efficiently integrating novel information on the go, both Continual Learning (CL) and Causality have been proposed and investigated individually as potent solutions. Despite their complementary nature, the bridge between them remains largely unexplored. In this work, we focus on causality to improve the learning and knowledge preservation capabilities of CL models. In particular, positing Causal Replay for knowledge rehearsal, we discuss how CL-based models can benefit from causal interventions that improve their ability to replay past knowledge in order to mitigate forgetting.
Affective Computing for Human-Robot Interaction Research: Four Critical Lessons for the Hitchhiker
Hatice Gunes, and Nikhil Churamani
In The 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023
In the last few years, we have witnessed a renewed and fast-growing interest in continual learning with deep neural networks, with the shared objective of making current AI systems more adaptive, efficient and autonomous. However, despite the significant and undoubted progress of the field in addressing the issue of catastrophic forgetting, benchmarking different continual learning approaches is a difficult task by itself. In fact, given the proliferation of different settings, training and evaluation protocols, metrics and nomenclature, it is often tricky to properly characterize a continual learning algorithm, relate it to other solutions and gauge its real-world applicability. The first Continual Learning in Computer Vision challenge, held at CVPR in 2020, was one of the first opportunities to evaluate different continual learning algorithms on common hardware, with a large set of shared evaluation metrics and three different settings based on the realistic CORe50 video benchmark. In this paper, we report the main results of the competition, in which more than 79 teams registered and 11 reached the finals. We also summarize the winning approaches, current challenges and future research directions.
Affect-Driven Learning of Robot Behaviour for Collaborative Human-Robot Interactions
Nikhil Churamani, Pablo Barros, Hatice Gunes, and Stefan Wermter
Collaborative interactions require social robots to share the users’ perspective on the interactions and adapt to the dynamics of their affective behaviour. Yet, current approaches for affective behaviour generation in robots focus on instantaneous perception to generate a one-to-one mapping between observed human expressions and static robot actions. In this paper, we propose a novel framework for affect-driven behaviour generation in social robots. The framework consists of (i) a hybrid neural model for evaluating facial expressions and speech of the users, forming intrinsic affective representations in the robot, (ii) an Affective Core that employs self-organising neural models to embed behavioural traits like patience and emotional actuation that modulate the robot’s affective appraisal, and (iii) a Reinforcement Learning model that uses the robot’s appraisal to learn interaction behaviour. We investigate the effect of modelling different affective core dispositions on the affective appraisal and use this appraisal as the motivation to generate robot behaviours. For evaluation, we conduct a user study (n = 31) in which the NICO robot acts as a proposer in the Ultimatum Game. The effect of the robot’s affective core on its negotiation strategy is perceived by participants, who rank a patient robot with high emotional actuation higher on persistence, while rating an impatient robot with low emotional actuation higher on generosity and altruistic behaviour.
Learning Socially Appropriate Robo-Waiter Behaviours through Real-Time User Feedback
Emily McQuillin, Nikhil Churamani, and Hatice Gunes
In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, 2022
Current Humanoid Service Robot (HSR) behaviours mainly rely on static models that cannot adapt dynamically to meet individual customer attitudes and preferences. In this work, we focus on empowering HSRs with adaptive feedback mechanisms driven by either implicit reward, by estimating facial affect, or explicit reward, by incorporating verbal responses of the human ’customer’. To achieve this, we first create a custom dataset, annotated using crowd-sourced labels, to learn appropriate approach (positioning and movement) behaviours for a robo-waiter. This dataset is used to pre-train a Reinforcement Learning (RL) agent to learn behaviours deemed socially appropriate for the robo-waiter. This model is later extended to include separate implicit and explicit reward mechanisms to allow for interactive learning and adaptation from user social feedback. We present a within-subjects Human-Robot Interaction (HRI) study with 21 participants, in which the robo-waiter interacted with human customers under the above-mentioned model variations. Our results show that both explicit and implicit adaptation mechanisms enabled the adaptive robo-waiter to be rated as more enjoyable and sociable, and its positioning relative to the participants as more appropriate, compared to using the pre-trained model or a randomised control implementation.
Towards Fair Affective Robotics: Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition
Ozgur Kara, Nikhil Churamani, and Hatice Gunes
In Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI), 16th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2021
iCub: Learning Emotion Expressions using Human Reward
Nikhil Churamani, Francisco Cruz, Sascha Griffiths, and Pablo Barros
In Workshop on Bio-inspired Social Robot Learning in Home Scenarios, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Aug 2016