Browsing by Author "Aliasghari, Pourya"
Now showing 1 - 2 of 2
Item: Enhancing Social Learning in Humanoid Robots Taught by Non-Expert Human Teachers (University of Waterloo, 2025-09-30)
Aliasghari, Pourya

Tools that assist with daily tasks are valuable. For example, with the aging population in Canada and worldwide, there is a growing demand for ways to help older adults perform daily activities independently. Socially intelligent robots can promote independence by assisting with routine tasks. While advanced robots may be capable of performing various specialized operations, it is not feasible for their designers to program them in advance to carry out multi-step, complex tasks requiring high-level planning and coordination "out of the box" in new environments and for users with diverse preferences. To integrate successfully into domestic environments, robots must learn new task knowledge from human users. Many of our own skills as human beings are acquired through social learning, i.e., learning by observing or interacting with others, throughout our lifetime. Social learning enables robots to acquire skills without explicit programming, allowing users to teach them through natural, intuitive, and interactive methods. This thesis targets three key challenges in robot social learning: enabling non-expert humans to teach robots without external help, enabling robots to learn and perform multi-step tasks, and enabling robots to identify the most suitable teachers for their social learning.

The first phase of my research examines whether participants with no prior experience teaching a robot could become more proficient robot teachers through repeated human-robot teaching interactions. An experiment was conducted with twenty-eight participants who kinesthetically taught a Pepper robot various cleaning tasks across five repeated sessions. Analysis of the data revealed a diversity of teaching styles among the non-expert participants across repeated interactions. Most participants significantly improved both the success rate and the speed of their kinesthetic demonstrations after multiple rounds of teaching the robot.

The second phase introduces a novel, biologically inspired imitation approach that enables robots to understand and perform complex tasks using high-level programs incorporating sequential regularities between sub-goals that a robot can recognize and achieve. To learn a new task, the system processes demonstrations to identify multiple possible arrangements of sub-goals that achieve the overall task goal. For task execution, the robot determines the optimal sequence of actions by evaluating the available sequences against user-defined criteria through mental simulation of the real task. This learning architecture was implemented on an iCub humanoid robot, and its effectiveness was evaluated across multiple scenarios.

In the third phase, I propose an attribute for identifying the most suitable teachers for a robot: human teachers' awareness of, and attention to, the robot's limitations and capabilities. I investigate the impact of this attribute on robot learning outcomes in an experiment with seventy-two participants who taught three physical tasks to an iCub humanoid robot. Teachers' awareness of the robot's visual limitations and learning capabilities was manipulated by offering the robot's visual perspective and by placing participants in the robot's position when labelling actions in demonstrations.
Participants who could see the robot's vision output paid increased attention to ensuring that the task objects in their demonstrations were visible to the robot. This increased attention led to improved learning outcomes for the robot, as indicated by lower perception error rates and higher learning scores. I also propose a metric for robots to estimate the potential of receiving high-quality demonstrations from particular human teachers.

These findings demonstrate the feasibility of non-experts adapting to robot teaching through repeated exposure to human-robot teaching tasks, without formal training or external intervention, and contribute to understanding the factors in human teachers that lead to better learning outcomes for robots. Furthermore, I propose a robot learning approach that accommodates variations in human teaching styles, enabling robots to perform tasks with greater flexibility and efficiency. Together, these contributions advance the development of multifunctional and adaptable robots capable of operating autonomously and safely in human environments to assist individuals in various daily activities.

Item: How Do Different Modes of Verbal Expressiveness of a Student Robot Making Errors Impact Human Teachers' Intention to Use the Robot? (Association for Computing Machinery, 2021-11-09)
Aliasghari, Pourya; Ghafurian, Moojan; Nehaniv, Chrystopher L.; Dautenhahn, Kerstin

When humans make a mistake, they often employ strategies to manage the situation and mitigate the negative effects of the mistake. Robots that operate in the real world will also make errors and might therefore benefit from such recovery strategies. In this work, we studied how different verbal expression strategies used by a trainee humanoid robot when committing an error after learning a task influence participants' intention to use it. We performed a virtual experiment in which the robot's expression modes were: (1) remaining silent; (2) verbal expression that ignored any errors; or (3) verbal expression that addressed any error by apologizing for it, acknowledging it, and justifying it. To simulate teaching, participants remotely demonstrated their preferences to the robot in a series of food preparation tasks; however, at the very end of the teaching session, the robot made an error (in two of the three experimental conditions). Based on data collected from 176 participants, we observed that, compared with the mode in which the robot remained silent, both modes in which the robot used verbal expression significantly increased participants' intention to use the robot in the future if it made an error in the last practice round. When no error occurred at the end of the practice rounds, a silent robot was preferred and increased participants' intention to use it.
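The sub-goal-based imitation approach described in the thesis abstract above, in which the robot enumerates candidate orderings of sub-goals extracted from demonstrations and selects one by scoring each ordering against user-defined criteria via mental simulation, can be pictured very roughly as follows. This is a minimal illustrative sketch and not the thesis's actual implementation: the sub-goal names, the validity constraint, the simulation model, and the cost weights below are all assumptions introduced for illustration.

    # Illustrative sketch only: candidate sub-goal orderings are scored against
    # user-defined criteria, and the best-scoring ordering is selected for execution.
    # Sub-goal names, weights, and simulate_sequence() are placeholders, not the
    # thesis implementation.
    from itertools import permutations

    SUB_GOALS = ["grasp_cloth", "wipe_table", "place_cloth"]  # hypothetical task

    def is_valid(sequence):
        """Keep only orderings consistent with regularities observed in
        demonstrations (here: the cloth must be grasped first)."""
        return sequence[0] == "grasp_cloth"

    def simulate_sequence(sequence):
        """Stand-in for mental simulation: return estimated time and effort."""
        est_time = 5.0 * len(sequence)                     # placeholder model
        est_effort = 0.1 * sum(len(g) for g in sequence)   # placeholder model
        return est_time, est_effort

    def score(sequence, w_time=1.0, w_effort=0.5):
        """Lower is better; the weights encode user-defined preferences."""
        est_time, est_effort = simulate_sequence(sequence)
        return w_time * est_time + w_effort * est_effort

    candidates = [s for s in permutations(SUB_GOALS) if is_valid(s)]
    best = min(candidates, key=score)
    print("Selected sub-goal order:", best)

In the actual architecture, the candidate orderings and the simulation would come from the robot's perception of demonstrations and its internal models rather than from hard-coded placeholders like these.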