Human-Inspired Robot Task Teaching and Learning

Date

2009-10-28

Authors

Wu, Xianghai

Publisher

University of Waterloo

Abstract

Current methods of robot task teaching and learning have several limitations: highly trained personnel are usually required to teach robots specific tasks; service-robot systems can rarely learn different types of tasks within the same system; and the teacher's expertise in the task is not well exploited. This research develops a human-inspired robot-task teaching and learning method that allows general users to teach different object-manipulation tasks to a service robot, which can then adapt the learned tasks to new task setups. The method is designed to be interactive and intuitive to the user. In a closed loop with the robot, the user can teach the task, track the robot's learning state, direct the robot's attention to task-related key state changes, and give timely feedback while the robot practices the task; in turn, the robot reveals its learning progress and refines its knowledge based on the user's feedback.

The human-inspired method consists of six teaching and learning stages:
1) checking, and teaching where needed, the robot's required background knowledge;
2) introducing the overall task to be taught: its hierarchical structure and the objects and robot hand actions involved;
3) teaching the task step by step and directing the robot to perceive important state changes;
4) demonstrating the task as a whole while giving vocal subtask-segmentation cues at subtask transitions;
5) robot learning of the taught task, in which a flexible vote-based algorithm segments the demonstrated task trajectories (see the sketch below), a probabilistic optimization process assigns the resulting trajectory episodes (segments) to the introduced subtasks, and the taught trajectories are generalized in different reference frames; and
6) robot practice of the learned task and refinement of its task knowledge according to the teacher's timely feedback, where adaptation of the learned task to new setups is achieved by blending the task trajectories generated from the pertinent frames (see the sketch after the results).

An agent-based architecture was designed and developed to implement this teaching and learning method. Its interactive human-robot teaching interface subsystem is composed of: a) a three-camera stereo vision system that tracks the user's hand motion; b) a stereo-camera vision system mounted on the robot end-effector that allows the robot to explore its workspace and identify objects of interest; and c) a speech recognition and text-to-speech system used for the main human-robot interaction.
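To make the learning machinery in stage 5 concrete, the following is a minimal sketch of vote-based trajectory segmentation, assuming three illustrative cues (hand-speed minima, gripper-aperture changes, and the teacher's vocal cues); the cue set, weights, and thresholds are placeholders for illustration, not the algorithm developed in the thesis.

import numpy as np

def vote_segment(positions, apertures, vocal_cue_frames,
                 speed_weight=1.0, aperture_weight=1.0, vocal_weight=2.0,
                 vote_threshold=2.0, min_gap=10):
    """Accumulate votes for candidate segment boundaries, then threshold them.

    positions:        (T, 3) array of end-effector positions over T frames
    apertures:        (T,)   array of gripper opening over T frames
    vocal_cue_frames: frame indices at which the teacher gave a vocal subtask cue
    """
    positions = np.asarray(positions, dtype=float)
    apertures = np.asarray(apertures, dtype=float)
    T = len(positions)
    votes = np.zeros(T)

    # Cue 1: near-zero hand speed suggests a pause between subtasks.
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    speed = np.append(speed, speed[-1])  # pad to length T
    votes[speed < np.percentile(speed, 10)] += speed_weight

    # Cue 2: a large change in gripper aperture suggests a grasp or release.
    aperture_change = np.abs(np.diff(apertures, prepend=apertures[0]))
    votes[aperture_change > np.percentile(aperture_change, 90)] += aperture_weight

    # Cue 3: the teacher's vocal segmentation cues vote directly (with some slack).
    for f in vocal_cue_frames:
        votes[max(0, f - 2):min(T, f + 3)] += vocal_weight

    # Keep the highest-voted frames above the threshold, at least min_gap frames apart.
    boundaries = []
    for f in np.argsort(-votes):
        if votes[f] < vote_threshold:
            break
        if all(abs(f - b) >= min_gap for b in boundaries):
            boundaries.append(int(f))
    return sorted(boundaries)

A real implementation would tune the cues and weights per task and fuse the vocal cues with the motion cues more carefully; this sketch only shows how independent cues can cast votes that are thresholded into segment boundaries.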
A user study involving ten human subjects and two tasks evaluated the system in terms of the time the subjects spent on each teaching stage, efficiency measures of the robot's understanding of the users' vocal requests, responses, and feedback, and the subjects' subjective assessments. A further set of experiments analyzed the robot's ability to adapt previously learned tasks to new setups, using measures such as object, target, and robot starting-point poses; alignment of objects on targets; and the actual robot grasp and release poses relative to the related objects and targets.

The results indicate that the system enabled the subjects to teach the tasks to the robot naturally and effectively and to give timely feedback on the robot's practice performance. The robot learned the tasks as expected, properly refined its task knowledge based on the teacher's feedback, and successfully applied the refined knowledge in subsequent task practices. It was also able to adapt its learned tasks to setups that differed considerably from those in the demonstration: the alignments of objects on the targets were close to those taught, and the executed grasp and release poses relative to the objects and targets were almost identical to the taught poses. The robot's task-learning ability was limited by the vision-based human-robot teleoperation interface used in hand-to-hand teaching and by the robot's capacity to sense its workspace. Future work will investigate learning a wider variety of tasks and the use of more built-in robot primitive skills.
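As a rough illustration of the adaptation step in stage 6, the sketch below blends trajectories regenerated from two pertinent reference frames (an object frame and a target frame) when the task setup changes. The choice of frames, the position-only treatment, and the sigmoid weighting are assumptions made for illustration, not the generalization procedure developed in the thesis.

import numpy as np

def adapt_trajectory(demo_traj, demo_object_pos, demo_target_pos,
                     new_object_pos, new_target_pos, steepness=10.0):
    """Blend object-frame and target-frame reconstructions of a demonstrated path.

    demo_traj: (T, 3) demonstrated end-effector positions
    *_pos:     (3,)   object / target positions in the demo and in the new setup
    """
    demo_traj = np.asarray(demo_traj, dtype=float)
    T = len(demo_traj)

    # Express the demonstration relative to each frame, then re-anchor it at the
    # corresponding pose in the new setup (positions only, no orientations).
    from_object = (demo_traj - demo_object_pos) + new_object_pos
    from_target = (demo_traj - demo_target_pos) + new_target_pos

    # Sigmoid weights: trust the object frame early in the motion (around the
    # grasp) and the target frame late in the motion (around the release).
    phase = np.linspace(0.0, 1.0, T)
    w_target = 1.0 / (1.0 + np.exp(-steepness * (phase - 0.5)))
    w_object = 1.0 - w_target

    return w_object[:, None] * from_object + w_target[:, None] * from_target

For a pick-and-place demonstration this produces a path that starts near the new object pose and ends at the new target pose while preserving the demonstrated shape, which is the intuition behind blending trajectories generated from the pertinent frames.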

Keywords

robot task learning from human teaching, robot programming by demonstration, intuitive task teaching, intuitive human-robot interaction, trajectory segmentation, grounding abstract task knowledge in robot sensor data, task expertise exploitation, timely feedback
