Skill Transfer from Multiple Human Demonstrators to a Robot Manipulator Using Neural Dynamic Motion Primitives
dc.contributor.author | Hanks, Geoffrey | |
dc.date.accessioned | 2024-08-15T13:46:20Z | |
dc.date.available | 2024-08-15T13:46:20Z | |
dc.date.issued | 2024-08-15 | |
dc.date.submitted | 2024-07-30 | |
dc.description.abstract | Programming by demonstration, also known as imitation learning, has shown potential in reducing the technical barriers to teaching complex skills to robot manipulators. It involves obtaining one or more demonstrations of how to complete a task, often from a human, which are then transferred to a robotic system. Dynamic Motion Primitives (DMPs) are an efficient method of learning trajectories from individual demonstrations using second-order dynamic equations. Research has been done to overcome some of the limitations of DMPs by generalizing over multiple demonstrations, sequencing multiple primitives to complete goals involving multiple sub-tasks, and adding via-points for increased control over complex motions. However, accomplishing more complex tasks using DMP sequencing and via-points requires task-specific knowledge so that the demonstrations can be segmented or annotated, and the breakdown of some tasks may be unintuitive. This can further increase the time and effort required to collect demonstrations beyond the already demanding process of collecting physical demonstrations, decreasing the feasibility of learning from demonstration in certain situations. This thesis applies state-of-the-art Cartesian-space DMPs that utilize physically collected and augmented data to create a framework that can reduce the task-specific knowledge and human effort required to teach robots multi-step tasks. DMPs that integrate neural networks are used not only to generalize over multiple demonstrations from different demonstrators, but also to learn from complete demonstrations without requiring segmentation or annotation. For comparison, sequenced DMPs, which require their demonstrations to be segmented into sub-tasks prior to learning, are also implemented. Both techniques utilize physically collected demonstrations that are augmented to reduce the time and effort required to collect demonstrations, while ensuring sufficient samples for proper learning. The framework was evaluated on a pouring task that could be split into sub-tasks, both in simulation and on a 7-degree-of-freedom Franka Emika Panda robot manipulator. The task involved reaching for and grasping a container with water, pouring the water into another container placed in the workspace, and returning the pouring container to its original location. Both sets of models were tested on their ability to recall trajectories shown in training and to generalize to new inputs. They were then implemented on the physical robotic system, and both methods were successful in completing the task. The trade-offs between the models trained on full and segmented demonstrations are discussed. While the sequenced DMPs were found to have reduced average error and greater flexibility, they required extra work and task knowledge to generate the demonstrations, and were reliant on specific sub-tasks being defined. It was determined that the models trained from full demonstrations using this framework could be an alternative to sequenced primitives for more complex tasks. Despite a higher error between the demonstrations and predicted trajectories when compared to a sequence of DMPs, the full models are able to recall trajectories and generalize to new inputs well enough to complete the task on a physical robot. As such, they have the potential to reduce effort and task knowledge during demonstration preparation, and to expand the applicability of imitation learning to a wider range of tasks. | |
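The abstract refers to DMPs as second-order dynamic equations without spelling them out. For readers unfamiliar with the technique, below is a minimal sketch of the standard one-dimensional discrete DMP from the literature (a critically damped spring-damper system driven toward a goal by a learned forcing term, paced by a canonical system). This is an illustrative example under common textbook gain choices, not the thesis's own implementation; the function name, gains, and zero forcing term are assumptions for demonstration purposes.

    import numpy as np

    # Minimal 1-D discrete DMP step (standard formulation, not the thesis's code).
    #   tau * dz/dt = alpha_z * (beta_z * (g - y) - z) + f(x)   (transformation system)
    #   tau * dy/dt = z
    #   tau * dx/dt = -alpha_x * x                              (canonical system)
    def step_dmp(y, z, x, g, f_x, tau=1.0, alpha_z=25.0,
                 beta_z=25.0 / 4.0, alpha_x=1.0, dt=0.001):
        dz = (alpha_z * (beta_z * (g - y) - z) + f_x) / tau  # spring-damper + forcing
        dx = -alpha_x * x / tau                              # phase decays 1 -> 0
        z = z + dz * dt
        y = y + (z / tau) * dt
        x = x + dx * dt
        return y, z, x

    # Example rollout toward goal g = 1.0 with the forcing term set to zero:
    # the system reduces to a critically damped spring and converges to the goal.
    y, z, x, g = 0.0, 0.0, 1.0, 1.0
    for _ in range(5000):
        y, z, x = step_dmp(y, z, x, g, f_x=0.0)
    print(round(y, 3))  # ~1.0

In learning from demonstration, the forcing term f(x) is what gets fit to the demonstrated trajectory (classically with weighted basis functions; in the neural DMPs this thesis uses, with a neural network), which is how a single set of dynamic equations can reproduce arbitrary demonstrated motions while still converging to the goal.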
dc.identifier.uri | https://hdl.handle.net/10012/20803 | |
dc.language.iso | en | |
dc.pending | false | |
dc.publisher | University of Waterloo | en |
dc.subject | robotics | |
dc.subject | machine learning | |
dc.subject | learning from demonstration | |
dc.subject | imitation learning | |
dc.subject | neural dynamic motion primitives | |
dc.subject | robot manipulator | |
dc.title | Skill Transfer from Multiple Human Demonstrators to a Robot Manipulator Using Neural Dynamic Motion Primitives | |
dc.type | Master Thesis | |
uws-etd.degree | Master of Applied Science | |
uws-etd.degree.department | Mechanical and Mechatronics Engineering | |
uws-etd.degree.discipline | Mechanical Engineering | |
uws-etd.degree.grantor | University of Waterloo | en |
uws-etd.embargo.terms | 1 year | |
uws.contributor.advisor | Hu, Yue | |
uws.contributor.affiliation1 | Faculty of Engineering | |
uws.peerReviewStatus | Unreviewed | en |
uws.published.city | Waterloo | en |
uws.published.country | Canada | en |
uws.published.province | Ontario | en |
uws.scholarLevel | Graduate | en |
uws.typeOfResource | Text | en |