Skill Transfer from Multiple Human Demonstrators to a Robot Manipulator Using Neural Dynamic Motion Primitives

dc.contributor.author: Hanks, Geoffrey
dc.date.accessioned: 2024-08-15T13:46:20Z
dc.date.available: 2024-08-15T13:46:20Z
dc.date.issued: 2024-08-15
dc.date.submitted: 2024-07-30
dc.description.abstract: Programming by demonstration, also known as imitation learning, has shown potential in reducing the technical barriers to teaching complex skills to robot manipulators. It involves obtaining one or more demonstrations of how to complete a task, often from a human, which are then transferred to a robotic system. Dynamic Motion Primitives (DMPs) are an efficient method of learning trajectories from individual demonstrations using second-order dynamic equations. Research has been done to overcome some of the limitations of DMPs by generalizing over multiple demonstrations, sequencing multiple primitives to complete goals involving multiple sub-tasks, and adding via-points for increased control over complex motions. However, accomplishing more complex tasks using DMP sequencing and via-points requires task-specific knowledge so that the demonstrations can be segmented or annotated, and the breakdown of some tasks may be unintuitive. This can further increase the time and effort required to collect demonstrations beyond the already demanding process of collecting physical demonstrations, decreasing the feasibility of learning from demonstration in certain situations. This thesis applies state-of-the-art Cartesian-space DMPs that utilize physically collected and augmented data to create a framework that can reduce the task-specific knowledge and human effort required to teach robots multi-step tasks. DMPs that integrate neural networks are used not only to generalize over multiple demonstrations from different demonstrators, but also to learn from complete demonstrations without requiring segmentation or annotation. For comparison, sequenced DMPs, which require their demonstrations to be segmented into sub-tasks prior to learning, are also implemented.
Both techniques utilize physically collected demonstrations which are augmented to reduce the time and effort required to collect demonstrations, while ensuring sufficient samples for proper learning. The framework was tested on a pouring task that could be split into sub-tasks, both in simulation and on a 7-degree-of-freedom Franka Emika Panda robot manipulator. The task involved reaching for and grasping a container of water, pouring the water into another container placed in the workspace, and returning the pouring container to its original location. Both sets of models were tested on their ability to recall trajectories shown in training and to generalize to new inputs. They were then implemented on the physical robotic system, and both methods were successful in completing the task. The trade-offs between the models trained on full and segmented demonstrations are discussed. While the sequenced DMPs were found to have reduced average error and greater flexibility, they required extra work and task knowledge to generate the demonstrations, and were reliant on specific sub-tasks being defined. It was determined that the models trained from full demonstrations using this framework could be an alternative to sequenced primitives for more complex tasks. Despite a higher error between the demonstrations and predicted trajectories when compared to a sequence of DMPs, the full models are able to recall trajectories and generalize to new inputs well enough to complete the task on a physical robot. As such, they have the potential to reduce the effort and task knowledge required during demonstration preparation, and to expand the applicability of imitation learning to a wider range of tasks.
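For reference, the "second-order dynamic equations" the abstract mentions are, in the widely used discrete DMP formulation, a spring-damper system driven by a learned forcing term. The sketch below gives the standard form as background, not the thesis's exact equations; in neural DMPs the forcing term is produced by a neural network rather than fixed basis functions.

```latex
% Canonical system: phase variable x decays from 1 toward 0,
% replacing explicit time so trajectories can be rescaled.
\tau \dot{x} = -\alpha_x x

% Transformation system: a critically damped spring-damper pulled
% toward the goal g, shaped by the learned forcing term f(x).
\tau \dot{z} = \alpha_z \bigl( \beta_z (g - y) - z \bigr) + f(x)
\tau \dot{y} = z

% Standard forcing term: a normalized weighted sum of Gaussian basis
% functions \psi_i; in neural DMPs, a network predicts f (or its weights).
f(x) = \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\, x\, (g - y_0)
```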
dc.identifier.uri: https://hdl.handle.net/10012/20803
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: robotics
dc.subject: machine learning
dc.subject: learning from demonstration
dc.subject: imitation learning
dc.subject: neural dynamic motion primitives
dc.subject: robot manipulator
dc.title: Skill Transfer from Multiple Human Demonstrators to a Robot Manipulator Using Neural Dynamic Motion Primitives
dc.type: Master Thesis
uws-etd.degree: Master of Applied Science
uws-etd.degree.department: Mechanical and Mechatronics Engineering
uws-etd.degree.discipline: Mechanical Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo.terms: 1 year
uws.contributor.advisor: Hu, Yue
uws.contributor.affiliation1: Faculty of Engineering
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text

Files

Original bundle

Name: Hanks_Geoffrey.pdf
Size: 17.21 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission