
dc.contributor.author: Chen, Yinghan
dc.date.accessioned: 2024-02-26 14:35:57 (GMT)
dc.date.available: 2024-02-26 14:35:57 (GMT)
dc.date.issued: 2024-02-26
dc.date.submitted: 2024-02-19
dc.identifier.uri: http://hdl.handle.net/10012/20369
dc.description.abstract: The goal of learning from demonstration, or imitation learning, is to teach a model to generalize to unseen tasks from the available demonstrations. This ability is important for the stable performance of a robot in a chaotic environment, such as a kitchen, compared with a more structured setting such as a factory assembly line. By leaving task learning to the algorithm, human teleoperators can dictate a robot's actions without any programming knowledge and improve overall productivity in various settings. Because manually collecting gripper trajectories in large quantities is difficult, a successful application of learning from demonstrations must learn from a sparse set of examples while still predicting trajectories with high accuracy. Inspired by the development of transformer models for large language model tasks such as sentence translation and text generation, I adapt the transformer architecture for trajectory prediction. While previous works have trained end-to-end models that take images and context and generate control output, those works rely on a massive quantity of demonstrations and detailed annotations. To facilitate training on a sparse set of demonstrations, we created a training pipeline that consists of a DeepLabCut model for object position prediction, followed by a Task-Parameterized Transformer model that learns the demonstrated trajectories, supplemented with data augmentations that allow the model to overcome the constraint of a limited dataset. The resulting model outputs the predicted end-effector gripper trajectory and pose at each time step with better accuracy than previous works in trajectory prediction. (A minimal illustrative sketch of this pipeline appears after this record.)
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: machine learning
dc.subject: robotics
dc.subject: trajectory prediction
dc.subject: learning from demonstrations
dc.subject: imitation learning
dc.title: Task-Parameterized Transformer for Learning Gripper Trajectory from Demonstrations
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Systems Design Engineering
uws-etd.degree.discipline: Systems Design Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Applied Science
uws-etd.embargo.terms: 0
uws.contributor.advisor: Tripp, Bryan
uws.contributor.affiliation1: Faculty of Engineering
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
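
The abstract describes a two-stage pipeline: a DeepLabCut model predicts object positions, and a Task-Parameterized Transformer maps those positions to a gripper trajectory. The following is a minimal illustrative sketch of that idea, not the thesis code: it assumes PyTorch, assumes DeepLabCut supplies 2D object keypoints, and uses made-up dimensions (a 50-step horizon and a 7-value pose of xyz position plus quaternion).

# Minimal sketch (not the thesis implementation): a transformer whose
# learned per-time-step queries cross-attend to detected object keypoints
# (the task parameters) and decode an end-effector pose per step.
# All dimensions and names here are illustrative assumptions.
import torch
import torch.nn as nn

class TaskParameterizedTransformer(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=3,
                 horizon=50, pose_dim=7):
        super().__init__()
        # Embed each 2D keypoint (task parameter) into the model dimension.
        self.keypoint_embed = nn.Linear(2, d_model)
        # One learned query per predicted time step of the trajectory.
        self.step_query = nn.Parameter(torch.randn(horizon, d_model))
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        # Project each decoded step to a gripper pose.
        self.pose_head = nn.Linear(d_model, pose_dim)

    def forward(self, keypoints):
        # keypoints: (batch, n_keypoints, 2) pixel coordinates of task objects,
        # e.g. produced by a DeepLabCut detector upstream.
        memory = self.keypoint_embed(keypoints)
        queries = self.step_query.unsqueeze(0).expand(keypoints.size(0), -1, -1)
        decoded = self.decoder(queries, memory)
        return self.pose_head(decoded)  # (batch, horizon, pose_dim)

# Example: predict a 50-step trajectory from 4 detected object keypoints.
model = TaskParameterizedTransformer()
kp = torch.rand(1, 4, 2)   # stand-in for DeepLabCut keypoint detections
traj = model(kp)
print(traj.shape)          # torch.Size([1, 50, 7])

In this sketch the keypoints condition the trajectory through cross-attention, so moving a task object shifts the predicted path; the thesis's actual conditioning, augmentation, and training details are in the full text at the handle above.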

