Learning a Motion Policy to Navigate Environments with Structured Uncertainty

dc.contributor.author: Tsang, Florence
dc.date.accessioned: 2020-01-24T14:26:16Z
dc.date.available: 2020-01-24T14:26:16Z
dc.date.issued: 2020-01-24
dc.date.submitted: 2020-01-13
dc.description.abstract: Navigating in uncertain environments is a fundamental ability that robots must have in many applications, such as moving goods in a warehouse or transporting materials in a hospital. While much work has been done on navigation that reacts to unexpected obstacles, there is a lack of research on learning to predict where obstacles may appear based on historical data and utilizing those predictions to form better navigation plans. Such predictions may increase the efficiency of a robot that has been operating in the same environment for a long period of time. This thesis first introduces the Learned Reactive Planning Problem (LRPP), which formalizes the above problem, and then proposes a method to capture past obstacle information and its correlations. We introduce an algorithm that uses this information to make predictions about the environment and form a plan for future navigation. The plan balances exploiting obstacle correlations (i.e., observing that obstacle A is present implies obstacle B is present as well) against moving towards the goal. Our experiments in an idealized simulation show promising results, with the robot outperforming a commonly used optimistic algorithm. Second, we introduce the Learn a Motion Policy (LAMP) framework, which can be added to navigation stacks on real robots. This framework aims to move the problem of predicting and navigating through uncertainty from idealized simulations to realistic settings. Our simulation results in Gazebo and experiments on a real robot show that the LAMP framework has the potential to improve upon existing navigation stacks: it confirms the results from the idealized simulation, while also highlighting challenges that still need to be addressed.
dc.identifier.uri: http://hdl.handle.net/10012/15562
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.relation.uri: https://github.com/nightingale0131/lamp
dc.relation.uri: https://github.com/nightingale0131/lrpp
dc.subject: robot navigation
dc.subject: reinforcement learning
dc.subject: motion planning
dc.subject.lcsh: Robots--Motion
dc.subject.lcsh: Uncertainty (Information theory)
dc.subject.lcsh: Robots--Programming
dc.subject.lcsh: Robots--Control systems
dc.title: Learning a Motion Policy to Navigate Environments with Structured Uncertainty
dc.type: Master Thesis
uws-etd.degree: Master of Applied Science
uws-etd.degree.department: Electrical and Computer Engineering
uws-etd.degree.discipline: Electrical and Computer Engineering
uws-etd.degree.grantor: University of Waterloo
uws.comment.hidden: The following needs to be placed somewhere on the page with my thesis: In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of the University of Waterloo's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.
uws.contributor.advisor: Smith, Stephen
uws.contributor.affiliation1: Faculty of Engineering
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
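The abstract describes a planner that exploits correlations between obstacles (observing that obstacle A is blocked raises the predicted probability that obstacle B is blocked) and weighs that prediction against progress towards the goal. The sketch below is a minimal, hypothetical illustration of that idea only; the obstacle names, history, and costs are invented for this example and are not taken from the thesis or the linked lamp/lrpp repositories.

```python
# Hypothetical sketch: exploiting obstacle correlations from historical data,
# in the spirit of the abstract. All names and numbers here are invented.

# Historical observations: each entry records the state of obstacles A and B
# during one past traversal of the environment.
history = [
    ("A_blocked", "B_blocked"),
    ("A_blocked", "B_blocked"),
    ("A_free",    "B_free"),
    ("A_free",    "B_free"),
    ("A_blocked", "B_blocked"),
    ("A_free",    "B_blocked"),
]

def p_b_blocked_given_a(a_state):
    """Empirical P(B blocked | observed state of A) from the history."""
    matching = [pair for pair in history if pair[0] == a_state]
    blocked = sum(1 for pair in matching if pair[1] == "B_blocked")
    return blocked / len(matching)

# Having just observed that A is blocked, predict B without visiting it:
p_b = p_b_blocked_given_a("A_blocked")

# Expected-cost comparison between a short route through B and a longer
# detour that avoids B entirely (illustrative costs only).
short_route_cost = 10 + p_b * 25  # pay a 25-step backtrack if B is blocked
detour_cost = 22                  # safe but longer

choice = "short" if short_route_cost < detour_cost else "detour"
```

In this toy history A and B are strongly correlated, so seeing A blocked makes the short route's expected cost exceed the detour's, and the planner commits to the detour before ever observing B. This is the kind of trade-off the LRPP formalizes; the thesis's actual algorithm and representation of correlations are more general than this two-obstacle frequency count.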

Files

Original bundle
Name: Tsang_Florence.pdf
Size: 4.44 MB
Format: Adobe Portable Document Format
Description: Thesis

License bundle
Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission