
dc.contributor.author: Vianney, Jean Marie Uwabeza
dc.date.accessioned: 2020-01-22 15:19:51 (GMT)
dc.date.available: 2020-01-22 15:19:51 (GMT)
dc.date.issued: 2020-01-22
dc.date.submitted: 2020-01-17
dc.identifier.uri: http://hdl.handle.net/10012/15525
dc.description.abstract: Recent developments in autonomous driving involve high-level computer vision and detailed road scene understanding. Today, most autonomous vehicles use the mediated perception approach for path planning and control, which relies heavily on high-definition 3D maps and real-time sensors. Recent research efforts aim to substitute these massive HD maps with coarse road attributes. In this thesis, we follow the direct perception approach to train a deep neural network for affordance learning in autonomous driving. The goal and main contributions of this thesis are twofold. First, we develop an affordance learning model based on freely available Google Street View panoramas and OpenStreetMap road vector attributes. Driving scene understanding can be achieved by learning affordances from images captured by car-mounted cameras. Such scene understanding may be useful for corroborating base maps such as HD maps, so that the required data storage space is minimized and the data remain available for real-time processing. We experimentally compare the road attribute identification capability of human volunteers against that of the trained model. The results indicate that this method could serve as a cheaper way to collect training data for autonomous driving, and cross-validation results further indicate the effectiveness of the trained model. Second, we propose I2MAP (image-to-map annotation proximity algorithm), a scalable and affordable data collection framework for autonomous driving systems. We build an automated labeling pipeline covering both vehicle dynamics and static road attributes. The data collected and annotated under our framework are suitable for direct perception and end-to-end imitation learning. Our benchmark consists of 40,000 images with more than 40 affordance labels, captured under varying times of day and weather conditions, including very challenging heavy snow. We train and evaluate a ConvNet-based traffic flow prediction model for driver warning and suggestion under low-visibility conditions.
dc.language.iso: en
dc.publisher: University of Waterloo
dc.relation.uri: https://uwaterloo.ca/cognitive-autonomous-driving-lab/resources
dc.subject: perception engineering
dc.subject: autonomous driving
dc.subject: self-driving
dc.subject: data engineering
dc.subject: computer vision
dc.subject: machine learning
dc.subject: affordance learning
dc.subject: scene understanding
dc.subject.lcsh: Computer vision
dc.subject.lcsh: Automated vehicles
dc.subject.lcsh: Machine learning
dc.title: Static and Dynamic Affordance Learning in Vision-based Direct Perception for Autonomous Driving
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Mechanical and Mechatronics Engineering
uws-etd.degree.discipline: Mechanical Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Applied Science
uws.contributor.advisor: Cao, Dongpu
uws.contributor.affiliation1: Faculty of Engineering
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate

