Show simple item record

dc.contributor.author Liang, Rocky
dc.description.abstract This thesis presents two learning-based approaches to the autonomous driving problem: end-to-end imitation learning and direct visual perception. Imitation learning uses expert demonstrations to build a policy that maps sensory stimuli to actions. During inference, the policy takes in readings from the vehicle's onboard sensors, such as cameras, radars, and lidars, and converts them into driving signals. Direct perception, on the other hand, uses these sensor readings to predict a set of features that define the system's operational state, or affordances; these affordances are then used by a physics-based controller to drive the vehicle. To reflect the context-specific, multimodal nature of the driving task, these models should be aware of the context, which in this case is driver intention. During development of the imitation learning approach, two methods of conditioning the model were trialed: the first provided the context as an input to the network, and the second used a branched model with each branch representing a different context. The branched model showed superior performance, so branching was also used to bring context awareness to the direct perception model. Because no preexisting datasets were available to train the direct perception model, a simulation-based data recorder was built to create training data. By creating new data that included lane change behavior, the first direct perception model with lane change capabilities was trained. Lastly, a kinematic and a dynamic controller were developed to complete the direct perception pipeline; both take advantage of having access to road curvature. The kinematic controller has a hybrid feedforward-feedback structure in which the road curvature is used as a feedforward term and lane deviations are used as feedback terms. The dynamic controller is inspired by model predictive control: it iteratively solves for the optimal steering angle that makes the vehicle follow a path matching the reference curvature, while also being assisted by lane deviation feedback.
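The hybrid feedforward-feedback kinematic controller described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: the function name, wheelbase value, feedback gains, and sign conventions are all assumptions.

```python
import math

def steering_command(curvature, lateral_offset, heading_error,
                     wheelbase=2.7, k_lat=0.5, k_head=1.0):
    """Steering angle (rad): feedforward from road curvature plus
    proportional feedback on lane deviations (illustrative sketch)."""
    # Feedforward term: kinematic bicycle model, tan(delta) = L * kappa,
    # steers along the reference curvature with zero deviation.
    delta_ff = math.atan(wheelbase * curvature)
    # Feedback terms: correct lateral offset (m, positive = left of lane
    # center) and heading error (rad) relative to the lane direction.
    delta_fb = -k_lat * lateral_offset - k_head * heading_error
    return delta_ff + delta_fb
```

On a straight lane with no deviation the command is zero; a nonzero reference curvature produces a nonzero feedforward steer even before any lane error accumulates, which is the advantage of having direct access to road curvature.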
dc.publisher University of Waterloo
dc.subject autonomous driving
dc.subject imitation learning
dc.subject direct perception
dc.subject vehicle dynamics
dc.subject deep learning
dc.title Imitation Learning and Direct Perception for Autonomous Driving
dc.type Master Thesis
dc.pending false
uws-etd.degree.department Mechanical and Mechatronics Engineering
uws-etd.degree.grantor University of Waterloo
uws-etd.degree Master of Science
uws.contributor.advisor Cao, Dongpu
uws.contributor.affiliation1 Faculty of Engineering


