
Environment Modeling, Action Classification, and Control for Urban Automated Driving

Date

2022-12-23

Authors

Dempster, Rowan

Publisher

University of Waterloo

Abstract

This thesis discusses the design and implementation of WATonomous' Automated Driving Stack (ADS), which is capable of performing robo-taxi services in specific operational domains when deployed to WATonomous' research vehicle (Bolty). Three ADS modules are discussed in detail: (1) mapping, environment modeling, and behavioral planning, (2) action classification in video streams, and (3) trajectory planning and control. Additionally, the software architecture within which the ADS is developed and deployed, and the ADS data pipeline itself, are outlined.

The thesis begins with preliminaries on WATonomous' Dockerized software architecture (coined watod), which runs the ADS modules and orchestrates their communication. Owing to its Dockerized, cloud-based design, the watod ecosystem enables rapid prototyping of new software modules, rapid onboarding of new team members, and parallel execution of many ADS development instances on the Virtual Machines (VMs) of the WATonomous server cluster. Cloud-based CARLA simulation development of the ADS and deployment to the Bolty research vehicle are also encapsulated in and facilitated by the watod ecosystem. Because the physical platform is replicated in the Carla ROS Bridge sensor configuration, the ADS can be developed in simulation and deployed to the physical research vehicle without modifications to the ADS modules. The design of the ADS data pipeline is also presented, from raw sensor input to the Controller Area Network (CAN) bus interface, as well as the human-computer interface.

The first ADS module discussed is the mapping and environment modeling module. Environment modeling is the backbone of how autonomous agents understand the world, and therefore has significant implications for decision-making and verification. Motivated by the success of relational mapping tools such as Lanelet2, we present the Dynamic Relation Graph (DRG), a novel method for extending prior relational maps with online observations, creating a unified environment model that incorporates both prior and online data sources. Our prototype implementation models a finite set of heterogeneous features, including road signage and pedestrian movement; however, the methodology behind the DRG can be extended to a wider range of features without increasing the complexity of behavioral planning. Simulated stress tests indicate the DRG's effectiveness in decreasing decision-making complexity, and deployment to the WATonomous research vehicle demonstrates its practical utility. The prototype code is available at https://github.com/WATonomous/DRG.
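To make the DRG concept concrete, here is a minimal, hypothetical Python sketch (not the thesis's actual implementation or API; class and field names are illustrative assumptions) of how a prior relational map could be extended with online observations so that behavioral planning queries a single unified model:

```python
from dataclasses import dataclass, field

@dataclass
class Lanelet:
    """A prior-map element, e.g. a Lanelet2 lanelet (simplified)."""
    lanelet_id: int
    successors: list = field(default_factory=list)  # prior relational structure

@dataclass
class DynamicRelation:
    """A relation node created from an online observation."""
    kind: str            # e.g. "stop_sign", "pedestrian_crossing"
    lanelet_id: int      # prior-map element the observation attaches to
    attributes: dict = field(default_factory=dict)

class DynamicRelationGraph:
    """Unified environment model: prior map plus online observations."""

    def __init__(self, prior_lanelets):
        self.lanelets = {l.lanelet_id: l for l in prior_lanelets}
        self.relations = []  # dynamic relations fused in at runtime

    def add_observation(self, kind, lanelet_id, **attributes):
        """Attach an online observation to the prior map as a relation node."""
        self.relations.append(DynamicRelation(kind, lanelet_id, attributes))

    def constraints_for(self, lanelet_id):
        """Behavioral planning queries one interface for prior and online data."""
        return [r for r in self.relations if r.lanelet_id == lanelet_id]

# Example: a stop sign detected online constrains the lanelet the ego vehicle is on.
drg = DynamicRelationGraph([Lanelet(1, successors=[2]), Lanelet(2)])
drg.add_observation("stop_sign", lanelet_id=1, distance_m=12.0)
print(drg.constraints_for(1))
```

The point of the sketch is that online observations become first-class relations attached to prior-map elements, so the planner's query interface stays the same whether a constraint originates from the prior map or from perception.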
The second ADS module discussed is the action classification module. Applied in the context of Autonomous Vehicles (AVs), action classification algorithms can enrich an AV's environment model and understanding of the world, improving behavioral planning decisions. Toward these improvements in AV decision-making, we propose a novel online action recognition system, coined the Road Action Detection Network (RAD-Net). RAD-Net formulates the problem of active agent detection and adapts ideas about actor-context relations from human activity recognition into a straightforward two-stage pipeline for action detection and classification. We show that the proposed scheme outperforms the baseline on the ICCV 2021 Road Challenge dataset. Furthermore, by integrating RAD-Net with the ADS' perception stack and the DRG, we demonstrate how a higher-order understanding of agent actions in the environment can improve decisions on a real AV system.

The last ADS module discussed is trajectory planning and control. Trajectory planning and control have historically been separated into two modules in automated driving stacks: trajectory planning focuses on higher-level tasks such as avoiding obstacles and staying on the road surface, while the controller tries to follow an ever-changing reference trajectory as closely as it can. We argue that this separation is (1) flawed, due to the mismatch between planned trajectories and what the controller can feasibly execute, and (2) unnecessary, given the flexibility of the Model Predictive Control (MPC) paradigm. Instead, this thesis presents a unified MPC-based trajectory planning and control scheme that guarantees feasibility with respect to road boundaries and the static and dynamic environment, and that enforces passenger comfort constraints. The scheme is evaluated rigorously in a variety of scenarios focused on proving the effectiveness of the Optimal Control Problem (OCP) design and the real-time solution methods. The prototype code is available at github.com/WATonomous/control.
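As an illustration of the unified planning-and-control idea, here is a minimal CasADi-based sketch of a single OCP that plans and controls in one step. It uses toy kinematic-bicycle dynamics, a straight-road corridor standing in for road-boundary constraints, and bounded acceleration and steering standing in for comfort constraints; the horizon, model, and parameter values are illustrative assumptions, not the thesis's actual OCP formulation.

```python
import casadi as ca

N, dt, L = 20, 0.1, 2.7          # horizon steps, step size [s], wheelbase [m]
opti = ca.Opti()

x = opti.variable(4, N + 1)      # states: [x-pos, y-pos, heading, speed]
u = opti.variable(2, N)          # controls: [acceleration, steering angle]
x0 = opti.parameter(4)           # current vehicle state (set each control cycle)

opti.subject_to(x[:, 0] == x0)
for k in range(N):               # discretized kinematic bicycle model
    opti.subject_to(x[0, k + 1] == x[0, k] + dt * x[3, k] * ca.cos(x[2, k]))
    opti.subject_to(x[1, k + 1] == x[1, k] + dt * x[3, k] * ca.sin(x[2, k]))
    opti.subject_to(x[2, k + 1] == x[2, k] + dt * x[3, k] / L * ca.tan(u[1, k]))
    opti.subject_to(x[3, k + 1] == x[3, k] + dt * u[0, k])

# Comfort constraints: bounded acceleration and steering.
opti.subject_to(opti.bounded(-3.0, u[0, :], 2.0))
opti.subject_to(opti.bounded(-0.5, u[1, :], 0.5))
# Road-boundary constraint: stay inside a straight corridor around the lane centre.
opti.subject_to(opti.bounded(-1.5, x[1, :], 1.5))

# Objective: hold a reference speed, stay near the lane centre, use little control effort.
v_ref = 8.0
opti.minimize(ca.sumsqr(x[3, :] - v_ref) + ca.sumsqr(x[1, :]) + 0.1 * ca.sumsqr(u))

opti.solver("ipopt")
opti.set_value(x0, [0.0, 0.2, 0.0, 5.0])   # example state: slightly off-centre at 5 m/s
sol = opti.solve()
print(sol.value(u[:, 0]))        # first control action, applied to the vehicle
```

In this framing there is no separate reference trajectory to track: feasibility with respect to the corridor and the input bounds is enforced directly as constraints of the OCP, which is the motivation given above for merging planning and control into one MPC scheme.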

Keywords

autonomous driving, environment modeling, mapping, decision making, action classification, trajectory planning, optimal control, automated driving
