
Adversarial Machine Learning and Defenses for Automated and Connected Vehicles

Date

2024-04-18

Authors

Zhang, Dayu

Publisher

University of Waterloo

Abstract

This thesis delves into adversarial machine learning in the context of Connected and Automated Vehicles (CAVs), presenting a comprehensive study of vulnerabilities and defense mechanisms against adversarial attacks in two critical areas: object detection and decision-making systems.

The research first introduces a novel adversarial patch generation technique targeting the YOLOv5 object detection algorithm, together with a comprehensive study of the transformations and parameters that affect the patch's effectiveness. The patch is then deployed in the CARLA simulation environment to assess its robustness under varied real-world conditions, such as changing weather and lighting. With all transformations applied during generation, the patch reduces YOLOv5's confidence in detecting the stop sign by 70% compared to the original stop sign under good lighting. Under sub-optimal lighting, for example in rainy weather, the patch reduces confidence by only 38%, because the patch itself becomes harder to detect. Overall, the optimized patch evades detection far more effectively than a random-noise patch under all environmental conditions. This part of the research demonstrates a novel way of generating adversarial patches and a new approach to testing them in an open-source simulator, CARLA, enabling better testing of autonomous vehicles against adversarial attacks in the future.

In parallel, this thesis investigates the susceptibility of Deep Reinforcement Learning (DRL) algorithms, in particular Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG), to black-box adversarial attacks executed through zeroth-order optimization methods such as ZO-SignSGD in a lane-changing scenario. The policies are first trained with finely tuned hyperparameters in the lane-changing environment until they achieve high performance. With these well-trained policies as a baseline, the black-box attack successfully fools both algorithms by optimally perturbing the state values to force the policy to drive straight, while keeping the perturbation small relative to the original state. Under attack, both DQN and DDPG fail to perform, achieving average rewards of 108 and 45 compared to their original performance of 310 and 232, respectively. A preliminary study of adversarial defenses is also performed, showing resistance to the attack and a slight increase in average reward. This part of the research uncovers significant vulnerabilities, demonstrating substantial performance degradation when DRL is used for decision-making in an autonomous vehicle.

Finally, the study underscores the importance of enhancing the security and resilience of machine learning algorithms embedded in CAV systems. Through a dual focus on offensive and defensive strategies, including the exploration of adversarial training, this work contributes to the foundational understanding of adversarial threats in autonomous driving and advocates for the integration of robust defense mechanisms to ensure the safety and reliability of future autonomous transportation systems.
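To make the patch-generation idea above concrete, the following is a minimal PyTorch-style sketch, not the thesis's actual code. The detector interface (a `model` callable assumed to return a stop-sign confidence score), the `apply_patch` helper, the patch size, and the transformation set are all illustrative assumptions; the overall pattern of optimizing patch pixels under random transformations, in the spirit of Expectation over Transformation, matches the approach described in the abstract.

```python
# Hedged sketch: transformation-robust adversarial patch optimization.
# `model(patched)` is an assumed interface returning the detector's
# stop-sign confidence as a scalar tensor; minimizing it suppresses detection.
import torch
import torch.nn.functional as F

def random_transform(patch):
    """Randomly vary brightness and add noise so the optimized patch
    stays effective under changing lighting conditions."""
    brightness = 0.6 + 0.8 * torch.rand(1)            # simulate lighting changes
    noisy = patch * brightness + 0.05 * torch.randn_like(patch)
    return noisy.clamp(0.0, 1.0)

def apply_patch(image, patch, box):
    """Paste the (transformed) patch onto the stop-sign region `box`."""
    x, y, w, h = box
    patched = image.clone()
    resized = F.interpolate(patch.unsqueeze(0), size=(h, w),
                            mode="bilinear", align_corners=False)
    patched[:, :, y:y + h, x:x + w] = resized
    return patched

def optimize_patch(model, images, box, steps=300, lr=0.01):
    """Gradient-descend on patch pixels to minimize the detector's
    stop-sign confidence, averaged over random transformations."""
    patch = torch.rand(3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for img in images:                            # batch of scene images
            patched = apply_patch(img, random_transform(patch), box)
            loss = loss + model(patched)              # assumed: confidence score
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                    # keep pixels in valid range
    return patch.detach()
```

Averaging the loss over random lighting transformations is what gives the patch its (partial) robustness to conditions like the rainy-weather case reported above.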
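The black-box attack family named above, ZO-SignSGD, can be sketched as follows. This is an illustrative version under stated assumptions, not the thesis's implementation: the `policy` query interface (returning per-action scores) and the attacker's loss are hypothetical. The key idea is estimating gradients from output queries alone, via random-direction finite differences, then taking signed steps while keeping the perturbation inside a small L-infinity budget.

```python
# Hedged sketch: ZO-SignSGD-style black-box perturbation of a DRL state.
# The attack only queries the policy's outputs; it never sees gradients.
import numpy as np

def attack_loss(policy, state, target_action):
    """Attacker's loss: lower when the policy favors the target action
    (e.g., 'go straight'). `policy` is assumed to return action scores."""
    scores = policy(state)
    return -scores[target_action]

def zo_signsgd_attack(policy, state, target_action,
                      steps=50, lr=0.01, mu=1e-3, n_dirs=20, eps=0.05):
    """Estimate the gradient with random-direction finite differences,
    then take signed steps (ZO-SignSGD); keep the perturbation `delta`
    within an L-infinity ball of radius `eps`."""
    delta = np.zeros_like(state)
    for _ in range(steps):
        grad_est = np.zeros_like(state)
        for _ in range(n_dirs):
            u = np.random.randn(*state.shape)         # random direction
            f_plus = attack_loss(policy, state + delta + mu * u, target_action)
            f_minus = attack_loss(policy, state + delta - mu * u, target_action)
            grad_est += (f_plus - f_minus) / (2 * mu) * u
        # sign() discards magnitude, so averaging over n_dirs is optional
        delta -= lr * np.sign(grad_est / n_dirs)
        delta = np.clip(delta, -eps, eps)             # small perturbation budget
    return state + delta
```

The `eps` clip is what keeps the perturbation "small compared to the original" state, as in the lane-changing experiments summarized above.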
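For the adversarial-training defense explored in the preliminary study, a hedged sketch of the general recipe is below; the `policy`, gym-style `env`, and `attack_fn` interfaces are assumed for illustration and are not the thesis's code.

```python
# Hedged sketch: adversarial training for a DRL agent. The agent
# sometimes observes attacked states during training, so the learned
# policy becomes resistant to the same perturbations used at test time.
import random

def adversarial_training_step(policy, env, state, attack_fn, p_attack=0.5):
    """With probability p_attack, act on an attacked observation; the
    transition is stored and learned from as usual."""
    observed = attack_fn(state) if random.random() < p_attack else state
    action = policy.act(observed)                     # assumed policy API
    next_state, reward, done, _ = env.step(action)    # classic gym step
    policy.store_and_learn(observed, action, reward,  # assumed replay/update hook
                           next_state, done)
    return next_state, done
```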

Keywords

autonomous vehicle, reinforcement learning, deep learning, machine vision, adversarial machine learning, security, zeroth-order optimization
