Title: Operational Design Domain Monitoring and Augmentation for Autonomous Driving
Author: Sun, Chen
Type: Doctoral Thesis
Dates: 2022-12-19; 2022-12-06
URI: http://hdl.handle.net/10012/18964
Language: en
Keywords: Operational Design Domain; Robust Learning; ODD Monitoring; ODD Ontology

Abstract:
Recent technological advances in Autonomous Driving Systems (ADS) show promise in increasing traffic safety. One of the critical challenges in developing an ADS with a higher level of driving automation is deriving safety requirements for its components and monitoring the system's performance to ensure safe operation. The Operational Design Domain (ODD) confines the safety claims of an ADS to the context in which it is designed to function: it represents the operating conditions under which the ADS operates and satisfies its safety requirements. To reach a state of "informed safety", the system's ODD must be explored and well tested in the development phase, while at run time the ADS must monitor the operating conditions and the corresponding risks. Existing research and technologies neither express the ODD quantitatively nor provide a general monitoring strategy for the learning-based components that are heavily used in recent ADS. The safety-critical nature of ADS demands thorough validation, continual improvement, and safety monitoring of these data-driven modules.

This dissertation investigates ODD extraction, augmentation, and real-time monitoring for ADS with machine learning components, organized into three parts. In the first part, we propose a framework to systematically specify and extract the ODD, including an environment model and formal, quantitative safety specifications for systems with machine learning components; an empirical demonstration of the ODD extraction process based on predefined specifications is presented using the proposed environment model. In the second part, ODD augmentation during development is modelled as an iterative engineering problem addressed by robust learning to handle unseen natural variations. Vision tasks in ADS are the main focus, and model-based robustness training is shown both to improve model performance and to support the extraction of edge cases during the iterative process. The testing procedure also yields valuable priors on the probability of failure in the known testing environment, which can be exploited during real-time monitoring. Finally, a solution for online ODD monitoring is provided that encodes knowledge from the offline validation process in Bayesian graphical models to improve safety-warning accuracy. While the algorithms and techniques proposed in this dissertation can be applied to many safety-critical robotic systems with machine learning components, the main focus lies on autonomous driving.
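
The first part of the abstract treats the ODD as a formal, quantitative specification over operating conditions. The dissertation's own formalism is not reproduced here; the following Python sketch is only an illustration of the general idea, with all attribute names, ranges, and thresholds invented for this example. It encodes an ODD as bounds and categories over a few environment attributes and checks whether an observed operating condition falls inside it.

    from dataclasses import dataclass

    # Hypothetical, illustrative ODD specification: attribute names, ranges,
    # and categories are invented for this sketch, not taken from the thesis.
    @dataclass(frozen=True)
    class OddSpec:
        max_speed_kph: float        # upper bound on ego speed covered by the ODD
        min_visibility_m: float     # minimum visibility distance
        allowed_weather: frozenset  # categorical operating conditions
        allowed_road_types: frozenset

        def contains(self, condition: dict) -> bool:
            """Return True if the observed operating condition lies inside the ODD."""
            return (
                condition["speed_kph"] <= self.max_speed_kph
                and condition["visibility_m"] >= self.min_visibility_m
                and condition["weather"] in self.allowed_weather
                and condition["road_type"] in self.allowed_road_types
            )

    # Example usage with made-up numbers.
    odd = OddSpec(
        max_speed_kph=80.0,
        min_visibility_m=150.0,
        allowed_weather=frozenset({"clear", "light_rain"}),
        allowed_road_types=frozenset({"highway", "urban"}),
    )
    observed = {"speed_kph": 62.0, "visibility_m": 90.0,
                "weather": "light_rain", "road_type": "highway"}
    print(odd.contains(observed))  # False: visibility below the specified minimum

A quantitative representation of this kind is what makes ODD extraction and run-time containment checks mechanical rather than judgment-based, which is the motivation the abstract gives for the first part.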
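The second part relies on model-based robustness training against natural variations in the visual input. The dissertation's training pipeline is not reproduced here; the NumPy sketch below only illustrates the kind of parameterized natural perturbation model (brightness and contrast shifts, with invented parameter ranges) that such training and edge-case extraction could sample over.

    import numpy as np

    # Hypothetical parameterized "natural variation" model: brightness and
    # contrast shifts with invented ranges, applied to an image batch in [0, 1].
    def perturb(images: np.ndarray, brightness: float, contrast: float) -> np.ndarray:
        """Apply a brightness offset and contrast scaling to a batch of images."""
        mean = images.mean(axis=(1, 2, 3), keepdims=True)
        out = (images - mean) * contrast + mean + brightness
        return np.clip(out, 0.0, 1.0)

    def sample_perturbations(images, n_samples=8, rng=None):
        """Sample perturbed copies; in robust training these would be scored by
        the model, and the worst-performing ones kept as candidate edge cases."""
        rng = rng if rng is not None else np.random.default_rng(0)
        for _ in range(n_samples):
            b = rng.uniform(-0.2, 0.2)   # brightness shift (illustrative range)
            c = rng.uniform(0.7, 1.3)    # contrast factor (illustrative range)
            yield (b, c), perturb(images, b, c)

    batch = np.random.default_rng(1).random((4, 32, 32, 3))  # dummy image batch
    for (b, c), perturbed in sample_perturbations(batch, n_samples=2):
        print(f"brightness={b:+.2f} contrast={c:.2f} mean={perturbed.mean():.3f}")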
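The third part fuses priors on failure probability obtained from offline testing with run-time observations through Bayesian graphical models. As a heavily simplified, hypothetical stand-in for that idea, the sketch below maintains a Beta-Bernoulli posterior over the failure rate of a perception module in a given operating condition: offline test results set the prior, online outcomes update it, and a warning is raised when the posterior mean exceeds a threshold. It sketches the general Bayesian-updating principle only, not the dissertation's actual model; every number is illustrative.

    # Simplified Beta-Bernoulli monitor: a stand-in for the Bayesian graphical
    # models described in the abstract. All numbers below are illustrative.
    class FailureRateMonitor:
        def __init__(self, offline_failures: int, offline_trials: int,
                     warn_threshold: float = 0.05):
            # Offline validation results define the Beta(a, b) prior.
            self.a = offline_failures + 1.0
            self.b = (offline_trials - offline_failures) + 1.0
            self.warn_threshold = warn_threshold

        def update(self, failed: bool) -> None:
            """Incorporate one online outcome (True = observed failure)."""
            if failed:
                self.a += 1.0
            else:
                self.b += 1.0

        def failure_probability(self) -> float:
            """Posterior mean of the failure rate."""
            return self.a / (self.a + self.b)

        def should_warn(self) -> bool:
            return self.failure_probability() > self.warn_threshold

    # Prior from offline testing: e.g. 3 failures in 200 test runs (made up).
    monitor = FailureRateMonitor(offline_failures=3, offline_trials=200)
    for outcome in [False, False, True, False, True]:  # hypothetical online outcomes
        monitor.update(outcome)
    print(round(monitor.failure_probability(), 3), monitor.should_warn())

Carrying the offline prior into the online monitor is what lets sparse run-time evidence be interpreted against the much larger body of validation data, which is the mechanism the abstract credits for improved safety-warning accuracy.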