Out-of-Distribution Detection for LiDAR-based 3D Object Detection
3D object detection is an essential part of automated driving, and deep neural networks (DNNs) have achieved state-of-the-art performance for this task. However, deep models are notorious for assigning high confidence scores to out-of-distribution (OOD) inputs, that is, inputs not drawn from the training distribution. Detecting OOD inputs is both challenging and essential for the safe deployment of models. OOD detection has been studied extensively for the classification task, but it has received little attention for the object detection task, particularly LiDAR-based 3D object detection. In this work, we focus on detecting OOD inputs for LiDAR-based 3D object detection. We formulate what OOD inputs mean for object detection and propose adaptations of several OOD detection methods to this task, enabled by our proposed feature extraction method. We also propose a contrastive loss that improves the performance of both the object detector and the OOD detection methods. To evaluate OOD detection methods, we develop a simple but effective technique for generating OOD objects for a given object detection model. Our evaluation on the KITTI dataset demonstrates an improvement over the baseline. It also shows that different OOD detection methods are biased toward detecting specific kinds of OOD objects, which emphasizes the importance of combining OOD detection methods and motivates further research in this direction.
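To make the idea of adapting classification-style OOD scores to detection concrete, the sketch below applies the maximum-softmax-probability (MSP) baseline to per-object classification logits, as would be produced by a detector head for each predicted box. This is an illustrative assumption, not the thesis's specific method: the function name, the example logits, and the threshold of 0.6 are all hypothetical.

```python
import numpy as np

def max_softmax_score(logits):
    """Baseline OOD score: maximum softmax probability (MSP).

    High scores suggest in-distribution objects; low scores flag
    candidates for OOD. `logits` has shape (num_objects, num_classes),
    e.g. per-box class logits from a 3D detector head (hypothetical).
    """
    # Numerically stable softmax over the class dimension.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.max(axis=1)

# Hypothetical per-object logits for a 3-class detector.
logits = np.array([
    [8.0, 0.5, 0.2],   # confident prediction -> likely in-distribution
    [1.1, 1.0, 0.9],   # near-uniform logits -> possible OOD object
])
scores = max_softmax_score(logits)
is_ood = scores < 0.6  # threshold is an illustrative choice
```

The same per-object feature-extraction step would let other scores (e.g. energy- or distance-based ones) be swapped in for MSP, which is the sense in which classification OOD methods can be adapted to detection.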
Cite this version of the work
Van Duong Nguyen (2022). Out-of-Distribution Detection for LiDAR-based 3D Object Detection. UWSpace. http://hdl.handle.net/10012/17902