Sparse2SOAP: Domain Adaptation for LiDAR-Based 3D Object Detection
In this work, we propose Sparse2SOAP, an extension of Sparse2Dense, a prior method that uses knowledge distillation in a teacher-student framework to densify 3D features, enabling its use for cross-domain LiDAR-based 3D object detection in autonomous driving. This is achieved by applying Stationary Object Aggregation Pseudo-labelling (SOAP) from prior work to generate high-quality pseudo-labels for Quasi-Stationary (QS) dense point cloud objects in Simply Aggregated (SA) point clouds. The dense object pseudo-labels can then be paired with the corresponding sparse object pseudo-labels, creating dense-sparse pairs for knowledge distillation. We additionally propose a masking method to handle knowledge distillation for dynamic objects. We evaluate the proposed method on the nuScenes and Waymo datasets for Unsupervised Domain Adaptation (UDA) tasks and observe an increase in mAP and AP for classes with many QS objects. To the best of our knowledge, we are the first to perform feature alignment between sparse and dense point cloud representations using aggregated point clouds in the context of UDA.
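The masked distillation idea above can be sketched as a feature-alignment loss between the sparse (student) and dense (teacher) feature maps, where grid cells belonging to dynamic objects are excluded so that only quasi-stationary regions contribute. This is a minimal illustrative sketch, not the thesis's actual implementation: the function name, the use of a simple MSE objective, and the boolean dynamic-object mask over a 2D feature grid are all assumptions for illustration.

```python
import numpy as np


def masked_distillation_loss(student_feat, teacher_feat, dynamic_mask):
    """Hypothetical masked feature-distillation loss (illustrative only).

    student_feat : (H, W, C) features from the sparse-input student branch.
    teacher_feat : (H, W, C) features from the dense-input teacher branch.
    dynamic_mask : (H, W) boolean grid, True where a dynamic object lies;
                   these cells are excluded from the alignment loss.
    """
    static = ~dynamic_mask                       # quasi-stationary cells only
    sq_err = (student_feat - teacher_feat) ** 2  # per-cell, per-channel error
    denom = max(int(static.sum()), 1)            # avoid division by zero
    # Broadcast the (H, W) mask over the channel axis and average
    # the squared error over the unmasked (static) cells.
    return float((sq_err * static[..., None]).sum() / denom)


# Tiny usage example on a 2x2 grid with 3 feature channels.
student = np.zeros((2, 2, 3))
teacher = np.ones((2, 2, 3))
mask = np.array([[True, False],
                 [False, False]])  # top-left cell holds a dynamic object
loss = masked_distillation_loss(student, teacher, mask)
```

In this toy example, three of the four cells are static, each contributing a squared error of 1 per channel, so the loss averages those contributions over the static cells only; the dynamic cell is ignored entirely.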
Cite this version of the work
Christopher Mannes (2023). Sparse2SOAP: Domain Adaptation for LiDAR-Based 3D Object Detection. UWSpace. http://hdl.handle.net/10012/19486