Detection of Small Objects in UAV Images via an Improved Swin Transformer-based Model
Automated detection of small objects such as vehicles in images of complex urban environments taken by unmanned aerial vehicles (UAVs) is one of the most challenging tasks in the computer vision and remote sensing communities, with applications ranging from traffic congestion surveillance to vision systems in intelligent transportation. Deep learning models, most of which are based on convolutional neural networks (CNNs), have been widely used to automatically detect objects in UAV images. However, detection accuracy is still often unsatisfactory due to the shortcomings of CNNs: a CNN aggregates information from nearby pixels, but pooling operations discard spatial information, making it difficult for CNNs to model long-range dependencies. In this thesis, we propose a Swin Transformer-based model that combines convolutions with the Swin Transformer to extract richer local information, mitigating the problem of detecting small objects against complex backgrounds in UAV images and further improving detection accuracy. In this way, our model leverages both the local feature extraction of convolutions and the global feature modeling of transformers. The framework consists of two main modules: a local context enhancement (LCE) module and a Residual U-Feature Pyramid Network (RSU-FPN) module. The LCE module applies dilated convolutions to enlarge the receptive field of each image pixel. Combined with the Swin Transformer block, it efficiently encodes diverse spatial contextual information and captures local associations and structural information within UAV images. The RSU-FPN module is designed as a two-level nested U-shaped structure with skip connections to integrate multi-scale feature maps. A loss function combining the normalized Gaussian Wasserstein distance and L1 loss is also introduced, which allows the model to be trained on imbalanced data.
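As an illustration of the loss described above, the sketch below computes a normalized Gaussian Wasserstein distance (NWD) between two bounding boxes and combines it with an L1 term. It follows the standard NWD formulation, in which each box (cx, cy, w, h) is modeled as a 2-D Gaussian so the squared 2-Wasserstein distance has a closed form; the constant `c`, the weight `alpha`, and the exact way the thesis mixes the two terms are assumptions for illustration, not the thesis's definitive implementation.

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Gaussian Wasserstein distance between two boxes.

    Boxes are (cx, cy, w, h). Each box is modeled as a Gaussian
    N((cx, cy), diag((w/2)^2, (h/2)^2)); the squared 2-Wasserstein
    distance between two such Gaussians reduces to a sum of squared
    differences of centers and half-extents. `c` is a dataset-dependent
    normalizing constant (an assumed value here).
    """
    w2_sq = ((box_a[0] - box_b[0]) ** 2
             + (box_a[1] - box_b[1]) ** 2
             + (box_a[2] / 2 - box_b[2] / 2) ** 2
             + (box_a[3] / 2 - box_b[3] / 2) ** 2)
    # Map the distance to a (0, 1] similarity; identical boxes give 1.
    return math.exp(-math.sqrt(w2_sq) / c)

def combined_loss(pred, target, alpha=0.5):
    """Hypothetical combination: NWD turned into a loss plus mean L1."""
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / 4
    return alpha * (1.0 - nwd(pred, target)) + (1 - alpha) * l1
```

Unlike IoU, the NWD term stays smooth and informative even when small predicted and ground-truth boxes do not overlap at all, which is what makes it attractive for tiny-object regression.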
The proposed method was compared with state-of-the-art methods on the UAVDT and VisDrone datasets. Our experimental results on the UAVDT dataset indicate that the proposed method increases average precision (AP) by 21.6%, 22.3%, and 25.5% over the Cascade R-CNN, PVT, and Dynamic R-CNN detectors, respectively, demonstrating its effectiveness and reliability for small object detection in UAV images.
Cite this version of the work
Weidong Liang (2023). Detection of Small Objects in UAV Images via an Improved Swin Transformer-based Model. UWSpace. http://hdl.handle.net/10012/19468