Show simple item record

dc.contributor.author: Liang, Weidong
dc.date.accessioned: 2023-05-23 17:40:45 (GMT)
dc.date.available: 2023-05-23 17:40:45 (GMT)
dc.date.issued: 2023-05-23
dc.date.submitted: 2023-05-11
dc.identifier.uri: http://hdl.handle.net/10012/19468
dc.description.abstract: Automated detection of small objects such as vehicles in images of complex urban environments taken by unmanned aerial vehicles (UAVs) is one of the most challenging tasks in the computer vision and remote sensing communities, with applications ranging from traffic congestion surveillance to vision systems for intelligent transportation. Deep learning models, most of which are based on convolutional neural networks (CNNs), have been widely used to automatically detect objects in UAV images. However, detection accuracy is often still unsatisfactory because of shortcomings of CNNs: a CNN aggregates information from nearby pixels, but spatial information is lost through pooling operations, making it difficult for CNNs to model certain long-range dependencies. In this thesis, we propose a Swin Transformer-based model that combines convolutions with the Swin Transformer to extract more local information, mitigating the problem of detecting small objects against complex backgrounds in UAV images and further improving detection accuracy. Our model thereby leverages both the local feature extraction of convolutions and the global feature modeling of transformers. The framework consists of two main modules: a local context enhancement (LCE) module and a Residual U-Feature Pyramid Network (RSU-FPN) module. The LCE module applies dilated convolutions to increase the receptive field at each image pixel; combined with the Swin Transformer block, it efficiently encodes spatial contextual information and captures local associations and structural information within UAV images. The RSU-FPN module is designed as a two-level nested U-shaped structure with skip connections to integrate multi-scale feature maps. A loss function combining the normalized Gaussian Wasserstein distance with an L1 loss is also introduced, which allows the model to be trained on imbalanced data. The proposed method was compared with state-of-the-art methods on the UAVDT and VisDrone datasets. Experimental results on the UAVDT dataset show that the proposed method increases the average precision (AP) by 21.6%, 22.3%, and 25.5% over the Cascade R-CNN, PVT, and Dynamic R-CNN detectors, respectively, demonstrating its effectiveness and reliability for small object detection in UAV images. [en]
dc.language.iso: en [en]
dc.publisher: University of Waterloo [en]
dc.title: Detection of Small Objects in UAV Images via an Improved Swin Transformer-based Model [en]
dc.type: Master Thesis [en]
dc.pending: false
uws-etd.degree.department: Systems Design Engineering [en]
uws-etd.degree.discipline: System Design Engineering [en]
uws-etd.degree.grantor: University of Waterloo [en]
uws-etd.degree: Master of Applied Science [en]
uws-etd.embargo.terms: 0 [en]
uws.contributor.advisor: Li, Jonathan
uws.contributor.advisor: Xu, Linlin
uws.contributor.affiliation1: Faculty of Engineering [en]
uws.published.city: Waterloo [en]
uws.published.country: Canada [en]
uws.published.province: Ontario [en]
uws.typeOfResource: Text [en]
uws.peerReviewStatus: Unreviewed [en]
uws.scholarLevel: Graduate [en]
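The abstract describes the LCE module only at a high level: dilated convolutions enlarge the receptive field so that local context can be combined with a Swin Transformer block. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the class name, dilation rates, normalization, and fusion by a 1x1 convolution are assumptions for illustration, not the architecture from the thesis.

```python
import torch
import torch.nn as nn

class DilatedLocalContext(nn.Module):
    """Illustrative stand-in for an LCE-style block: parallel dilated 3x3
    convolutions enlarge the receptive field without downsampling, so the
    output can be merged with transformer features that model global context."""

    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        # One branch per dilation rate; padding=dilation keeps spatial size.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.GELU(),
            )
            for d in dilations
        )
        # 1x1 convolution fuses the concatenated branches back to `channels`.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        # Residual connection keeps the original features alongside the
        # enlarged-receptive-field context.
        return x + self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```

For example, `DilatedLocalContext(96)(torch.randn(1, 96, 64, 64))` returns a tensor of the same shape, so such a block can be dropped in before or after a Swin Transformer stage without changing feature-map sizes.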
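The abstract also mentions a loss that combines the normalized Gaussian Wasserstein distance (NWD) with an L1 term. The thesis text is not reproduced in this record, so the following PyTorch sketch only shows the commonly used form of NWD under stated assumptions: each box (cx, cy, w, h) is modeled as a 2-D Gaussian with mean (cx, cy) and covariance diag((w/2)^2, (h/2)^2), and the constant C and the weight l1_weight are placeholder hyperparameters rather than values taken from the thesis.

```python
import torch
import torch.nn.functional as F

def nwd_l1_loss(pred, target, C=12.8, l1_weight=1.0):
    """Sketch of a combined NWD + L1 loss for (N, 4) boxes in (cx, cy, w, h) form.
    C and l1_weight are assumed hyperparameters, not taken from the thesis."""
    # Model each box as a 2-D Gaussian N(mu, Sigma) with mu = (cx, cy)
    # and Sigma = diag((w/2)^2, (h/2)^2).
    mu_p, hw_p = pred[:, :2], pred[:, 2:] / 2.0
    mu_t, hw_t = target[:, :2], target[:, 2:] / 2.0

    # The squared 2nd-order Wasserstein distance between two such Gaussians
    # reduces to a squared Euclidean distance in (cx, cy, w/2, h/2) space.
    w2 = ((mu_p - mu_t) ** 2).sum(dim=1) + ((hw_p - hw_t) ** 2).sum(dim=1)

    # Normalize to (0, 1] with an exponential; C is a scale constant.
    nwd = torch.exp(-torch.sqrt(w2.clamp(min=1e-7)) / C)

    # 1 - NWD behaves like an IoU-style loss but stays informative for tiny
    # boxes; the L1 term directly regresses the box parameters.
    return (1.0 - nwd).mean() + l1_weight * F.l1_loss(pred, target)
```

As a sanity check, identical predicted and target boxes give NWD = exp(0) = 1 and an L1 term of 0, so the loss is 0; unlike IoU-based losses, the NWD term still varies smoothly when tiny boxes barely overlap.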

