Show simple item record

dc.contributor.author: McLaughlin, Evan
dc.date.accessioned: 2020-08-21 15:24:55 (GMT)
dc.date.available: 2020-08-21 15:24:55 (GMT)
dc.date.issued: 2020-08-21
dc.date.submitted: 2020-08-12
dc.identifier.uri: http://hdl.handle.net/10012/16152
dc.description.abstract: Current bridge inspection practices are outdated compared to the advanced technologies available today, and there is significant room for improvement. For example, spalls are inspected by visual assessment and delaminations are inspected by sounding for hollow areas in the concrete. This yields coarse size estimation and subjective measurements, a problem exacerbated by limited funding. These limitations severely restrict the inspection information provided to an engineer, making adequate bridge management difficult and bridge repairs expensive. Inspection researchers are aware of this problem, and there is therefore significant focus on applying advanced technologies to improve the accuracy and economic efficiency of routine bridge inspections for improved bridge management.

The Structural Dynamics Identification and Control (SDIC) research lab at the University of Waterloo has been working to develop a process for automated end-to-end inspection of spalls and delaminations in reinforced concrete bridges that tightens size estimation, removes subjectivity, and improves accessibility. This process combines the accessibility benefits of robotics with the detailed 3D structural modelling of state-of-the-art simultaneous localization and mapping (SLAM) and the accurate, objective labeling of state-of-the-art convolutional neural networks (CNNs). The major steps required for this automated end-to-end inspection can be broadly divided into five components: 1) a mobile data collection platform equipped with lidar and camera sensors, 2) a mapping component to fuse data from the various sensors into a common reference frame, 3) a defect labeling component to automatically label defects in images, 4) a map labeling component to semantically enrich the 3D map with pixel information from images, and 5) a non-subjective, automated defect quantification component.

The work in this thesis focuses specifically on components 3), 4), and 5). These three components assume that data has been collected by lidar and camera sensors (Component 1) and that a 3D map of the bridge structure has been generated by SLAM (Component 2). For Component 3, this thesis presents an implementation of MobileNetV2/DeepLabV3, a state-of-the-art pixel-wise CNN, for fully automated pixel-wise labeling of spalls and delaminations in visual and infrared images, respectively. Spalls are labeled with 71.4% mean intersection over union (mIoU) and delaminations with 82.7% mIoU, which is reasonable compared to the same CNN's score of 77.3% on benchmark datasets. For Component 4, an algorithm based on the pinhole camera model and ray tracing is developed to intelligently fuse the CNN labels and colour data stored in pixels with the generated 3D point cloud. This yields a spatially accurate 3D map of the scanned structure that is colourized and semantically enriched with defect information. This enables the last component, which implements an algorithm to automatically extract, organize, and quantify areas for both spall and delamination defects present in the semantically labeled 3D map. A comparison is performed to test the effect of using manual ground-truth defect labels in the images versus automated CNN labels, with all else held constant. The comparison showed a defect size error of 25.9% for spalls and 13.6% for delaminations, which is proportional to the 28.6% and 17.3% mIoU errors reported for spalls and delaminations respectively at the image labeling step. This is evidence that the pipeline can be used for any defect area quantification, where the optimal CNN can be chosen for automated labeling based on a tradeoff between accuracy and computational requirements. Future work involves extending this pipeline to include additional defect quantification, such as crack length and crack width.
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: deep learning
dc.subject: computer vision
dc.subject: bridge inspection
dc.subject: robotics
dc.subject: delamination
dc.subject: spall
dc.title: A Deep Learning Approach for Automating Concrete Bridge Defect Assessment Using Computer Vision
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Civil and Environmental Engineering
uws-etd.degree.discipline: Civil Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Applied Science
uws.contributor.advisor: Narasimhan, Sriram
uws.contributor.affiliation1: Faculty of Engineering
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
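
Note on the map labeling step (Component 4 in the abstract above): it fuses per-pixel CNN output with the SLAM-generated point cloud using the pinhole camera model and ray tracing. The sketch below is a minimal illustration of this kind of label fusion and is not the thesis implementation: for brevity it projects 3D points into the labeled image (the inverse of ray tracing from pixels into the cloud) and ignores occlusion. All function and variable names (label_point_cloud, K, T_world_to_cam, label_mask) are assumptions introduced for illustration.

# Minimal sketch (assumed, not the thesis code): attach per-pixel CNN defect
# labels to a 3D point cloud via the pinhole camera model.
import numpy as np

def label_point_cloud(points_world, K, T_world_to_cam, label_mask):
    """Assign each visible 3D point a class label from the CNN output
    (e.g. 0 = background, 1 = spall, 2 = delamination).

    points_world   : (N, 3) points in the world/map frame
    K              : (3, 3) camera intrinsic matrix
    T_world_to_cam : (4, 4) homogeneous world-to-camera transform
    label_mask     : (H, W) integer label image produced by the CNN
    """
    H, W = label_mask.shape
    labels = np.full(len(points_world), -1, dtype=int)  # -1 = not visible

    # Transform the points into the camera frame.
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_world_to_cam @ pts_h.T).T[:, :3]

    # Project points in front of the camera onto the image plane.
    in_front = pts_cam[:, 2] > 0
    uvw = (K @ pts_cam[in_front].T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)

    # Keep projections that land inside the image and read their CNN label.
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(in_front)[inside]
    labels[idx] = label_mask[v[inside], u[inside]]
    return labels

In a full pipeline, an occlusion check (for example a depth buffer) and fusion of labels across overlapping images would be needed before the labeled defect regions are extracted and their areas quantified in Component 5.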





All items in UWSpace are protected by copyright, with all rights reserved.
