
Finding Specific Industrial Objects in Point Clouds using Machine Learning and Procedural Scene Generation

dc.contributor.advisor: Haas, Carl
dc.contributor.advisor: Narasimhan, Sriram
dc.contributor.author: Lopez Morales, Daniel
dc.date.accessioned: 2025-01-06T18:26:08Z
dc.date.available: 2025-01-06T18:26:08Z
dc.date.issued: 2025-01-06
dc.date.submitted: 2024-11-27
dc.description.abstract: In the era of Industry 4.0 and the rise of Digital Twins (DT), the demand for enriched point cloud data has grown significantly. Point clouds allow seamless integration into Building Information Modeling (BIM) workflows, offering deeper insights into structures and enhancing the value of documentation, analysis, and asset management processes. However, several persistent challenges limit the current effectiveness of point cloud methods in industrial settings. The first major challenge is the difficulty of identifying specific objects within point clouds. Finding and labeling individual objects in a complex 3D environment is technically demanding, and manually processing these point clouds to locate specific objects is labor-intensive, time-consuming, and susceptible to human error. In large-scale industrial environments, the complexity of layouts and the volume of data make these manual methods impractical for efficient and accurate results. The second major challenge lies in the scarcity of industrial point cloud datasets needed to train machine learning-based segmentation networks. Automating point cloud enrichment through machine learning relies heavily on the availability of high-quality datasets specific to industrial applications. Unfortunately, comprehensive datasets of this kind are either unavailable or proprietary, creating a significant barrier to developing effective segmentation networks. Furthermore, the few existing datasets often lack flexibility, as they cover only the areas that have been scanned. This rigidity, combined with the time-consuming process of manually segmenting data, slows the development and deployment of scalable machine-learning solutions for point cloud segmentation. These limitations highlight the need for more flexible and adaptive solutions to efficiently address object detection, asset tracking, and inventory management in dynamic industrial scenarios.
This research addresses these challenges by developing open-access, weight-balanced class datasets specifically designed for 3D point cloud segmentation in industrial environments. The datasets integrate synthetic data with real-world industrial scans, offering a solution to the problem of imbalanced class distributions, which often hinders neural network accuracy. Two synthetic-dataset methodologies were developed: the first places objects randomly, while the second uses a procedural generation pipeline with rules for object placement and for generating tube structures typical of industrial elements, populating each scene with objects of varied geometric features to study which factors make a dataset realistic. This procedural generation technique provides a flexible method for dataset creation that can be adapted to different objects, point cloud scales, point densities, and noise levels. The resulting datasets improve the generalization capabilities of machine learning models, making them more robust in identifying and segmenting objects within industrial settings. The second contribution of the research is a methodology for efficiently and accurately identifying specific objects in point cloud scenes, which is crucial for applications including object detection, pose estimation, and asset tracking. Traditional methods struggle with generalization, often failing to differentiate between unique objects and general classes. The proposed specific-object-finding methodology uses a point transformer network for point cloud segmentation and a fully convolutional geometric features network that incorporates color to enhance geometric features.
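The rule-based placement described above can be illustrated with a minimal sketch. The class names, scene dimensions, and minimum-spacing rule below are illustrative assumptions for a rejection-sampling placement rule, not the actual PointCloudForge pipeline:

```python
import random

# Hypothetical sketch of rule-based object placement for a synthetic scene.
# A placement is accepted only if it keeps a minimum gap to every object
# already placed; otherwise a new random position is tried.

def place_objects(classes, scene_size=(20.0, 20.0), min_gap=1.5,
                  tries=200, seed=42):
    """Place one 2D footprint per requested class, rejecting crowded spots."""
    rng = random.Random(seed)
    placed = []  # list of (class_name, x, y)
    for name in classes:
        for _ in range(tries):
            x = rng.uniform(0.0, scene_size[0])
            y = rng.uniform(0.0, scene_size[1])
            # Rule: keep every object at least `min_gap` from all others.
            if all((x - px) ** 2 + (y - py) ** 2 >= min_gap ** 2
                   for _, px, py in placed):
                placed.append((name, x, y))
                break
    return placed

# Illustrative class list; a real pipeline would draw from object libraries
# and also vary scale, density, and noise per scene.
scene = place_objects(["tank", "pump", "valve", "tube_run"] * 5)
```

Varying `min_gap`, the class mix, and the scene extent between generated scenes is one simple way such a pipeline can trade off clutter against realism.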
A key innovation in this process is the use of a color-based iterative closest point (ICP) algorithm on the output of the fully convolutional geometric features network. This enables precise matching of segmented objects against a point cloud template, ensuring accurate object identification.
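The idea of color-assisted ICP matching can be sketched in a single iteration: correspondences are chosen by a combined geometric-plus-color distance, then a rigid transform is fit with the Kabsch (SVD) solution. The distance weighting and toy cube data are illustrative assumptions, not the thesis implementation, which operates on learned geometric features:

```python
import numpy as np

def color_icp_step(src_xyz, src_rgb, tgt_xyz, tgt_rgb, color_weight=10.0):
    """One correspondence + alignment step; returns rotation R, translation t."""
    # Combined cost: squared geometric distance plus weighted color distance,
    # so similarly colored points are preferred as matches.
    geo = ((src_xyz[:, None, :] - tgt_xyz[None, :, :]) ** 2).sum(-1)
    col = ((src_rgb[:, None, :] - tgt_rgb[None, :, :]) ** 2).sum(-1)
    matched = tgt_xyz[(geo + color_weight * col).argmin(axis=1)]
    # Kabsch: best rigid transform mapping src onto its matched points.
    mu_s, mu_t = src_xyz.mean(0), matched.mean(0)
    H = (src_xyz - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy check: a cube template rotated 30 degrees about z, with corner
# coordinates reused as distinctive colors that disambiguate the matches.
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
corners = np.array([[x, y, z] for x in (0.0, 1.0)
                              for y in (0.0, 1.0)
                              for z in (0.0, 1.0)])
R, t = color_icp_step(corners @ Rz.T, corners, corners, corners)
```

With distinctive colors the correspondences are exact, so the recovered `R` is the inverse of the applied rotation; in practice the step would be iterated until the alignment residual converges.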
dc.identifier.uri: https://hdl.handle.net/10012/21311
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.relation.uri: https://github.com/danielflopez1/PointCloudForge
dc.subject: synthetic datasets
dc.subject: dataset generation
dc.subject: procedurally generated dataset
dc.subject: specific object finding
dc.subject: machine learning
dc.subject: industrial dataset
dc.title: Finding Specific Industrial Objects in Point Clouds using Machine Learning and Procedural Scene Generation
dc.type: Doctoral Thesis
uws-etd.degree: Doctor of Philosophy
uws-etd.degree.department: Civil and Environmental Engineering
uws-etd.degree.discipline: Civil Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo.terms: 0
uws.contributor.advisor: Haas, Carl
uws.contributor.advisor: Narasimhan, Sriram
uws.contributor.affiliation1: Faculty of Engineering
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text

Files

Original bundle
Name: LopezMorales_Daniel.pdf
Size: 6.16 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission