Vision-based Self-Supervised Depth Perception and Motion Control for Mobile Robots
dc.contributor.author | Fan, Xiule | |
dc.date.accessioned | 2022-07-18T14:42:06Z | |
dc.date.available | 2022-07-18T14:42:06Z | |
dc.date.issued | 2022-07-18 | |
dc.date.submitted | 2022-07-08 | |
dc.description.abstract | Advances in robotics have opened up many opportunities to deploy mobile robots in various settings. However, many current mobile robots carry a suite of multiple sensor types, and the cost of this sensor suite, together with the computational complexity of fully exploiting it, may limit large-scale deployment. Recent developments in computer vision have made it possible to complete various robotic tasks with camera systems alone. This thesis focuses on two problems related to vision-based mobile robots: depth perception and motion control. Commercially available stereo cameras relying on traditional stereo matching algorithms are widely used in robotic applications to obtain depth information. Although their raw (predicted) disparity maps may contain incorrect estimates, they can still provide useful prior information towards more accurate predictions. We propose a data-driven pipeline that incorporates the raw disparity to predict high-quality disparity maps. The pipeline first uses a confidence generation component to identify inaccuracies in the raw disparity. A deep neural network, consisting of a feature extraction module, a confidence-guided raw disparity fusion module, and a hierarchical occlusion-aware disparity refinement module, then computes the final disparity estimates and their corresponding occlusion masks. The pipeline can be trained in a self-supervised manner, removing the need for expensive ground-truth training labels. Experimental results on public datasets show that the pipeline achieves competitive accuracy at a real-time processing rate. The pipeline is also tested on images captured by commercial stereo cameras to demonstrate its effectiveness in improving their raw disparity estimates. The predicted disparity maps are then used by a proposed disparity-based direct visual servoing controller to compute the commanded velocity that moves a mobile robot towards its target pose. Many previous visual servoing methods rely on complex and error-prone feature extraction and matching steps. The proposed framework follows the direct visual servoing approach, which requires no extraction or matching, so its performance is not affected by errors introduced by these steps. Furthermore, the predicted occlusion masks are incorporated into the controller to address the occlusion problem inherent in a stereo camera setup. The performance of the proposed control strategy is verified through extensive simulations and experiments. | en |
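The abstract describes two components. The sketches below are illustrative only, not the thesis code. First, a minimal skeleton of a confidence-guided disparity refinement network in the spirit of the described pipeline, assuming a PyTorch implementation; the class name, channel counts, and layer choices are placeholder assumptions.

```python
import torch
import torch.nn as nn

class DisparityRefinementNet(nn.Module):
    """Hypothetical sketch: feature extraction, then fusion of image
    features with the raw disparity and its confidence map, producing a
    refined disparity and an occlusion mask probability."""
    def __init__(self, feat_ch=32):
        super().__init__()
        # Feature extractor applied to the input image.
        self.features = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion head: image features + raw disparity + confidence in,
        # refined disparity and occlusion logit out.
        self.fusion = nn.Sequential(
            nn.Conv2d(feat_ch + 2, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 2, 3, padding=1),
        )

    def forward(self, image, raw_disparity, confidence):
        # image: (B,3,H,W); raw_disparity, confidence: (B,1,H,W)
        f = self.features(image)
        out = self.fusion(torch.cat([f, raw_disparity, confidence], dim=1))
        return out[:, :1], torch.sigmoid(out[:, 1:])  # disparity, occlusion mask
```

Second, a minimal sketch of a disparity-based direct visual servoing step, assuming the classical image-based control law v = -lambda * L^+ * e with the stacked per-pixel disparity error as the feature error; the construction of the interaction matrix L is thesis-specific and is taken here as an input.

```python
import numpy as np

def dvs_velocity(disparity, target_disparity, valid_mask, L, gain=0.5):
    """One direct-visual-servoing step: v = -gain * pinv(L) @ e.

    disparity, target_disparity: (H, W) current and desired disparity maps.
    valid_mask: (H, W) boolean, False where the occlusion mask flags a pixel.
    L: (H*W, 6) interaction matrix (feature Jacobian) relating disparity
       changes to the 6-DoF camera twist; not constructed in this sketch.
    """
    m = valid_mask.ravel()
    e = (disparity - target_disparity).ravel()[m]  # masked feature error
    return -gain * np.linalg.pinv(L[m]) @ e        # commanded twist (vx..wz)
```

Dropping the occluded rows of both L and the error vector before the pseudo-inverse is one plausible way the predicted occlusion masks could enter the controller: occluded pixels simply do not contribute to the least-squares velocity solution.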
dc.identifier.uri | http://hdl.handle.net/10012/18449 | |
dc.language.iso | en | en |
dc.pending | false | |
dc.publisher | University of Waterloo | en |
dc.subject | mobile robots | en |
dc.subject | deep learning | en |
dc.subject | computer vision | en |
dc.subject | motion control | en |
dc.title | Vision-based Self-Supervised Depth Perception and Motion Control for Mobile Robots | en |
dc.type | Master Thesis | en |
uws-etd.degree | Master of Applied Science | en |
uws-etd.degree.department | Mechanical and Mechatronics Engineering | en |
uws-etd.degree.discipline | Mechanical Engineering | en |
uws-etd.degree.grantor | University of Waterloo | en |
uws-etd.embargo.terms | 0 | en |
uws.contributor.advisor | Fidan, Baris | |
uws.contributor.advisor | Jeon, Soo | |
uws.contributor.affiliation1 | Faculty of Engineering | en |
uws.peerReviewStatus | Unreviewed | en |
uws.published.city | Waterloo | en |
uws.published.country | Canada | en |
uws.published.province | Ontario | en |
uws.scholarLevel | Graduate | en |
uws.typeOfResource | Text | en |