 

Mobile Robot Positioning via Visual and Inertial Sensor Fusion

dc.contributor.author: Reginald, Niraj Niranjan
dc.date.accessioned: 2024-05-16T20:36:12Z
dc.date.issued: 2024-05-16
dc.date.submitted: 2024-05-10
dc.description.abstract: A fundamental prerequisite for a mobile robot is the ability to accurately localize itself in a given environment. Accurate localization information is vital for a mobile robotic agent, since modules such as motion planning and control rely upon it. Global Navigation Satellite Systems (GNSS) are a popular means of obtaining the geolocation of a robot in outdoor environments. However, GNSS can be unreliable in environments where satellite signals struggle to penetrate, such as urban canyons, indoor environments, tunnels, and underground infrastructure. Therefore, localization by means of other sensory measurements and techniques is a requirement. The main purpose of this research is to develop an accurate robot localization system via multi-sensor fusion of available sensory information such as visual, inertial, and wheel encoder measurements. As a solution to this requirement, the fusion of monocular visual, inertial, and wheel encoder measurements has recently gained immense interest as a robot odometry and localization approach that overcomes the effects of navigation system uncertainties and variations. However, the wheel encoder measurements fused into the visual-inertial wheel odometry (WO) system in this approach can be faulty, mainly due to wheel slippage and other inherent errors. This thesis proposes a strategy for compensating wheel slip effects, based on a differential drive robot kinematics model. We use Gaussian process regression to learn the error between the WO model and the ground truth for a set of training sequences. A deep kernel is constructed leveraging long short-term memory (LSTM) networks to capture the sequential correlations of the odometry error residual. The learned WO error information is then used on the test sequences to correct the errors in WO. The corrected WO measurements are then utilized in a multi-state constraint Kalman filter based robot state estimation scheme. The enhancement is demonstrated via simulation experiments based on real-world data sets and indoor experimental evaluations using a test platform mobile robot. In addition, the visual measurements are corrected via a feature point confidence estimator designed to discard dynamic features in the feature matching process and, subsequently, in motion estimation. The estimator design comprises estimating the fundamental matrix using an inertial measurement unit to geometrically verify the match confidence of visual key-points. Simulation results based on real-world data sets confirm the improved accuracy of the overall designed localization scheme.
dc.identifier.uri: http://hdl.handle.net/10012/20568
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: multi-state constrained Kalman filter
dc.subject: visual inertial wheel odometry
dc.subject: wheel slip compensation
dc.subject: dynamic point elimination
dc.title: Mobile Robot Positioning via Visual and Inertial Sensor Fusion
dc.type: Doctoral Thesis
uws-etd.degree: Doctor of Philosophy
uws-etd.degree.department: Mechanical and Mechatronics Engineering
uws-etd.degree.discipline: Mechanical Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo: 2025-05-16T20:36:12Z
uws-etd.embargo.terms: 1 year
uws.contributor.advisor: Fidan, Baris
uws.contributor.advisor: Hashemi, Ehsan
uws.contributor.affiliation1: Faculty of Engineering
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
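
The wheel odometry correction described in the abstract above builds on a differential-drive kinematics model whose pose estimate is later adjusted by a learned error term. The following is a minimal, illustrative Python sketch of such a model with a hook for subtracting a learned odometry-error estimate; it is not the thesis implementation, and the function name, wheel parameters, and error_correction term are assumptions introduced for illustration only.

    # Illustrative sketch (not the thesis code): dead-reckoning pose propagation
    # for a differential-drive robot from wheel encoder increments, with a hook
    # for subtracting a learned wheel-odometry error estimate (e.g., the mean
    # prediction of a GP/LSTM residual model as described in the abstract).
    import numpy as np

    def wheel_odometry_step(pose, d_left, d_right, track_width,
                            error_correction=(0.0, 0.0)):
        """Propagate (x, y, theta) by one encoder step of a differential-drive robot.

        d_left, d_right  : travelled distances of the left/right wheels [m]
        track_width      : distance between the wheel contact points [m]
        error_correction : optional (translation, rotation) correction terms,
                           hypothetical placeholders for a learned error model
        """
        x, y, theta = pose
        d_center = 0.5 * (d_left + d_right) - error_correction[0]          # translation increment
        d_theta = (d_right - d_left) / track_width - error_correction[1]   # rotation increment
        # Midpoint integration of the unicycle model
        x += d_center * np.cos(theta + 0.5 * d_theta)
        y += d_center * np.sin(theta + 0.5 * d_theta)
        theta += d_theta
        return np.array([x, y, theta])

    # Example: one ~10 cm step with a slight right turn and a small slip correction
    pose = np.zeros(3)
    pose = wheel_odometry_step(pose, d_left=0.102, d_right=0.098,
                               track_width=0.35, error_correction=(0.001, 0.0))
    print(pose)

In the thesis, the corrected wheel odometry of this kind is one measurement stream fused with visual and inertial data in the multi-state constraint Kalman filter; the sketch only shows where a learned correction would enter the kinematic propagation.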

Files

License bundle
Name: license.txt
Size: 6.4 KB
Description: Item-specific license agreed upon to submission