Mobile Robot Positioning via Visual and Inertial Sensor Fusion


Date

2024-05-16

Authors

Reginald, Niraj Niranjan

Publisher

University of Waterloo

Abstract

A fundamental prerequisite for a mobile robot is the ability to accurately localize itself in a given environment. Accurate localization is vital because modules such as motion planning and control rely on it. Global Navigation Satellite Systems (GNSS) are a popular means of obtaining a robot's geolocation outdoors. However, GNSS can be unreliable wherever satellite signals struggle to penetrate, such as urban canyons, indoor environments, tunnels, and underground infrastructure. Localization by means of other sensory measurements and techniques is therefore a requirement. The main purpose of this research is to develop an accurate robot localization system via multi-sensor fusion of the available sensory information, namely visual, inertial, and wheel encoder measurements. The fusion of monocular visual, inertial, and wheel encoder measurements has recently gained considerable interest as an odometry and localization approach that overcomes the effects of navigation system uncertainties and variations. However, the wheel odometry (WO) derived from encoder measurements in such a visual-inertial-wheel odometry system can be faulty, mainly due to wheel slippage and other inherent errors.

This thesis proposes a strategy for compensating wheel slip effects based on a differential drive robot kinematics model. Gaussian process regression is used to learn the error between the WO model and the ground truth over a set of training sequences, with a deep kernel constructed from long short-term memory (LSTM) networks to capture the sequential correlations of the odometry error residuals. The learned WO error model is then applied to the test sequences to correct the WO, and the corrected WO measurements are used in a multi-state-constraint Kalman filter (MSCKF) based robot state estimation scheme. The enhancement is demonstrated via simulation experiments based on real-world data sets and indoor experimental evaluations on a test platform mobile robot. In addition, the visual measurements are refined via a feature point confidence estimator that discards dynamic features during feature matching and the subsequent motion estimation. The estimator design comprises estimating the fundamental matrix using inertial measurement unit data to geometrically verify the confidence of matched visual keypoints. Simulation results based on real-world data sets confirm the improved accuracy of the overall localization scheme.
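For intuition, the sketch below shows the standard differential drive dead-reckoning step that a WO model of this kind builds on. The slip_gain parameter is a hypothetical hook where a learned slip correction could enter; it is not the interface used in the thesis.

```python
import numpy as np

def wheel_odometry_step(pose, d_left, d_right, track_width, slip_gain=(1.0, 1.0)):
    """Propagate a planar pose (x, y, theta) from incremental wheel travel.

    Differential drive dead reckoning; slip_gain holds per-wheel correction
    factors (1.0 = no slip) that a learned slip model could supply.
    """
    x, y, theta = pose
    # Apply (hypothetical) slip correction to the raw encoder increments.
    d_left *= slip_gain[0]
    d_right *= slip_gain[1]
    d_center = 0.5 * (d_left + d_right)          # forward travel of body frame
    d_theta = (d_right - d_left) / track_width   # heading change
    # Mid-point integration of the unicycle model.
    x += d_center * np.cos(theta + 0.5 * d_theta)
    y += d_center * np.sin(theta + 0.5 * d_theta)
    theta += d_theta
    return np.array([x, y, theta])
```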
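The deep-kernel Gaussian process described in the abstract can be pictured, under assumptions, as an RBF kernel evaluated on LSTM embeddings of short odometry windows. The PyTorch sketch below is illustrative only: the layer sizes, the RBF form, and the names LSTMDeepKernel and gp_posterior_mean are assumptions, not the thesis's implementation.

```python
import torch
import torch.nn as nn

class LSTMDeepKernel(nn.Module):
    """RBF kernel evaluated on LSTM embeddings of odometry windows (a sketch)."""
    def __init__(self, input_dim=6, hidden_dim=32, feat_dim=8):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, feat_dim)
        self.log_lengthscale = nn.Parameter(torch.zeros(()))

    def embed(self, x):                       # x: (batch, seq_len, input_dim)
        _, (h, _) = self.lstm(x)              # final hidden state summarizes the window
        return self.head(h[-1])               # (batch, feat_dim)

    def forward(self, x1, x2):
        z1, z2 = self.embed(x1), self.embed(x2)
        d2 = torch.cdist(z1, z2).pow(2)       # pairwise squared distances in feature space
        return torch.exp(-0.5 * d2 / torch.exp(self.log_lengthscale) ** 2)

def gp_posterior_mean(kernel, x_train, y_train, x_test, noise=1e-2):
    """Closed-form GP regression mean: K_*x (K_xx + sigma^2 I)^-1 y."""
    K = kernel(x_train, x_train) + noise * torch.eye(x_train.shape[0])
    K_star = kernel(x_test, x_train)
    return K_star @ torch.linalg.solve(K, y_train)
```

Training such a model would fit the LSTM weights and kernel hyperparameters to the residual between WO and ground truth; at test time, the posterior mean serves as the WO correction.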
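An MSCKF maintains a sliding window of camera poses and applies measurement null-space projections beyond the scope of a short sketch. The fragment below shows only the generic EKF-style measurement update through which a corrected WO reading would enter a filter; all names are hypothetical.

```python
import numpy as np

def ekf_update(x, P, z, z_pred, H, R):
    """Generic EKF measurement update (x: state mean, P: covariance,
    z: measurement, z_pred: predicted measurement, H: Jacobian, R: noise cov)."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - z_pred)               # state correction
    P = (np.eye(len(x)) - K @ H) @ P       # covariance update
    return x, P
```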
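The IMU-aided geometric check on matched keypoints can be sketched as follows: predict the essential matrix from IMU-propagated motion, map it to a fundamental matrix with the camera intrinsics, and score each match by its epipolar error. The Sampson-distance test and the threshold below are illustrative assumptions, not the thesis's exact confidence estimator.

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix such that skew(t) @ v == cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_confidence(pts1, pts2, R_imu, t, K, thresh=1.0):
    """Flag feature matches consistent with IMU-predicted camera motion.

    pts1, pts2: (N, 2) pixel coordinates of matched keypoints.
    R_imu, t:   rotation and translation direction between the two frames,
                here assumed to come from IMU propagation.
    K:          camera intrinsic matrix.
    Returns a boolean mask; matches violating the epipolar constraint
    (e.g. points on moving objects) are rejected.
    """
    E = skew(t) @ R_imu                              # essential matrix from motion
    K_inv = np.linalg.inv(K)
    F = K_inv.T @ E @ K_inv                          # fundamental matrix
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])                     # homogeneous pixel coords
    x2 = np.hstack([pts2, ones])
    # Sampson distance: first-order geometric error of x2^T F x1 = 0.
    Fx1 = (F @ x1.T).T
    Ftx2 = (F.T @ x2.T).T
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den < thresh**2
```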

Keywords

multi-state constrained Kalman filter, visual inertial wheel odometry, wheel slip compensation, dynamic point elimination
