|dc.description.abstract||Traditional multi-camera systems require a fixed calibration between cameras to provide a solution at the correct scale, which places many limitations on their performance. This thesis investigates the calibration of dynamic camera clusters (DCCs), where one or more of the cluster cameras is mounted on an actuated mechanism, such as a gimbal or robotic manipulator. Our novel calibration approach parameterizes the actuated mechanism using the Denavit-Hartenberg convention, then determines the calibration parameters which allow for the estimation of the time-varying extrinsic transformations between the static and dynamic camera frames. A degeneracy analysis is also presented, which identifies redundant parameters of the DCC calibration system.
In order to automate the calibration process, this thesis also presents two information-theoretic methods which select the optimal calibration viewpoints using a next-best-view strategy. The first method minimizes the entropy of the calibration parameters, while the second selects the viewpoints which maximize the mutual information between the joint angle input and the calibration parameters.
Finally, the effective selection of key-frames is an essential aspect of robust visual navigation algorithms, as it ensures metrically consistent mapping solutions while reducing the computational complexity of the bundle adjustment process. To that end, we propose two entropy-based methods which aim to insert key-frames that will directly improve the system's ability to localize. The first approach inserts key-frames based on the cumulative point entropy reduction in the existing map, while the second uses the predicted point flow discrepancy to select key-frames which best initialize new features for the camera to track against in the future.
The DCC calibration methods are verified both in simulation and on physical hardware consisting of a 5-DOF Fanuc manipulator and a 3-DOF Aeryon SkyRanger gimbal. We demonstrate that the proposed methods achieve high-quality calibrations, as measured by RMSE pixel error as well as through analysis of the estimator covariance matrix. The key-frame insertion methods are implemented within the Multi-Camera Parallel Tracking and Mapping (MCPTAM) framework, and we confirm the effectiveness of these approaches using high-quality ground truth collected with an indoor positioning system.