Many fields of application could benefit from an accurate registration of measurements of different modalities over a known 3D model. However, aligning a 2D image to a 3D model is a challenging task, and it becomes even more complex when the two differ in modality. Most 2D/3D registration methods rely on either geometric or dense visual features.
Each feature type has its own advantages and drawbacks. In this paper, we propose to mutually exploit the advantages of one feature type to reduce the drawbacks of the other. To this end, a hybrid registration framework has been designed that mutually aligns geometric and dense visual features in order to obtain an accurate final 2D/3D alignment. We evaluate and compare the proposed registration method on real data acquired by a robot equipped with several visual sensors. The results highlight the robustness of the method, its wide convergence domain, and its high registration accuracy.
This work was presented at the 2018 IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia, and is included in the conference proceedings: Nathan Crombez, Ralph Seulin, Olivier Morel, David Fofi and Cédric Demonceaux, "Multimodal 2D Image to 3D Model Registration via a Mutual Alignment of Sparse and Dense Visual Features," International Conference on Robotics and Automation (ICRA), 2018.