
Recent Advances in Electrical & Electronic Engineering

ISSN (Print): 2352-0965
ISSN (Online): 2352-0973

Research Article

Illumination Robust Loop Closure Detection with the Constraint of Pose

Author(s): Yan Deli*, Tuo Wenkun, Wang Weiming and Li Shaohua

Volume 13, Issue 7, 2020

Pages: 1097-1106 (10 pages)

DOI: 10.2174/2352096513999200422141150

Abstract

Background: Loop closure detection is a crucial component of robot navigation and simultaneous localization and mapping (SLAM). Appearance-based loop closure detection still faces many challenges, such as illumination changes, perceptual aliasing, and growing computational complexity.

Methods: In this paper, we propose a visual loop closure detection algorithm that combines the illumination-robust descriptor DIRD with odometry information. The algorithm builds a new distance function by fusing the Euclidean and Mahalanobis distance functions; this function incorporates the pose uncertainty of the robot body and dynamically adjusts the threshold for selecting potential loop closure locations. Potential locations are then verified by computing the similarity of their DIRD descriptors.
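To make the gating step concrete, the following is a minimal sketch of the idea described above, not the authors' implementation. It assumes 2-D positions, a per-pose covariance matrix supplied by the odometry, and a blending weight alpha; all names and numeric values are hypothetical.

```python
# Hedged sketch of fused-distance candidate gating (assumed details,
# not the paper's exact formulation).
import numpy as np

def fused_distance(p_i, p_j, cov_j, alpha=0.5):
    """Blend Euclidean and Mahalanobis distance between two positions."""
    d = p_i - p_j
    d_euc = np.linalg.norm(d)
    d_mah = np.sqrt(d @ np.linalg.inv(cov_j) @ d)  # weights by pose uncertainty
    return alpha * d_euc + (1.0 - alpha) * d_mah

def candidate_loops(positions, covs, base_thr=5.0, min_gap=50):
    """Return index pairs whose fused distance falls below a threshold
    inflated by the accumulated pose uncertainty, so odometry drift
    does not prune true loop closures."""
    pairs = []
    for j in range(len(positions)):
        thr = base_thr + np.sqrt(np.trace(covs[j]))  # dynamic threshold
        for i in range(0, j - min_gap):              # skip recent frames
            if fused_distance(positions[i], positions[j], covs[j]) < thr:
                pairs.append((i, j))
    return pairs
```

Pairs that pass this gate would then be verified in a second stage by comparing their DIRD descriptors, e.g. with a normalized distance between descriptor vectors.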

Results: The proposed algorithm is evaluated on the KITTI and EuRoC datasets and compared with SeqSLAM, one of the state-of-the-art loop closure detection algorithms. The results show that the proposed algorithm effectively reduces computing time and achieves better performance on the precision-recall (P-R) curve.
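For context, a P-R curve for loop closure detection is typically traced by sweeping a decision threshold over the pairwise similarity scores. A minimal sketch, with hypothetical array names standing in for the dataset's similarity scores and ground-truth loop labels:

```python
# Hedged sketch of P-R curve computation for loop closure evaluation.
import numpy as np

def pr_points(scores, gt, thresholds):
    """scores: similarity per candidate pair; gt: boolean ground truth."""
    pts = []
    for t in thresholds:
        det = scores >= t
        tp = np.sum(det & gt)    # correctly detected loops
        fp = np.sum(det & ~gt)   # false alarms
        fn = np.sum(~det & gt)   # missed loops
        precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        pts.append((precision, recall))
    return pts
```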

Conclusion: The new loop closure detection method makes full use of odometry information and image appearance information. Applying the new distance function effectively reduces missed detections caused by accumulated odometry error. The algorithm requires neither image feature extraction nor a learning stage, and can run in real time on platforms with limited computational power.

Keywords: Simultaneous localization and mapping (SLAM), illumination robust, visual inertial odometry (VIO), loop closure candidate area, pose constraint.

References
[1]
J. Gui, D. Gu, and S. Wang, "A review of visual inertial odometry from filtering and optimisation perspectives", Adv. Robot., vol. 29, no. 20, pp. 1289-1301, 2015.
[http://dx.doi.org/10.1080/01691864.2015.1057616]
[2]
H. Durrant-Whyte, and T. Bailey, "Simultaneous localization and mapping: Part I", IEEE Robot. Autom. Mag., vol. 13, no. 2, pp. 99-110, 2006.
[http://dx.doi.org/10.1109/MRA.2006.1638022]
[3]
F. Chaumette, P. Corke, and P. Newman, "Editorial: Special issue on robotic vision", Int. J. Robot. Res., vol. 29, pp. 131-132, 2010.
[http://dx.doi.org/10.1177/0278364909359118]
[4]
G. Hager, M. Hebert, and S. Hutchinson, "Editorial: Special issue on vision and robotics, Parts I and II", Int. J. Comput. Vis., vol. 74, pp. 217-218, 2007.
[http://dx.doi.org/10.1007/s11263-007-0065-9]
[5]
J. Neira, A.J. Davison, and J.J. Leonard, "Guest editorial special issue on visual SLAM", IEEE Trans. Robot., vol. 24, pp. 929-931, 2008.
[http://dx.doi.org/10.1109/TRO.2008.2004620]
[6]
J.H. Lee, G. Zhang, and J. Lim, "Place recognition using straight lines for vision-based SLAM", In: 2013 IEEE International Conference on Robotics and Automation (ICRA), 2013.
[7]
R. Paul, and P. Newman, "FAB-MAP 3D: Topological mapping with spatial and visual appearance", Proc. IEEE Int. Conf. Robot. Autom., 2010, pp. 2649-2656.
[http://dx.doi.org/10.1109/ROBOT.2010.5509587]
[8]
P. Newman, G. Sibley, M. Smith, M. Cummins, A. Harrison, C. Mei, I. Posner, R. Shade, D. Schroeter, L. Murphy, W. Churchill, D. Cole, and I. Reid, "Navigating, recognizing and describing urban spaces with vision and lasers", Int. J. Robot. Res., vol. 28, no. 11-12, pp. 1406-1433, 2009.
[http://dx.doi.org/10.1177/0278364909341483]
[9]
M. Cummins, and P. Newman, "Appearance-only SLAM at large scale with FAB-MAP 2.0", Int. J. Robot. Res., vol. 30, no. 9, pp. 1100-1123, 2011.
[http://dx.doi.org/10.1177/0278364910385483]
[10]
M. Labbe, and F. Michaud, "Appearance-based loop closure detection for online large-scale and long-term operation", IEEE Trans. Robot., vol. 29, no. 3, pp. 734-745, 2013.
[http://dx.doi.org/10.1109/TRO.2013.2242375]
[11]
M.J. Milford, and G.F. Wyeth, "SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights", In: 2012 IEEE International Conference on Robotics and Automation (ICRA), 2012.
[http://dx.doi.org/10.1109/ICRA.2012.6224623]
[12]
H. Lategahn, J. Beck, and C. Stiller, "DIRD is an illumination robust descriptor", In: 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 2014, pp. 756-761.
[http://dx.doi.org/10.1109/IVS.2014.6856421]
[13]
S. Lowry, et al., "Visual place recognition: A survey", IEEE Trans. Robot., vol. 32, no. 1, pp. 1-19, 2016.
[http://dx.doi.org/10.1109/TRO.2015.2496823]
[14]
W. Maddern, M. Milford, and G. Wyeth, "CAT-SLAM: Probabilistic localisation and mapping using a continuous appearance-based trajectory", Int. J. Robot. Res., vol. 31, no. 4, pp. 429-451, 2012.
[http://dx.doi.org/10.1177/0278364912438273]
[15]
J. Sivic, and A. Zisserman, "Video Google: A text retrieval approach to object matching in videos", Proc. IEEE Int. Conf. Comput. Vis., vol. 2, pp. 1470-1477, 2003.
[http://dx.doi.org/10.1109/ICCV.2003.1238663]
[16]
M. Cummins, and P. Newman, "FAB-MAP: Probabilistic localization and mapping in the space of appearance", Int. J. Robot. Res., vol. 27, no. 6, pp. 647-665, 2008.
[http://dx.doi.org/10.1177/0278364908090961]
[17]
M. Cummins, and P. Newman, "Highly scalable appearance-only SLAM - FAB-MAP 2.0", Robot. Sci. Syst. Conf., Seattle, WA, USA, 2009.
[18]
S. Khan, and D. Wollherr, "IBuILD: Incremental bag of binary words for appearance-based loop closure detection", IEEE Int. Conf. Robot. Autom., 2015, pp. 5441-5447.
[http://dx.doi.org/10.1109/ICRA.2015.7139959]
[19]
E. Garcia-Fidalgo, and A. Ortiz, "iBoW-LCD: An appearance-based loop closure detection approach using incremental bags of binary words", IEEE Robot. Autom. Lett., 2018.
[http://dx.doi.org/10.1109/LRA.2018.2849609]
[20]
N. Sünderhauf, S. Shirazi, A. Jacobson, F. Dayoub, E. Pepperell, B. Upcroft, and M. Milford, "Place recognition with ConvNet landmarks: Viewpoint-robust, condition-robust, training-free", Robot.: Sci. Syst., 2015.
[http://dx.doi.org/10.15607/RSS.2015.XI.022]
[21]
R. Arroyo, P.F. Alcantarilla, L.M. Bergasa, and E. Romera, "Fusion and binarization of CNN features for robust topological localization across seasons", IEEE/RSJ Int. Conf. Intell. Robots Syst., 2016, pp. 4656-4663.
[http://dx.doi.org/10.1109/IROS.2016.7759685]
[22]
L. Bampis, A. Amanatiadis, and A. Gasteratos, "High order visual words for structure-aware and viewpoint-invariant loop closure detection", IEEE/RSJ Int. Conf. Intell. Robots Syst., 2017, pp. 4268-4275.
[http://dx.doi.org/10.1109/IROS.2017.8206289]
[23]
D. Lowe, "Object recognition from local scale-invariant features", Proc. IEEE Int. Conf. Comput. Vis., vol. 2, pp. 1150-1157, 1999.
[24]
H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features", Proc. Eur. Conf. Comput. Vis., 2006, pp. 404-417.
[25]
M. Calonder, V. Lepetit, M. Özuysal, T. Trzcinski, C. Strecha, and P. Fua, "BRIEF: Computing a local binary descriptor very fast", IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 7, pp. 1281-1298, 2012.
[http://dx.doi.org/10.1109/TPAMI.2011.222] [PMID: 22084141]
[26]
E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF", Proc. IEEE Int. Conf. Comput. Vis., 2011, pp. 2564-2571.
[http://dx.doi.org/10.1109/ICCV.2011.6126544]
[27]
A. Oliva, and A. Torralba, "Building the gist of a scene: The role of global image features in recognition", In: Visual Perception - Fundamentals of Awareness: Multi-Sensory Integration and High-Order Perception, Elsevier: New York, NY, USA, 2006, pp. 23-36.
[http://dx.doi.org/10.1016/S0079-6123(06)55002-2]
[28]
H. Lategahn, J. Beck, and B. Kitt, "How to learn an illumination robust image feature for place recognition", IEEE Intelligent Vehicles Symposium, 2013.
[http://dx.doi.org/10.1109/IVS.2013.6629483]
[29]
E. Pepperell, P.I. Corke, and M.J. Milford, "All-environment visual place recognition with SMART", In: 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014.
[http://dx.doi.org/10.1109/ICRA.2014.6907067]
[30]
S. Ouerghi, R. Boutteau, X. Savatier, and F. Tlili, "Visual odometry and place recognition fusion for vehicle position tracking in urban environments", Sensors (Basel), vol. 18, no. 4, pp. 939-957, 2018.
[http://dx.doi.org/10.3390/s18040939] [PMID: 29565310]
[31]
A. Barrau, and S. Bonnabel, "Invariant Kalman filtering", In: 2018 International Conference on Information Fusion (FUSION), 2018.
[32]
M. Milford, and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system", Int. J. Robot. Res., vol. 29, no. 9, pp. 1131-1153, 2010.
[http://dx.doi.org/10.1177/0278364909340592]
[33]
W. Li, G. Zhang, and E.-L. Yao, "An improved loop closure detection algorithm based on the constraint from space position uncertainty", Robot, vol. 38, no. 3, pp. 301-310, 321, 2016.
[34]
A. Geiger, P. Lenz, and C. Stiller, "Vision meets robotics: The KITTI dataset", Int. J. Robot. Res., vol. 32, no. 11, pp. 1231-1237, 2013.
[http://dx.doi.org/10.1177/0278364913491297]
[35]
M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. Achtelik, and R. Siegwart, "The EuRoC micro aerial vehicle datasets", Int. J. Robot. Res., 2016.
[http://dx.doi.org/10.1177/0278364915620033]
[36]
M. Brossard, S. Bonnabel, and A. Barrau, "Invariant Kalman filtering for visual inertial SLAM", In: Proc. IEEE Int. Conf. Information Fusion (FUSION), Cambridge, UK, 2018, pp. 2021-2028.
[http://dx.doi.org/10.23919/ICIF.2018.8455807]

Rights & Permissions Print Cite
© 2024 Bentham Science Publishers | Privacy Policy