
International Journal of Sensors, Wireless Communications and Control

ISSN (Print): 2210-3279
ISSN (Online): 2210-3287

Research Article

Optimized Navigation of Mobile Robots Based on Faster R-CNN in Wireless Sensor Network

Author(s): Alagumuthukrishnan Sevugan, Periyasami Karthikeyan*, Velliangiri Sarveshwaran and Rajesh Manoharan

Volume 12, Issue 6, 2022

Published on: 11 August, 2022

Pages: 440-448 (9 pages)

DOI: 10.2174/2210327912666220714091426

Abstract

Background: In recent years, deep learning techniques have dramatically enhanced mobile robot sensing, navigation, and reasoning. Owing to advances in machine vision technology and algorithms, visual sensors have become increasingly important in mobile robot applications. However, the low computational efficiency of current neural network topologies and their limited adaptability to the requirements of robotic experimentation mean that gaps remain in deploying these techniques on real robots. Notably, AI technologies are used to address several difficulties in mobile robotics, either with vision as the sole source of information or in combination with other sensors such as lasers or GPS. Many methods have been proposed over the last few years; they build a reliable model of the environment, estimate the robot's position within it, and manage the robot's motion from one location to another.

Objective: The proposed method aims to detect objects in smart homes and offices using an optimized Faster R-CNN and to improve accuracy across different datasets.

Methods: The proposed methodology uses a novel clustering technique built on Faster R-CNN networks, a new and effective approach for detecting groups of measurements with continuous similarity. The resulting clusters are coupled with the metric information provided by the robot's distance estimates through an agglomerative hierarchical clustering algorithm. The proposed method also optimizes the region-of-interest (RoI) layers to generate optimized features.
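The abstract does not give implementation details, so the following is a minimal sketch of the pipeline it describes, assuming PyTorch/torchvision for the Faster R-CNN stage and scikit-learn (>= 1.2) for the agglomerative step. All names, the per-frame descriptor choice, and the alpha coupling weight are illustrative assumptions, not the authors' code.

    import numpy as np
    import torch
    import torchvision
    from sklearn.cluster import AgglomerativeClustering

    # Pretrained Faster R-CNN used as a detection-based feature extractor
    # (stand-in for the paper's optimized network).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    @torch.no_grad()
    def frame_descriptor(image: torch.Tensor) -> np.ndarray:
        """Summarize one camera frame as a fixed-length vector of class scores."""
        out = model([image])[0]      # dict with 'boxes', 'labels', 'scores'
        desc = np.zeros(91)          # torchvision's 91 COCO category slots
        for label, score in zip(out["labels"], out["scores"]):
            if score > 0.5:          # keep confident detections only
                desc[label.item()] += score.item()
        return desc

    def cluster_places(descriptors, odometry_xy, alpha=0.5, n_places=10):
        """Couple visual similarity with the robot's metric distance estimates,
        then group frames into places with agglomerative clustering."""
        v = np.stack(descriptors)
        p = np.asarray(odometry_xy)
        vis = np.linalg.norm(v[:, None] - v[None, :], axis=-1)  # appearance distance
        met = np.linalg.norm(p[:, None] - p[None, :], axis=-1)  # metric distance
        dist = (alpha * vis / (vis.max() + 1e-9)
                + (1 - alpha) * met / (met.max() + 1e-9))
        clu = AgglomerativeClustering(n_clusters=n_places, metric="precomputed",
                                      linkage="average")
        return clu.fit_predict(dist)  # one place label per frame

Here alpha weighs appearance against odometry distance; the paper's exact coupling rule and its RoI-layer optimization are not specified in the abstract, so both are placeholders.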

Results: The proposed approach is tested on indoor and outdoor datasets, producing topological maps that aid semantic localization. We show that the system successfully categorizes places when the robot returns to the same area, despite potential lighting variations. The developed method provides better accuracy than the VGG-19 and R-CNN methods.
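One way to quantify the revisit claim, sketched under the same assumptions as above: when the robot returns to an area under different lighting, its frames should fall into the same place clusters as on the first pass. The adjusted Rand index is a reasonable check because raw cluster labels are only defined up to a permutation; the function name is illustrative.

    from sklearn.metrics import adjusted_rand_score

    def revisit_consistency(first_pass_places, second_pass_places):
        """Returns 1.0 when the return visit reproduces the original grouping."""
        return adjusted_rand_score(first_pass_places, second_pass_places)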

Conclusion: The findings were positive, indicating that accurate place categorization can be achieved even under varying illumination by adequately designing an area's semantic map. The Faster R-CNN model shows the lowest error rate among the three evaluated models.

Keywords: Convolutional neural network, robot localization, semantic segmentation, mobile robotics, deep learning, clustering.


Rights & Permissions Print Cite
© 2024 Bentham Science Publishers | Privacy Policy