
Recent Advances in Computer Science and Communications


ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

General Research Article

Position and Pose Measurement of 3-PRS Ankle Rehabilitation Robot Based on Deep Learning

Author(s): Guoqiang Chen *, Hongpeng Zhou, Junjie Huang, Mengchao Liu and Bingxin Bai

Volume 15, Issue 2, 2022

Published on: 31 August, 2020

Page: [284 - 297] Pages: 14

DOI: 10.2174/2666255813999200831102550


Abstract

Introduction: Position and pose measurement of a rehabilitation robot plays an important role in patient rehabilitation movement, and non-contact, real-time measurement is of particular significance. Because rehabilitation training is a relatively complicated process, it is essential to monitor the robot's training process in real time and with high accuracy. Deep learning methods are well suited to monitoring the state of the rehabilitation robot.

Methods: The structural sketch and the 3D model of the 3-PRS ankle rehabilitation robot are established, and the mechanism kinematics is analyzed to obtain the relationship between the driving inputs (the three slider heights) and the position and pose parameters. The overall measurement network comprises two stages: (1) measuring the slider heights with a convolutional neural network (CNN) applied to the robot image, and (2) calculating the position and pose parameters with a backpropagation neural network (BPNN) fed the slider heights measured by the CNN. Because the slider heights vary continuously, a CNN with a regression output is proposed and established to measure them. A BPNN trained on data generated by the inverse kinematics of the 3-PRS ankle rehabilitation robot then solves the forward kinematics for the position and pose.
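
The second stage described above can be sketched in code. The snippet below is a minimal, hypothetical illustration only: a one-hidden-layer BPNN with a sigmoid activation, trained by plain backpropagation on input–output pairs produced by `toy_forward_kinematics`, a stand-in for the real 3-PRS inverse-kinematics model (which is not reproduced here). Network sizes, learning rate, and the toy mapping are all assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forward_kinematics(h):
    # Hypothetical smooth mapping: slider heights (3,) -> pose parameters (2,).
    # Stands in for the robot's kinematic model used to generate training pairs.
    return np.array([np.sin(h).sum(), np.cos(h).sum()])

# Generate training data (in the paper, pairs come from the robot's inverse kinematics).
H = rng.uniform(0.0, 1.0, size=(200, 3))          # slider-height samples
Y = np.array([toy_forward_kinematics(h) for h in H])  # corresponding pose parameters

# One-hidden-layer BPNN: 3 inputs -> 16 sigmoid units -> 2 linear outputs.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def forward(X):
    A = sigmoid(X @ W1 + b1)   # hidden activations
    return A, A @ W2 + b2      # hidden layer, network output

_, pred0 = forward(H)
loss0 = np.mean((pred0 - Y) ** 2)

lr = 0.1
for _ in range(2000):
    A, pred = forward(H)
    err = 2.0 * (pred - Y) / len(H)      # dMSE/dPred
    gW2 = A.T @ err; gb2 = err.sum(0)    # output-layer gradients
    dA = err @ W2.T * A * (1.0 - A)      # backprop through sigmoid
    gW1 = H.T @ dA; gb1 = dA.sum(0)      # hidden-layer gradients
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred1 = forward(H)
loss1 = np.mean((pred1 - Y) ** 2)
print(f"MSE before/after training: {loss0:.4f} -> {loss1:.4f}")
```

In the paper's pipeline, the first stage (the regression CNN) would supply the slider heights `h` from a robot image; here they are sampled directly, since the image data and CNN architecture are outside the scope of this sketch.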

Results: The experimental results show that the regression CNN measures the slider heights and the BPNN then accurately computes the corresponding position and pose, so the position and pose parameters are obtained directly from the robot image. Compared with traditional robot position and pose measurement methods, the proposed method has significant advantages.

Conclusion: The proposed position and pose measurement method for the 3-PRS ankle rehabilitation robot not only reduces experimental time and cost but also offers excellent timeliness and precision. The approach can help medical staff monitor the status of the rehabilitation robot and assist patients in rehabilitation training.

Discussion: The goal of this work is to construct a new position and pose detection network that combines a regression CNN with a BPNN. The main contribution is real-time measurement of the position and pose of the 3-PRS ankle rehabilitation robot, which improves measurement accuracy and the efficiency of medical staff.

Keywords: Position and pose, rehabilitation robot, regression CNN, BPNN, position and pose measurement, rehabilitation training.

