A Novel Multi-Person Pose Estimation Using in Indoor Fitness
Theory and Practice of Science and Technology, Vol. 4, Issue 1, Pages: 97-111 (2023)
DOI: 10.47297/taposatWSP2633-456916.20230401
Sichuan University
Mao, Z., & Zhou. (2023). A Novel Multi-Person Pose Estimation Using in Indoor Fitness. Theory and Practice of Science and Technology, 4(1), 97-111. DOI: 10.47297/taposatWSP2633-456916.20230401.
In the post-pandemic era, online fitness has become a new trend. Compared with traditional fitness in a gym, online fitness is convenient but lacks action guidance, which cannot guarantee exercise quality and may lead to injury. Monitoring the accuracy of online fitness is therefore required. Existing fitness monitors often rely on special hardware, such as smart TVs, depth cameras, or multiple sensors, which are costly, inconvenient to install, and limited to specific scenarios. With the development of human joint-point detection technology, human pose can now be estimated from a monocular RGB camera, which makes low-cost, high-speed, multi-scenario monitoring on a laptop or mobile phone possible. Based on the above, this paper proposes a novel multi-person pose estimation method for indoor fitness based on YOLOv5 and MediaPipe. We also propose an algorithm that measures human joint angles in a 3D scenario from a monocular RGB camera and provides feedback via audio. A prototype is implemented and its performance is verified on three fitness poses. The results show that the proposed method and prototype have high accuracy and practical value.
Keywords: Computer vision; Human pose estimation; YOLOv5; MediaPipe
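As a rough illustration of the two-stage pipeline the abstract describes, the sketch below runs a pretrained YOLOv5 detector to find each person in a frame and then applies MediaPipe Pose to each person crop. This is a minimal sketch, not the authors' released code; the input file name, confidence threshold, and model variants are illustrative assumptions.

```python
# Minimal sketch of a YOLOv5 + MediaPipe multi-person pose pipeline
# (assumed structure, not the paper's implementation).
import cv2
import torch
import mediapipe as mp

# Pretrained YOLOv5s via torch.hub; COCO class 0 is "person".
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")
# Single-person pose model, run once per detected person crop.
pose = mp.solutions.pose.Pose(static_image_mode=True, model_complexity=1)

frame = cv2.imread("gym_frame.jpg")           # hypothetical input frame
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # both models expect RGB

detections = detector(rgb).xyxy[0]            # rows: x1, y1, x2, y2, conf, cls
for x1, y1, x2, y2, conf, cls in detections.tolist():
    if int(cls) != 0 or conf < 0.5:           # keep confident person boxes only
        continue
    crop = rgb[int(y1):int(y2), int(x1):int(x2)]
    result = pose.process(crop)               # landmarks are relative to the crop
    if result.pose_world_landmarks:
        # 33 landmarks with metric-scale x, y, z centered at the hip midpoint.
        print([(lm.x, lm.y, lm.z) for lm in result.pose_world_landmarks.landmark])
```

Running the detector first sidesteps MediaPipe Pose's single-person limitation, which is what makes the multi-person setting tractable on commodity hardware.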
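The joint-angle measurement can be sketched as a vector computation on MediaPipe's 3D world landmarks: the angle at a joint is the angle between the two limb segments meeting there. The abstract does not give the authors' exact formulation, so the function below, the landmark choice (left hip-knee-ankle), and the example feedback threshold are assumptions.

```python
# Hedged sketch of a 3D joint-angle measurement from MediaPipe world landmarks.
import numpy as np
import mediapipe as mp

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) between the 3D segments b->a and b->c."""
    a, b, c = (np.array([p.x, p.y, p.z]) for p in (a, b, c))
    ba, bc = a - b, c - b
    cos_t = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))  # clip for safety

# Example: left-knee flexion, reusing `result` from the pipeline sketch above.
P = mp.solutions.pose.PoseLandmark
lm = result.pose_world_landmarks.landmark
knee = joint_angle_deg(lm[P.LEFT_HIP], lm[P.LEFT_KNEE], lm[P.LEFT_ANKLE])
# Audio feedback could then be triggered when the angle leaves an assumed
# per-exercise range, e.g. a squat-depth cue when the knee angle stays large.
print(f"left knee angle: {knee:.1f} deg")
```

Using the world landmarks rather than the normalized image landmarks keeps the angle estimate insensitive to the person's position and scale in the frame, which matters when monitoring several people at once.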