[1] M. B. Holte, C. Tran, M. M. Trivedi, and T. B. Moeslund, “Human pose estimation and activity recognition from multi-view videos: Comparative explorations of recent developments,” IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 5, pp. 538–552, 2012.
[2] W. Ma, S. Xia, J. K. Hodgins, X. Yang, C. Li, and Z. Wang, “Modeling style and variation in human motion,” in Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, ser. SCA ’10. Goslar, DEU: Eurographics Association, 2010, pp. 21–30.
[3] S. N. Muralikrishna, B. Muniyal, U. D. Acharya, and R. Holla, “Enhanced human action recognition using fusion of skeletal joint dynamics and structural features,” Journal of Robotics, vol. 2020, p. 3096858, Aug 2020. [Online]. Available: https://doi.org/10.1155/2020/3096858
[4] S. Blair, M. J. Lake, R. Ding, and T. Sterzing, “Magnitude and variability of gait characteristics when walking on an irregular surface at different speeds,” Human Movement Science, vol. 59, pp. 112–120, Jun 2018. [Online]. Available: https://doi.org/10.1016/j.humov.2018.04.003
[5] C. Xu, Y. Makihara, G. Ogi, X. Li, Y. Yagi, and J. Lu, “The OU-ISIR gait database comprising the large population dataset with age and performance evaluation of age estimation,” IPSJ Transactions on Computer Vision and Applications, vol. 9, no. 1, p. 24, Dec 2017.
[6] W. Wei and A. Yunxiao, “Vision-based human motion recognition: A survey,” in 2009 Second International Conference on Intelligent Networks and Intelligent Systems, 2009, pp. 386–389.
[7] D. Weinland, R. Ronfard, and E. Boyer, “A survey of vision-based methods for action representation, segmentation and recognition,” Computer Vision and Image Understanding, vol. 115, no. 2, pp. 224–241, 2011. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1077314210002171
[8] D. R. Beddiar, B. Nini, M. Sabokrou, and A. Hadid, “Vision-based human activity recognition: A survey,” Multimedia Tools and Applications, vol. 79, no. 41, pp. 30509–30555, Nov 2020. [Online]. Available: https://doi.org/10.1007/s11042-020-09004-3
[9] L. M. Dang, K. Min, H. Wang, M. Piran, H. Lee, and H. Moon, “Sensor-based and vision-based human activity recognition: A comprehensive survey,” Pattern Recognition, vol. 108, Jul 2020.
[10] P.-z. Chen, J. Li, M. Luo, and N.-h. Zhu, “Real-time human motion capture driven by a wireless sensor network,” Int. J. Comput. Games Technol., vol. 2015, 2015. [Online]. Available: https://doi.org/10.1155/2015/695874
[11] S. Liu, J. Zhang, Y. Zhang, and R. Zhu, “A wearable motion capture device able to detect dynamic motion of human limbs,” Nature Communications, vol. 11, no. 1, Nov 2020. [Online]. Available: https://doi.org/10.1038/s41467-020-19424-2
[12] A. D. Young, “Use of body model constraints to improve accuracy of inertial motion capture,” in 2010 International Conference on Body Sensor Networks, 2010, pp. 180–186.
[13] H. Wang and C. Schmid, “Action recognition with improved trajectories,” in 2013 IEEE International Conference on Computer Vision, 2013, pp. 3551–3558.
[14] S. Herath, M. T. Harandi, and F. Porikli, “Going deeper into action recognition: A survey,” CoRR, vol. abs/1605.04988, 2016. [Online]. Available: http://arxiv.org/abs/1605.04988
[15] A. Bobick and J. Davis, “The recognition of human movement using temporal templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257–267, 2001.
[16] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, “Actions as space-time shapes,” in Tenth IEEE International Conference on Computer Vision (ICCV ’05), vol. 2, 2005, pp. 1395–1402.
[17] S. Yan, Y. Xiong, and D. Lin, “Spatial temporal graph convolutional networks for skeleton-based action recognition,” CoRR, vol. abs/1801.07455, 2018. [Online]. Available: http://arxiv.org/abs/1801.07455
[18] O. Taheri, H. Salarieh, and A. Alasti, “Human leg motion tracking by fusing IMUs and RGB camera data using extended Kalman filter,” CoRR, vol. abs/2011.00574, 2020. [Online]. Available: https://arxiv.org/abs/2011.00574
[19] T. Ito, K. Ayusawa, E. Yoshida, and H. Kobayashi, “Evaluation of active wearable assistive devices with human posture reproduction using a humanoid robot,” Advanced Robotics, vol. 32, no. 12, pp. 635–645, 2018. [Online]. Available: https://doi.org/10.1080/01691864.2018.1490200
[20] E. Papi, Y. N. Bo, and A. H. McGregor, “A flexible wearable sensor for knee flexion assessment during gait,” Gait and Posture, vol. 62, pp. 480–483, 2018.
[21] L. Fan, Z. Wang, and H. Wang, “Human activity recognition model based on decision tree,” in 2013 International Conference on Advanced Cloud and Big Data, 2013, pp. 64–68.
[22] A. Glandon, L. Vidyaratne, N. Sadeghzadehyazdi, N. K. Dhar, J. O. Familoni, S. T. Acton, and K. M. Iftekharuddin, “3D skeleton estimation and human identity recognition using lidar full motion video,” in 2019 International Joint Conference on Neural Networks (IJCNN), 2019, pp. 1–8.
[23] J. Zhao, J. Zhou, Y. Yao, D.-a. Li, and L. Gao, “RF-Motion: A device-free RF-based human motion recognition system,” Wireless Communications and Mobile Computing, vol. 2021, p. 1497503, Mar 2021. [Online]. Available: https://doi.org/10.1155/2021/1497503
[24] G. Hu, B. Cui, and S. Yu, “Joint learning in the spatio-temporal and frequency domains for skeleton-based action recognition,” IEEE Transactions on Multimedia, vol. 22, no. 9, pp. 2207–2220, Sep 2020.
[25] D. Weinland, E. Boyer, and R. Ronfard, “Action recognition from arbitrary views using 3D exemplars,” in 2007 IEEE 11th International Conference on Computer Vision, 2007, pp. 1–7.
[26] F. Ofli, R. Chaudhry, G. Kurillo, R. Vidal, and R. Bajcsy, “Sequence of the most informative joints (SMIJ): A new representation for human skeletal action recognition,” Journal of Visual Communication and Image Representation, vol. 25, no. 1, pp. 24–38, 2014, special issue on Visual Understanding and Applications with RGB-D Cameras. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1047320313000680
[27] C. Ott, D. Lee, and Y. Nakamura, “Motion capture based human motion recognition and imitation by direct marker control,” in Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, 2008, pp. 399–405.
[28] J. P. Vox and F. Wallhoff, “Preprocessing and normalization of 3D-skeleton-data for human motion recognition,” in 2018 IEEE Life Sciences Conference (LSC), 2018, pp. 279–282.
[29] Q. Zhang, Y. Yao, D. Zhou, and R. Liu, “Motion key-frame extraction by using optimized t-stochastic neighbor embedding,” Symmetry, vol. 7, no. 2, pp. 395–411, 2015. [Online]. Available: https://www.mdpi.com/2073-8994/7/2/395
[30] A. Richard and J. Gall, “Temporal action detection using a statistical language model,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3131–3140.
[31] G. Evangelidis, G. Singh, and R. Horaud, “Skeletal quads: Human action recognition using joint quadruples,” in 2014 22nd International Conference on Pattern Recognition, 2014, pp. 4513–4518.
[32] X. Wu, D. Xu, L. Duan, and J. Luo, “Action recognition using context and appearance distribution features,” in CVPR 2011, 2011, pp. 489–496.
[33] J. Javed, H. Yasin, and S. F. Ali, “Human movement recognition using Euclidean distance: A tricky approach,” in 2010 3rd International Congress on Image and Signal Processing, vol. 1, 2010, pp. 317–321.
[34] E. Ohn-Bar and M. M. Trivedi, “Joint angles similarities and HOG2 for action recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Jun 2013.
[35] S. Sempena, N. U. Maulidevi, and P. R. Aryan, “Human action recognition using dynamic time warping,” in Proceedings of the 2011 International Conference on Electrical Engineering and Informatics, 2011, pp. 1–5.
[36] Y.-H. Chou, H.-C. Cheng, C.-H. Cheng, K.-H. Su, and C.-Y. Yang, “Dynamic time warping for IMU based activity detection,” in 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016, pp. 3107–3112.
[37] L. Brun, P. Foggia, A. Saggese, and M. Vento, “Recognition of human actions using edit distance on aclet strings,” in VISAPP, 2015.
[38] F. Zhou and F. De la Torre, “Canonical time warping for alignment of human behavior,” in Advances in Neural Information Processing Systems, Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta, Eds., vol. 22. Curran Associates, Inc., 2009. [Online]. Available: https://proceedings.neurips.cc/paper/2009/file/2ca65f58e35d9ad45bf7f3ae5cfd08f1-Paper.pdf
[39] Q. Xiao and S. Liu, “Motion retrieval based on dynamic Bayesian network and canonical time warping,” in 2015 7th International Conference on Intelligent Human-Machine Systems and Cybernetics, vol. 2, 2015, pp. 182–185.
[40] C. Yuan, W. Hu, X. Li, S. Maybank, and G. Luo, “Human action recognition under log-Euclidean Riemannian metric,” in Computer Vision – ACCV 2009, H. Zha, R.-i. Taniguchi, and S. Maybank, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 343–353.
[41] K. Yang and C. Shahabi, “A PCA-based similarity measure for multivariate time series,” in Proceedings of the 2nd ACM International Workshop on Multimedia Databases, ser. MMDB ’04. New York, NY, USA: Association for Computing Machinery, 2004, pp. 65–74. [Online]. Available: https://doi.org/10.1145/1032604.1032616
[42] M. F. Abdelkader, W. Abd-Almageed, A. Srivastava, and R. Chellappa, “Silhouette-based gesture and action recognition via modeling trajectories on Riemannian shape manifolds,” Computer Vision and Image Understanding, vol. 115, no. 3, pp. 439–455, 2011, special issue on Feature-Oriented Image and Video Computing for Extracting Contexts and Semantics. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1077314210002377
[43] F. Zhou, F. De la Torre, and J. K. Hodgins, “Hierarchical aligned cluster analysis for temporal clustering of human motion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 3, pp. 582–596, 2013.
[44] S. H. Joshi, J. Su, Z. Zhang, and B. Ben Amor, Elastic Shape Analysis of Functions, Curves and Trajectories. Cham: Springer International Publishing, 2016, pp. 211–231. [Online]. Available: https://doi.org/10.1007/978-3-319-22957-7_10
[45] F. Zhou and F. De la Torre, “Generalized canonical time warping,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 279–294, 2016.
[46] H. C. Mandhare and S. R. Idate, “A comparative study of cluster based outlier detection, distance based outlier detection and density based outlier detection techniques,” in 2017 International Conference on Intelligent Computing and Control Systems (ICICCS), 2017, pp. 931–935.
[47] E. Hsu, K. Pulli, and J. Popović, “Style translation for human motion,” in ACM SIGGRAPH 2005 Papers (SIGGRAPH ’05). ACM Press, 2005. [Online]. Available: https://doi.org/10.1145/1186822.1073315
[48] J. W. Davis and H. Gao, “An expressive three-mode principal components model for gender recognition,” Journal of Vision, vol. 4, no. 5, pp. 2–2, May 2004. [Online]. Available: https://doi.org/10.1167/4.5.2
[49] A. Elgammal and C.-S. Lee, “Separating style and content on a nonlinear manifold,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004). IEEE, 2004. [Online]. Available: https://doi.org/10.1109/cvpr.2004.1315070
[50] J.-M. Chiou, Y.-T. Chen, and Y.-F. Yang, “Multivariate functional principal component analysis: A normalization approach,” Statistica Sinica, vol. 24, no. 4, pp. 1571–1596, 2014. [Online]. Available: http://www.jstor.org/stable/24310959
[51] H. Su, S. Liu, B. Zheng, X. Zhou, and K. Zheng, “A survey of trajectory distance measures and performance evaluation,” The VLDB Journal, vol. 29, no. 1, pp. 3–32, Jan 2020. [Online]. Available: https://doi.org/10.1007/s00778-019-00574-9
[52] Difference in matching between Euclidean and Dynamic Time Warping, Wikimedia Commons. [Online]. Available: https://commons.wikimedia.org/wiki/File:Euclidean_vs_DTW.jpg
[53] H. Sakoe and S. Chiba, “Dynamic programming algorithm optimization for spoken word recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 26, no. 1, pp. 43–49, 1978.
[54] A. Stefan, V. Athitsos, and G. Das, “The move-split-merge metric for time series,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 6, pp. 1425–1438, Jun 2013.
[55] W. Zhao, Z. Xu, W. Li, and W. Wu, “Modeling and analyzing neural signals with phase variability using Fisher-Rao registration,” Journal of Neuroscience Methods, vol. 346, p. 108954, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0165027020303770
[56] A. Srivastava, W. Wu, S. Kurtek, E. Klassen, and J. S. Marron, “Registration of functional data using Fisher-Rao metric,” 2011.
[57] J. D. Tucker, W. Wu, and A. Srivastava, “Generative models for functional data using phase and amplitude separation,” Computational Statistics & Data Analysis, pp. 50–66, 2013. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0167947312004227
[58] H. Akima, “A new method of interpolation and smooth curve fitting based on local procedures,” J. ACM, vol. 17, no. 4, pp. 589–602, Oct 1970. [Online]. Available: https://doi.org/10.1145/321607.321609
[59] H. L. Shang, “A survey of functional principal component analysis,” AStA Advances in Statistical Analysis, vol. 98, no. 2, pp. 121–142, Apr 2014. [Online]. Available: https://doi.org/10.1007/s10182-013-0213-1
[60] Z. Wang, Y. Sun, and P. Li, “Functional principal components analysis of Shanghai stock exchange 50 index,” Discrete Dynamics in Nature and Society, vol. 2014, p. 365204, Jul 2014. [Online]. Available: https://doi.org/10.1155/2014/365204
[61] A. Ohsato, Y. Sasaki, and H. Mizoguchi, “Real-time 6DoF localization for a mobile robot using pre-computed 3D laser likelihood field,” in 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2015, pp. 2359–2364.
[62] IMPULSE X2 SYSTEM, PhaseSpace Motion Capture. [Online]. Available: https://www.phasespace.com/impulse-motion-capture.html
[63] P. Merriaux, Y. Dupuis, R. Boutteau, P. Vasseur, and X. Savatier, “A study of Vicon system positioning performance,” Sensors, vol. 17, no. 7, 2017. [Online]. Available: https://www.mdpi.com/1424-8220/17/7/1591
[64] P. Eichelberger, M. Ferraro, U. Minder, T. Denton, A. Blasimann, F. Krause, and H. Baur, “Analysis of accuracy in optical motion capture: a protocol for laboratory setup evaluation,” Journal of Biomechanics, vol. 49, no. 10, pp. 2085–2088, 2016. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0021929016305681
[65] Nexus 2.6 Documentation: Full body modeling with Plug-in Gait, Vicon Motion Systems. [Online]. Available: https://docs.vicon.com/display/Nexus26/Full+body+modeling+with+Plug-in+Gait
[66] J. George, M. Heller, and M. Kuzel, “Effect of shoe type on descending a curb,” Work, vol. 41 (IEA 2012: 18th World Congress on Ergonomics), pp. 3333–3338, 2012. [Online]. Available: https://doi.org/10.3233/WOR-2012-0601-3333
[67] W.-L. Hsu, Y.-J. Chen, T.-W. Lu, K.-H. Ho, and J.-H. Wang, “Changes in interjoint coordination pattern in anterior cruciate ligament reconstructed knee during stair walking,” Journal of Biomechanical Science and Engineering, vol. 12, no. 2, p. 16-00694, 2017.
[68] S. L. Delp, F. C. Anderson, A. S. Arnold, P. Loan, A. Habib, C. T. John, E. Guendelman, and D. G. Thelen, “OpenSim: Open-source software to create and analyze dynamic simulations of movement,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 11, pp. 1940–1950, 2007. [Online]. Available: https://ieeexplore.ieee.org/document/4352056/
[69] A. Seth, J. L. Hicks, T. K. Uchida, A. Habib, C. L. Dembia, J. J. Dunne, C. F. Ong, M. S. DeMers, A. Rajagopal, M. Millard, S. R. Hamner, E. M. Arnold, J. R. Yong, S. K. Lakshmikanth, M. A. Sherman, J. P. Ku, and S. L. Delp, “OpenSim: Simulating musculoskeletal dynamics and neuromuscular control to study human and animal movement,” PLOS Computational Biology, vol. 14, no. 7, p. e1006223, 2018. [Online]. Available: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1006223&type=printable
[70] Musculoskeletal Models: Full Body Running Model, OpenSim Documentation. [Online]. Available: https://simtk-confluence.stanford.edu:8443/display/OpenSim/Full+Body+Running+Model
[71] S. Rice, “Mathematical analysis of random noise,” Bell System Technical Journal, vol. 23, pp. 282–332, 1944.
[72] L. Ye and E. Keogh, “Time series shapelets: A new primitive for data mining,” in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’09. New York, NY, USA: Association for Computing Machinery, 2009, pp. 947–956. [Online]. Available: https://doi.org/10.1145/1557019.1557122
[73] G. M. James and T. J. Hastie, “Functional linear discriminant analysis for irregularly sampled curves,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 63, no. 3, pp. 533–550, 2001. [Online]. Available: https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/1467-9868.00297
[74] J. Park, J. Ahn, and Y. Jeon, “Sparse functional linear discriminant analysis,” Biometrika, Jun 2021, asaa107. [Online]. Available: https://doi.org/10.1093/biomet/asaa107