[1] J. Stenum, K. M. Cherry-Allen, C. O. Pyles, R. D. Reetzke, M. F. Vignos, and R. T. Roemmich, “Applications of pose estimation in human health and performance across the lifespan,” Sensors, vol. 21, pp. 1-20, 2021.
[2] K. Ludwig, S. Scherer, M. Einfalt, and R. Lienhart, “Self-supervised learning for human pose estimation in sports,” in Proc. IEEE International Conference on Multimedia & Expo Workshops, 2021, pp. 1-6.
[3] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” in Proc. International Conference on Learning Representations, 2016, pp. 1-13.
[4] S. Johnson and M. Everingham, “Clustered pose and nonlinear appearance models for human pose estimation,” in Proc. 21st British Machine Vision Conference, 2010, pp. 12.1-12.11.
[5] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele, “2D human pose estimation: New benchmark and state of the art analysis,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3686-3693.
[6] M. Andriluka, S. Roth, and B. Schiele, “Pictorial structures revisited: People detection and articulated pose estimation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1014-1021.
[7] Y. Yang and D. Ramanan, “Articulated pose estimation with flexible mixtures-of-parts,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 1385-1392.
[8] L. Pishchulin, M. Andriluka, P. Gehler, and B. Schiele, “Poselet conditioned pictorial structures,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 588-595.
[9] M. Dantone, J. Gall, C. Leistner, and L. Van Gool, “Human pose estimation using body parts dependent joint regressors,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3041-3048.
[10] F. Achilles, A.-E. Ichim, H. Coskun, F. Tombari, S. Noachtar, and N. Navab, “Patient MoCap: Human pose estimation under blanket occlusion for hospital monitoring applications,” in Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, 2016, pp. 491-499.
[11] T. Nägeli, S. Oberholzer, S. Plüss, J. Alonso-Mora, and O. Hilliges, “FlyCon: Real-time environment-independent multi-view human pose estimation with aerial vehicles,” ACM Transactions on Graphics, vol. 37, no. 6, pp. 1-14, 2018.
[12] A. Toshev and C. Szegedy, “DeepPose: Human pose estimation via deep neural networks,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1653-1660.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, vol. 25, pp. 1097-1105, 2012.
[14] J. Tompson, A. Jain, Y. LeCun, and C. Bregler, “Joint training of a convolutional network and a graphical model for human pose estimation,” in Proc. 27th International Conference on Neural Information Processing Systems, 2014, vol. 1, pp. 1799-1807.
[15] X. Fan, K. Zheng, Y. Lin, and S. Wang, “Combining local appearance and holistic view: Dual-source deep neural networks for human pose estimation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1347-1355.
[16] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh, “Convolutional pose machines,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4724-4732.
[17] A. Bulat and G. Tzimiropoulos, “Human pose estimation via convolutional part heatmap regression,” in Proc. European Conference on Computer Vision, 2016, pp. 717-732.
[18] N. Zhang, E. Shelhamer, Y. Gao, and T. Darrell, “Fine-grained pose prediction, normalization, and recognition,” arXiv preprint arXiv:1511.07063, 2015.
[19] I. Lifshitz, E. Fetaya, and S. Ullman, “Human pose estimation using deep consensus voting,” in Proc. European Conference on Computer Vision, 2016, pp. 246-260.
[20] Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun, “Cascaded pyramid network for multi-person pose estimation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7103-7112.
[21] A. Martinez-Gonzalez, M. Villamizar, O. Canevet, and J.-M. Odobez, “Real-time convolutional networks for depth-based human pose estimation,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 41-47.
[22] D. Luo, S. Du, and T. Ikenaga, “End-to-end feature pyramid network for real-time multi-person pose estimation,” in Proc. International Conference on Machine Vision Applications, 2019, pp. 1-4.
[23] S. Jin, L. Xu, J. Xu, C. Wang, W. Liu, C. Qian, W. Ouyang, and P. Luo, “Whole-body human pose estimation in the wild,” in Proc. European Conference on Computer Vision, 2020, pp. 196-214.
[24] S. Liang, G. Chu, C. Xin, and J. Wang, “Joint relation based human pose estimation,” The Visual Computer, vol. 38, no. 4, pp. 1369-1381, 2022.
[25] A. Newell, K. Yang, and J. Deng, “Stacked hourglass networks for human pose estimation,” in Proc. European Conference on Computer Vision, 2016, pp. 483-499.
[26] X. Chu, W. Yang, W. Ouyang, C. Ma, A. L. Yuille, and X. Wang, “Multi-context attention for human pose estimation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1831-1840.
[27] W. Yang, S. Li, W. Ouyang, H. Li, and X. Wang, “Learning feature pyramids for human pose estimation,” in Proc. IEEE International Conference on Computer Vision, 2017, pp. 1281-1290.
[28] L. Ke, M. C. Chang, H. Qi, and S. Lyu, “Multi-scale structure-aware network for human pose estimation,” in Proc. European Conference on Computer Vision, 2018, pp. 713-728.
[29] Z. Cao, R. Wang, X. Wang, Z. Liu, and X. Zhu, “Improving human pose estimation with self-attention generative adversarial networks,” in Proc. IEEE International Conference on Multimedia & Expo Workshops, 2019, pp. 567-572.
[30] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Advances in Neural Information Processing Systems, vol. 27, pp. 2672-2680, 2014.
[31] S. T. Kim and H. J. Lee, “Lightweight stacked hourglass network for human pose estimation,” Applied Sciences, vol. 10, no. 18, p. 6497, 2020.
[32] G. Ning, Z. Zhang, and Z. He, “Knowledge-guided deep fractal neural networks for human pose estimation,” IEEE Transactions on Multimedia, vol. 20, no. 5, pp. 1246-1259, 2017.
[33] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell, “Understanding convolution for semantic segmentation,” in Proc. IEEE Winter Conference on Applications of Computer Vision, 2018, pp. 1451-1460.
[34] V. Belagiannis and A. Zisserman, “Recurrent human pose estimation,” in Proc. IEEE International Conference on Automatic Face & Gesture Recognition, 2017, pp. 468-475.
[35] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler, and B. Schiele, “DeepCut: Joint subset partition and labeling for multi person pose estimation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4929-4937.
[36] E. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka, and B. Schiele, “DeeperCut: A deeper, stronger, and faster multi-person pose estimation model,” in Proc. European Conference on Computer Vision, 2016, pp. 34-50.
[37] W. Tang, P. Yu, and Y. Wu, “Deeply learned compositional models for human pose estimation,” in Proc. European Conference on Computer Vision, 2018, pp. 190-206.
[38] Z. Huo, H. Jin, Y. Qiao, and F. Luo, “Deep high-resolution network with double attention residual blocks for human pose estimation,” IEEE Access, vol. 8, pp. 224947-224957, 2020.