[1] E. Croarkin, J. Danoff, and C. Barnes. "Evidence-based rating of upper-extremity motor function tests used for people following a stroke." Physical Therapy 84, no. 1 (2004): 62–74.
[2] R. Gajdosik and R. Bohannon. "Clinical measurement of range of motion: review of goniometry emphasizing reliability and validity." Physical Therapy 67 (1987): 1867–72.
[3] M. B. Holte, C. Tran, M. M. Trivedi, and T. B. Moeslund. "Human pose estimation and activity recognition from multi-view videos: comparative explorations of recent developments." IEEE Journal of Selected Topics in Signal Processing 6, no. 5 (September 2012): 538–52. doi:10.1109/JSTSP.2012.2196975.
[4] E. Ceseracciu, Z. Sawacha, and C. Cobelli. "Comparison of markerless and marker-based motion capture technologies through simultaneous data collection during gait: proof of concept." PLoS ONE 9, no. 3 (March 4, 2014): e87640. doi:10.1371/journal.pone.0087640.
[5] G. Kurillo, J. J. Han, Š. Obdržálek, P. Yan, R. Abresch, A. Nicorici, and R. Bajcsy. "Upper extremity reachable workspace evaluation with Kinect." Studies in Health Technology and Informatics 184 (2012): 247–53.
[6] J. Han, L. Shao, D. Xu, and J. Shotton. "Enhanced computer vision with Microsoft Kinect sensor: a review." IEEE Transactions on Cybernetics 43, no. 5 (October 2013): 1318–34. doi:10.1109/TCYB.2013.2265378.
[7] B. Bonnechère, B. Jansen, P. Salvia, H. Bouzahouene, L. Omelina, F. Moiseev, V. Sholukha, J. Cornelis, M. Rooze, and S. Van Sint Jan. "Validity and reliability of the Kinect within functional assessment activities: comparison with standard stereophotogrammetry." Gait & Posture 39, no. 1 (January 2014): 593–98. doi:10.1016/j.gaitpost.2013.09.018.
[8] M.-C. Silaghi, R. Plänkers, R. Boulic, P. Fua, and D. Thalmann. "Local and global skeleton fitting techniques for optical motion capture." In Modelling and Motion Capture Techniques for Virtual Environments, 26–40. Springer, 1998.
[9] R. Schmidt, C. Disselhorst-Klug, J. Silny, and G. Rau. "A marker-based measurement procedure for unconstrained wrist and elbow motions." Journal of Biomechanics 32, no. 6 (1999): 615–21.
[10] V. Ganapathi, C. Plagemann, D. Koller, and S. Thrun. "Real time motion capture using a single time-of-flight camera." In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, 755–62. IEEE, 2010. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5540141.
[11] J. Gall, C. Stoll, E. de Aguiar, C. Theobalt, B. Rosenhahn, and H.-P. Seidel. "Motion capture using joint skeleton tracking and surface estimation." In Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on, 1746–53. IEEE, 2009. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5206755.
[12] C. Theobalt, M. A. Magnor, P. Schüler, and H.-P. Seidel. "Combining 2D feature tracking and volume reconstruction for online video-based human motion capture." International Journal of Image and Graphics 4, no. 4 (2004): 563–83.
[13] T. B. Moeslund, A. Hilton, and V. Krüger. "A survey of advances in vision-based human motion capture and analysis." Computer Vision and Image Understanding 104, no. 2–3 (November 2006): 90–126. doi:10.1016/j.cviu.2006.08.002.
[14] I. Mikic, M. M. Trivedi, E. Hunter, and P. Cosman. "Human body model acquisition and tracking using voxel data." International Journal of Computer Vision 53, no. 3 (2003): 199–223.
[15] S. Corazza, L. Mündermann, E. Gambaretto, G. Ferrigno, and T. P. Andriacchi. "Markerless motion capture through visual hull, articulated ICP and subject specific model generation." International Journal of Computer Vision 87, no. 1–2 (March 2010): 156–69. doi:10.1007/s11263-009-0284-3.
[16] Y. Zhu, W. Chen, and G. Guo. "Evaluating spatiotemporal interest point features for depth-based action recognition." Image and Vision Computing 32, no. 8 (August 2014): 453–64. doi:10.1016/j.imavis.2014.04.005.
[17] J. Shotton, T. Sharp, A. Kipman, A. Fitzgibbon, M. Finocchio, A. Blake, M. Cook, and R. Moore. "Real-time human pose recognition in parts from single depth images." Communications of the ACM 56, no. 1 (2013): 116–24.
[18] J. Ziegler, K. Nickel, and R. Stiefelhagen. "Tracking of the articulated upper body on multi-view stereo image sequences." In Computer Vision and Pattern Recognition (CVPR), 2006 IEEE Conference on. IEEE, 2006.
[19] C. Tran and M. M. Trivedi. "Extremity movement observation framework for upper body pose tracking in 3D." In 2009 IEEE International Symposium on Multimedia, 446–47. IEEE, 2009. doi:10.1109/ISM.2009.89.
[20] Y.-J. Chang, W.-Y. Han, and Y.-C. Tsai. "A Kinect-based upper limb rehabilitation system to assist people with cerebral palsy." Research in Developmental Disabilities 34, no. 11 (November 2013): 3654–59. doi:10.1016/j.ridd.2013.08.021.
[21] Š. Obdržálek, G. Kurillo, F. Ofli, R. Bajcsy, E. Seto, H. Jimison, and M. Pavel. "Accuracy and robustness of Kinect pose estimation in the context of coaching of elderly population." In Proceedings of the 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), San Diego, CA, 2012.
[22] K. Fukunaga and L. Hostetler. "The estimation of the gradient of a density function, with applications in pattern recognition." IEEE Transactions on Information Theory 21, no. 1 (1975): 32–40.
[23] Y. Cheng. "Mean shift, mode seeking, and clustering." IEEE Transactions on Pattern Analysis and Machine Intelligence 17, no. 8 (1995): 790–99.
[24] X. Ning and G. Guo. "Assessing spinal loading using the Kinect depth sensor: a feasibility study." IEEE Sensors Journal 13, no. 4 (April 2013): 1139–40. doi:10.1109/JSEN.2012.2230252.