National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Author: 陳宥廷
Author (English): Yu-Ting Chen
Title: 基於距離之時間序列分析與模板匹配應用於人體下肢動作識別
Title (English): Lower Body Action Recognition Using Distance-Based Time Series Analysis and Template Matching
Advisor: 詹魁元
Advisor (English): Kuei-Yuan Chan
Committee Members: 顏家鈺、徐瑋勵
Committee Members (English): Jia-Yush Yen, Wei-Li Hsu
Defense Date: 2021-10-21
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Mechanical Engineering
Discipline: Engineering
Field of Study: Mechanical Engineering
Thesis Type: Academic thesis
Year Published: 2021
Academic Year: 109 (2020–2021)
Language: Chinese
Pages: 89
Keywords (Chinese): 人體下肢運動、動作識別、相似度量測、時間序列、模板匹配、動態時間扭曲、Move-Split-Merge、費雪拉奧度量、時間歸整、多變量泛函主成分分析、人體運動生成
Keywords (English): Human Lower Limb Motion; Action Recognition; Similarity Measurement; Time Series; Template Matching; Dynamic Time Warping; Move-Split-Merge; Fisher-Rao Metric; Temporal Alignment; Multivariate Functional Principal Component Analysis; Human Motion Generation
DOI: 10.6342/NTU202103677
Record statistics:
  • Cited by: 2
  • Views: 233
  • Rating: (none)
  • Downloads: 48
  • Bookmarked: 0
Human action recognition can be applied in many fields, including rehabilitation, long-term care, surveillance, entertainment, and human-machine interaction. Its data are mostly presented as time series and, depending on the data source, fall into vision-based and wearable-sensor-based approaches; it is a popular research topic in the big data and artificial intelligence communities. This study seeks an intuitive way to understand how humans move by analyzing the joint-angle curves of human motion, without relying on neural networks or machine learning methods. The focus is on similarity measurement and temporal alignment, using time-series distance measures including Euclidean distance, dynamic time warping, Move-Split-Merge, and the Fisher-Rao metric, together with multivariate functional principal component analysis for analyzing motion curves. The thesis comprises three parts.
The first part is motion data collection: motion data are captured with a Vicon motion capture system, a human model is built in OpenSim, and the rotation angles of six lower-limb joints about the flexion-extension axis are computed; the time series of these six angles describe the lower-body action. The second part concerns building and analyzing action templates. This study covers 10 common lower-body actions; temporal alignment, driven by the different distance measures, adjusts the action samples to reduce time shift and speed variation and to align the dominant profiles across samples. Action templates are then built from the aligned samples, and the distance distributions of the different sample classes are analyzed. The third part is a template-matching trial. This study proposes a similarity scoring method based on time-series distance measurement that combines a softmax function with a bell-shaped function, classifying actions while effectively rejecting outliers. Four action scenarios were designed as tests, including artificially generated ones, where motion samples are generated randomly via principal component analysis and time-warping methods. The results show that the proposed similarity scoring method is feasible, with dynamic time warping (DTW) performing best; performance is maintained even in scenarios containing noise.
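Of the distance measures listed above, DTW is the one the results favor. As a point of reference, here is a minimal sketch of the classic dynamic-programming DTW between two one-dimensional sequences, using absolute difference as the local cost; the thesis's exact settings (step constraints, windowing, multivariate handling across the six joint angles) are not specified here and would need to match Chapter 3.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW via dynamic programming over the pairwise cost grid."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local cost
            cost[i, j] = d + min(cost[i - 1, j],       # stretch a
                                 cost[i, j - 1],       # stretch b
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]
```

Unlike the Euclidean distance, DTW lets a time-shifted copy of a curve match its template with near-zero cost, which is what makes it robust to the speed variation the alignment step targets.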
Human action recognition (HAR) is an important trend in the data science and AI communities, with applications in rehabilitation, long-term care, surveillance, entertainment, human-machine interfaces, etc. Human motion data are usually represented as time series, obtained either from computer vision or from wearable sensors. Instead of using neural networks or machine learning techniques, this thesis attempts to find a simple method for analyzing the time series data to understand how humans act. The thesis focuses on similarity estimation and temporal alignment, using four kinds of time-series distance measurement: Euclidean distance (ED), dynamic time warping (DTW), Move-Split-Merge (MSM), and the Fisher-Rao metric (FRM), and uses multivariate functional principal component analysis (mFPCA) to analyze motion data. The study is divided into three parts.
In the first part, motion samples are collected with Vicon, an optical motion capture system, and a skeleton model is built in OpenSim to calculate the angles of the lower-limb joints; motion data are thus described as time series of joint angles. In the second part, we use temporal alignment to adjust the samples and reduce variation in timing and speed. Ten classes and templates of lower-body actions are then defined from the aligned samples, and the distance distributions of the samples are analyzed for all action classes. In the third part, based on the distance measurement of time series, this thesis provides a similarity scoring method that combines a softmax function with a bell-shaped function; with it, we can classify human actions and discriminate outliers. Four action scenarios are provided to demonstrate the template-matching process and the use of the similarity scoring method. One of the scenarios is generated by mFPCA and a time-warping function, which allow motion samples to be generated randomly. In conclusion, the results show that the proposed method is feasible for classifying human actions, and that DTW is the best choice for similarity measurement, even in the scenario with noise.
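The abstract states that the scoring method combines a softmax over template distances with a bell-shaped function for outlier rejection, but does not give the exact form. The sketch below is one plausible reading under assumed choices: a Gaussian bell with width `sigma` and an acceptance threshold `threshold`, both hypothetical parameters, with the softmax computed over negated, min-shifted distances for numerical stability. The thesis's actual formulation (Section 3.1.2) may differ.

```python
import numpy as np

def classify_action(dists, sigma=1.0, threshold=0.5):
    """Score a query against per-template distances; return (class, scores).

    Softmax over negative distances gives relative class probabilities;
    the Gaussian bell suppresses queries far from *every* template, so
    a query unlike any class falls below `threshold` and is rejected.
    """
    d = np.asarray(dists, dtype=float)
    z = -(d - d.min())                      # shift so max logit is 0 (stable)
    soft = np.exp(z) / np.exp(z).sum()      # relative similarity among classes
    bell = np.exp(-(d / sigma) ** 2)        # absolute closeness gate
    scores = soft * bell
    k = int(np.argmax(scores))
    return (k, scores) if scores[k] >= threshold else (None, scores)
```

The softmax alone would always pick a winner even for garbage input; multiplying by the bell is what lets the method "classify human actions and discriminate outliers" in one score.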
Thesis Committee Certification i
Acknowledgements ii
Abstract (Chinese) iv
Abstract v
Table of Contents vii
List of Figures xi
List of Tables xv
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Variability in Human Motion 2
1.3 Research Objectives 3
1.4 Thesis Organization 4
Chapter 2 Literature Review 6
2.1 Human Action Recognition 6
2.1.1 Data Measurement and Formats 7
2.1.2 Feature Extraction 8
2.1.3 Learning and Classification Methods 9
2.1.4 Main Challenges 10
2.2 Time Series Analysis 10
2.2.1 Distance and Similarity 11
2.2.2 Temporal Alignment 11
2.2.3 Outlier Detection 12
2.3 Human Motion Analysis 13
2.3.1 Motion Styles 13
2.3.2 Hierarchical Human Models 14
2.3.3 Motion Synthesis 14
Chapter 3 Methodology 15
3.1 Template Matching 16
3.1.1 Time-Series-Based Template Matching 16
3.1.2 Similarity Scoring 17
3.2 Distance Measures 19
3.2.1 Euclidean Distance 19
3.2.2 Dynamic Time Warping 21
3.2.3 Move-Split-Merge 22
3.2.4 Fisher-Rao Metric 23
3.3 Temporal Alignment 26
3.3.1 Alignment Paths 27
3.3.2 Time Warping Functions 28
3.3.3 Elastic Shape Analysis 29
3.4 Functional Principal Component Analysis 30
Chapter 4 Experiments and Data Processing 33
4.1 Experimental Setup 34
4.1.1 Motion Capture System 34
4.1.2 Motion Experiments 35
4.2 Human Motion Analysis 38
4.2.1 Human Skeleton Model 38
4.2.2 Inverse Kinematics 39
4.2.3 Motion Samples 40
4.2.4 Error Discussion 43
Chapter 5 Template Construction and Analysis 44
5.1 Sample Alignment 45
5.2 Sample Distance Distributions 54
5.2.1 Summary 58
Chapter 6 Scenario Analysis 60
6.1 Motion Generation 61
6.1.1 mFPCA Sample Generation 61
6.1.2 Time Warping 61
6.2 Action Scenarios 62
6.3 Template Matching Results 63
6.3.1 Summary 68
Chapter 7 Conclusions and Future Work 69
7.1 Conclusions and Contributions 69
7.2 Suggestions for Future Research 71
References 72
Appendix A OpenSim Parameters 83
Appendix B Principal Components of Action Templates 85