臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 王妤霈
Author (English): Yu Pei Wang
Title: Google Mediapipe之醫學應用:以自動化著地錯誤評分系統與中風上肢動作評估系統為例
Title (English): Medical Applications of Google Mediapipe in Automated Landing Error Scoring System and Stroke Upper Extremity Motor Assessment System
Advisor: 趙一平
Advisor (English): Y. P. Chao
Oral defense committee: 楊婕淩, 趙一平, 吳致寬
Oral defense committee (English): C. L. Yang, Y. P. Chao, C. K. Wu
Oral defense date: 2024-06-13
Degree: Master's
Institution: 長庚大學 (Chang Gung University)
Department: 資訊工程學系 (Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and computer engineering
Document type: Academic thesis
Publication year: 2024
Graduation academic year: 112 (2023-2024)
Language: Chinese
Pages: 157
Keywords (Chinese): 人體姿態估計, Google Mediapipe, 著地錯誤評分系統, 機器學習, ArmCAM, YOLO
Keywords (English): Human Posture Estimation, Google Mediapipe, Landing Error Scoring System, Machine Learning, ArmCAM, YOLO
With advances in technology, pose estimation has matured considerably. The technique is valued not only in academic research but also in practical settings such as sports, medicine, and the film and television industry. Pose estimation can be applied to many kinds of imagery, including data captured by motion-capture cameras, depth cameras, and the far more widespread RGB cameras. This study targets two-dimensional video captured by RGB cameras and uses Google Mediapipe, which offers high performance and good accuracy, for motion analysis. Exploiting its ability to process image data in real time, the study obtains accurate pose estimates and applies them in sports medicine and rehabilitation medicine, building automated pipelines to replace assessments that previously required human raters.
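As a rough, hedged sketch of this kind of pipeline (not code from the thesis), the following Python reads an RGB video with OpenCV and extracts per-frame landmark pixel coordinates with MediaPipe Pose; the video file name is a placeholder.

    # Sketch: per-frame 2D pose landmarks from an RGB video via MediaPipe Pose.
    import cv2
    import mediapipe as mp

    def extract_landmarks(video_path):
        """Return, per frame, a list of (x, y) pixel coordinates, or None on failure."""
        frames = []
        cap = cv2.VideoCapture(video_path)
        with mp.solutions.pose.Pose(static_image_mode=False, model_complexity=1) as pose:
            while True:
                ok, bgr = cap.read()
                if not ok:
                    break
                h, w = bgr.shape[:2]
                # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
                result = pose.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
                if result.pose_landmarks is None:
                    frames.append(None)  # detection failed on this frame
                else:
                    frames.append([(lm.x * w, lm.y * h)
                                   for lm in result.pose_landmarks.landmark])
        cap.release()
        return frames

    landmarks = extract_landmarks("landing_side_view.mp4")  # hypothetical file name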
In sports medicine, this study applies the approach to the Landing Error Scoring System (LESS). LESS is a method for detecting faulty movement patterns in athletes and can serve as an early warning to help prevent anterior cruciate ligament (ACL) injuries caused by poor landing mechanics. Historically, however, LESS has been scored by human raters, which is time- and labor-intensive and difficult to scale to large screening programs. This study therefore uses computer vision and motion analysis to automate the LESS assessment: an algorithm detects two key frames in the side- and front-view videos, the instant the feet first touch the floor after the jump (Initial Contact) and the instant of maximal knee bend before jumping up again (Maximal Knee Flexion), and each item is then scored with rule-based and machine-learning methods.
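For intuition, here is a minimal sketch of an Initial Contact search in this spirit, assuming a per-frame ankle-height series has already been extracted from the pose landmarks. The 3-frame rolling window echoes the method described in Chapter 3, but the threshold and stopping rule are illustrative assumptions, not the thesis's tuned procedure.

    # Sketch: find the first frame after the drop where ankle motion nearly stops.
    import numpy as np

    def initial_contact_frame(ankle_y, window=3, thresh=2.0):
        """ankle_y: per-frame ankle height in pixels (image Y grows downward)."""
        ankle_y = np.asarray(ankle_y, dtype=float)
        diff = np.abs(np.diff(ankle_y))                    # frame-to-frame motion
        smooth = np.convolve(diff, np.ones(window) / window, mode="valid")
        fastest = int(np.argmax(smooth))                   # mid-flight: largest motion
        for i in range(fastest, len(smooth)):
            if smooth[i] < thresh:                         # motion stops: foot on floor
                return i + window - 1                      # map back to a frame index
        return len(ankle_y) - 1                            # fallback: last frame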
The results show that, taking human annotation as ground truth, the mean error of the computed Initial Contact key frame is 14.1 ms for front-view video and 11.5 ms for side-view video; because the angle cannot be identified accurately by eye, Maximal Knee Flexion is taken directly from the Mediapipe computation as ground truth. For individual items with manually selected key frames, scoring once per three jump trials, each of the 17 items has a root-mean-square error below 0.54 points, a mean absolute error below 0.30 points, and an accuracy above 70%; with automatically selected key frames, again scoring once per three trials, all items have an RMSE below 0.53 points, an MAE below 0.29 points, and an accuracy above 71%. For total scores with manually selected key frames (one score per three trials), the MAE is 1.03 points, the correlation coefficient 0.65, sensitivity 0.85, and specificity 0.69; with automatically selected key frames, the MAE is 1.14 points, the correlation coefficient 0.54, sensitivity 0.82, and specificity 0.61. These results show that key-frame selection accuracy affects the final total score, but overall the system can still provide physicians with objective and reliable scores.
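The agreement statistics quoted above can be reproduced with a few lines of NumPy, as sketched below; auto and truth stand for system and human scores, and the binary high-risk cutoff used for sensitivity and specificity is an assumed illustrative value, since the abstract does not state one.

    # Sketch: item-level and total-score agreement metrics (assumed risk cutoff).
    import numpy as np

    def item_metrics(auto, truth):
        auto, truth = np.asarray(auto, float), np.asarray(truth, float)
        rmse = float(np.sqrt(np.mean((auto - truth) ** 2)))   # root-mean-square error
        mae = float(np.mean(np.abs(auto - truth)))            # mean absolute error
        acc = float(np.mean(auto == truth))                   # exact agreement rate
        return rmse, mae, acc

    def screening_metrics(auto_total, truth_total, cutoff=5.0):
        auto_total = np.asarray(auto_total, float)
        truth_total = np.asarray(truth_total, float)
        pred, true = auto_total >= cutoff, truth_total >= cutoff
        sensitivity = float(pred[true].mean())      # high-risk trials correctly flagged
        specificity = float((~pred[~true]).mean())  # low-risk trials correctly passed
        r = float(np.corrcoef(auto_total, truth_total)[0, 1])
        return sensitivity, specificity, r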
In rehabilitation medicine, this study developed an automated assessment system for upper-extremity motor rehabilitation. Besides addressing the subjectivity of current one-on-one rehabilitation scoring, it allows patients in remote areas to be assessed at home, saving time and manpower while providing objective and reliable results. The post-stroke assessment tool used here is the Arm Capacity and Movement Test (ArmCAM), a 10-item scale with solid reliability and validity that was originally designed to be scored by a rater watching video, which makes automation feasible. The system uses Mediapipe to recognize the user's movements and adds YOLO object detection to help judge finer joint positions and whether a movement has been completed. After the system was built, 22 subjects were recruited at a hospital for testing; every item agreed well with the therapist's scores (kappa > 0.44), and the correlation coefficient of total scores reached 0.99. This shows that, by combining different skeleton features and object-detection techniques for each ArmCAM item, the study successfully automated all 10 movements and met the system's goals.
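Many of these per-item rules reduce to a joint angle computed from three landmarks and compared against a threshold. The sketch below shows such a check for an elbow angle (shoulder-elbow-wrist); the sample coordinates and the threshold are illustrative assumptions.

    # Sketch: rule-based joint-angle check on MediaPipe Pose landmarks.
    import numpy as np

    def joint_angle(a, b, c):
        """Angle at vertex b, in degrees, for 2D points a, b, c."""
        ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
        cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
        return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

    # MediaPipe Pose indices 12, 14, 16 are the right shoulder, elbow, and wrist.
    shoulder, elbow, wrist = (0.55, 0.30), (0.60, 0.45), (0.58, 0.60)  # sample values
    raised = joint_angle(shoulder, elbow, wrist) < 120.0  # assumed threshold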
In summary, this study successfully applied Mediapipe to the Landing Error Scoring System and to post-stroke upper-extremity rehabilitation assessment, automating evaluation processes that previously consumed substantial time and labor and giving physicians and therapists objective, reliable results.
Table of Contents
Abstract (Chinese)
Abstract (English)
Chapter 1  Introduction
1.1 Human Pose Estimation
1.1.1 VICON
1.1.2 Kinect
1.1.3 RGB Cameras
1.2 Automated Landing Error Scoring System
1.2.1 Background
1.2.2 Motivation
1.2.3 Objectives
1.3 Automated Stroke Upper Extremity Motor Assessment System
1.3.1 Background
1.3.2 Motivation
1.3.3 Objectives
Chapter 2  Literature Review
2.1 Comparison of Human Pose Estimation Models
2.2 Automated Landing Error Scoring System
2.2.1 Accuracy of the LESS for Anterior Cruciate Ligament Injury Risk Screening
2.2.2 Literature on Automated LESS Scoring
2.2.3 Machine Learning Overview and Related Applications
2.3 Automated Stroke Upper Extremity Rehabilitation Assessment System
2.3.1 Upper Extremity Rehabilitation Assessment Tools
2.3.2 Related Application Literature
2.3.3 Object Detection Models
Chapter 3  System Architecture and Methods
3.1 Automated Landing Error Scoring System
3.1.1 Hardware and Software Environment
3.1.2 Data Acquisition
3.1.3 LESS Scoring Criteria
3.1.4 Research Workflow
3.1.5 Video Frame Interpolation
3.1.6 Key Frame Extraction
3.1.7 Scoring Methods for Each LESS Item
3.2 Automated Stroke Upper Extremity Rehabilitation Assessment System
3.2.1 Hardware and Software Environment
3.2.2 Data Acquisition
3.2.3 Research Tools
3.2.4 System Workflow
3.2.5 Scoring Methods for Each Item
Chapter 4  Experimental Results and Discussion
4.1 Automated Landing Error Scoring System
4.1.1 Experiment 1: Effect of Video Frame Interpolation on Initial Contact Selection
4.1.2 Experiment 2: Thresholds and Key Point Choices for Locating Initial Contact
4.1.3 Experiment 3: Initial Contact Selection Methods
4.1.4 Experiment 4: Threshold Settings for Scoring Items
4.1.5 Experiment 5: Machine Learning Models and Feature Selection
4.1.6 Experiment 6: Analysis of Scoring Results
4.2 Automated Stroke Upper Extremity Rehabilitation Assessment System
4.2.1 Assessment Equipment Design
4.2.2 Analysis of Experimental Results
4.2.3 User Interface
4.2.4 Technology Acceptance Model Questionnaire Analysis
Chapter 5  Conclusions and Future Work
5.1 Automated Landing Error Scoring System
5.1.1 Conclusions
5.1.2 Future Work
5.2 Automated Stroke Upper Extremity Rehabilitation Assessment System
5.2.1 Conclusions
5.2.2 Future Work
References
Appendix 1


List of Figures
Figure 1-1: VICON capture environment and reflective-marker placement [4]
Figure 1-2: Kinect [5]
Figure 1-3: Kinect key points [6]
Figure 1-4: Location of the anterior cruciate ligament [8]
Figure 2-1: OpenPose (a) Body, (b) Hand, (c) Face [29]
Figure 2-2: OpenPose architecture [28]
Figure 2-3: Mediapipe (a) Pose, (b) Hands, (c) Face [30]
Figure 2-4: Detector-tracker setup [31]
Figure 2-5: BlazePose tracking network architecture [31]
Figure 2-6: One-stage detection pipeline [62]
Figure 2-7: Two-stage detection pipeline [62]
Figure 3-1: Double-leg LESS test setup
Figure 3-2: Automated LESS research workflow
Figure 3-3: Video frame interpolation workflow
Figure 3-4: Ankle X-axis difference during the jump
Figure 3-5: IC computation: (a) per-frame ankle X-axis position, (b) X-axis difference between adjacent frames, (c) 3-frame rolling window from step (3)
Figure 3-6: "Initial ankle positions" differing by more than twice the "toe-to-heel distance"
Figure 3-7: Mediapipe detection failures
Figure 3-8: Ankle height below twice the "initial ankle-to-heel distance"
Figure 3-9: Illustration of step 6
Figure 3-10: Knee joint angle change before and after landing
Figure 3-11: Computing the knee joint angle difference
Figure 3-12: Shoulder-to-knee distance change after foot contact
Figure 3-13: Shoulder-to-knee distance curve
Figure 3-14: Hip-height curves from (a) the side camera and (b) the front camera
Figure 3-15: (a) Key points used for item 1 [30], (b) side-view knee joint angle
Figure 3-16: (a) Key points used for item 2 [30], (b) side-view hip joint angle
Figure 3-17: (a) Key points used for item 3 [30], (b) side-view thigh angle relative to the trunk
Figure 3-18: (a) Key points used for item 4 [30], (b) foot angle at ground contact
Figure 3-19: (a) Key points used for item 5 [30], (b) midfoot computation
Figure 3-20: (a) Key points used for item 6 [30], (b) angle of the line between the shoulders
Figure 3-21: Key points used for item 7
Figure 3-22: Key points used for item 8
Figure 3-23: (a) Key points used for item 9 [30], (b) foot rotation angle
Figure 3-24: (a) Key points used for item 10 [30], (b) foot rotation angle
Figure 3-25: (a) Key points used for item 12 [30], (b) knee flexion angle
Figure 3-26: (a) Key points used for item 13 [30], (b) hip joint angle
Figure 3-27: (a) Key points used for item 14 [30], (b) thigh angle relative to the trunk
Figure 3-28: (a) Key points used for item 15 [30], (b) knee abduction angle
Figure 3-29: Assessment equipment
Figure 3-30: Assessment table mat
Figure 3-31: Mediapipe ring-finger detection error
Figure 3-32: YOLOv4 fingertip detection
Figure 3-33: Jar-lid states: (a) lid not loosened, (b) lid turned, (c) lid opened
Figure 3-34: System workflow architecture
Figure 3-35: "Hand on top of the head" movement sequence
Figure 3-36: Item 1 key points, Mediapipe (a) Hand, (b) Pose, (c) Face
Figure 3-37: Incorrect palm placement: (a) palm facing forward, (b) palm vertical
Figure 3-38: "Holding a magazine between upper arm and side of body" movement sequence
Figure 3-39: Key points used for item 2
Figure 3-40: "Sliding a towel" movement sequence
Figure 3-41: Key points used for item 3
Figure 3-42: "Pouring" movement sequence
Figure 3-43: Computing the cup angle
Figure 3-44: Lifting the cup by pinching its rim
Figure 3-45: Key points used for item 4
Figure 3-46: "Opening a jar" movement sequence
Figure 3-47: Jar-lid states: (a) lid not loosened, (b) lid turned, (c) lid opened
Figure 3-48: "Grasping and lifting a can to eye level" movement sequence
Figure 3-49: Item 6 key points, Mediapipe (a) Hand, (b) Pose, (c) Face
Figure 3-50: Can not lifted
Figure 3-51: Can lifted
Figure 3-52: "Grasping and inverting a soup can" movement sequence
Figure 3-53: Item 7 key points, Mediapipe (a) Hand, (b) Pose
Figure 3-54: Can states: (a) not picked up, (b) picked up
Figure 3-55: Vertical arm angle
Figure 3-56: "Stacking coins" movement sequence
Figure 3-57: Key points used for item 8
Figure 3-58: Coin-stacking sequence: (a) initial state, (b) one coin stacked, (c) two coins stacked
Figure 3-59: Overlapping coins
Figure 3-60: Sliding a coin to the table edge to pick it up
Figure 3-61: "Manipulating coins" movement sequence
Figure 3-62: Marker for a coin moved to the fingertips
Figure 3-63: Markers for a coin placed on the fingertips and a dropped coin
Figure 3-64: "Finger opposition" movement sequence
Figure 3-65: Two fingertips touching, with overlapping fingertip detection boxes
Figure 3-66: Patient with poor finger control or unable to open the fingers
Figure 4-1: Key-frame detection workflow of [42]
Figure 4-2: Ground-truth score distribution, one score per single jump trial
Figure 4-3: Ground-truth score distribution, one score per three jump trials
Figure 4-4: Failed recognition of colored paper strips on the cup
Figure 4-5: Recognition of dot stickers on the cup
Figure 4-6: Turning the jar lid
Figure 4-7: Opening the jar lid
Figure 4-8: Picking up the can
Figure 4-9: Starting posture
Figure 4-10: Head lowered while performing the movement
Figure 4-11: (a) Elbow angle of 132.16° with the hand lowered, (b) elbow angle of 105.42° with the hand raised
Figure 4-12: (a) Elbow angle of 146.68°, (b) elbow angle of 171.25° (right image)
Figure 4-13: Object-detection labels: (a) success, (b) X, (c) drop
Figure 4-14: Coin detection error
Figure 4-15: Skeleton detection error caused by wearing a face mask
Figure 4-16: Home page
Figure 4-17: Registration page
Figure 4-18: Real-time analysis page
Figure 4-19: Scoring result display
Figure 4-20: Entering information in the import interface
Figure 4-21: Import interface
Figure 4-22: Assessment history of a single subject
Figure 4-23: Technology Acceptance Model relationship diagram
Figure 5-1: Mediapipe left/right foot detection error
Figure 5-2: Facial landmark changes with (a) head not lowered and (b) head lowered


List of Tables
Table 3-1: Software and hardware environment for the automated LESS
Table 3-2: Double-leg LESS scoring items [12]
Table 3-3: Software and hardware environment for the automated stroke upper extremity rehabilitation assessment system
Table 4-1: Initial Contact results with and without video frame interpolation
Table 4-2: Accuracy of locating Initial Contact with different angle-difference thresholds
Table 4-3: Accuracy of different frame choices for Initial Contact
Table 4-4: Accuracy of obtaining Initial Contact from front/side correlation coefficients computed with different key points
Table 4-5: Step-by-step results of automatic front/side Initial Contact selection
Table 4-6: Differences between automatically selected front/side Initial Contact and ground truth, compared with previous work
Table 4-7: Scoring results for item 2 under different thresholds
Table 4-8: Scoring results for item 3 under different thresholds
Table 4-9: Scoring results for item 5 under different thresholds
Table 4-10: Scoring results for item 7 under different thresholds
Table 4-11: Scoring results for item 8 under different thresholds
Table 4-12: Scoring results for item 15 under different thresholds
Table 4-13: 5-fold cross-validation data distribution, one score per three jump trials
Table 4-14: Cross-validation data counts per score for item 11
Table 4-15: Cross-validation results for item 11 with different machine learning methods
Table 4-16: Cross-validation data counts per score for item 16
Table 4-17: Test results for item 16 with different feature extraction methods under different algorithms
Table 4-18: Cross-validation results for item 16 with feature extraction method (3) under different machine learning methods
Table 4-19: Cross-validation data counts per score for item 17
Table 4-20: Test results for item 17 with different feature extraction methods under different algorithms
Table 4-21: Cross-validation results for item 17 with feature extraction method (2) under different machine learning methods
Table 4-22: Individual-item scores with manually selected key frames, one score per single jump trial
Table 4-23: Individual-item scores with manually selected key frames, one score per three jump trials
Table 4-24: LESS total scores with manually selected key frames versus the imbalanced-dataset results of [42]
Table 4-25: LESS total scores with manually selected key frames versus the balanced-dataset results of [42]
Table 4-26: Individual-item scores with automatically selected key frames, one score per single jump trial
Table 4-27: Individual-item scores with automatically selected key frames, one score per three jump trials
Table 4-28: LESS total scores with automatically selected key frames versus the imbalanced-dataset results of [42]
Table 4-29: LESS total scores with automatically selected key frames versus the balanced-dataset results of [42]
Table 4-30: Comparison of manual and automatic key-frame selection
Table 4-31: Data counts and assessment results
Table 4-32: Technology Acceptance Model questionnaire
Table 4-33: Cronbach's α results
Table 4-34: Revised Cronbach's α results
Table 4-35: Mean and standard deviation of each Technology Acceptance Model questionnaire item
[1] A. Rohan, M. Rabah, T. Hosny and S.-H. Kim, "Human Pose Estimation-Based Real-Time Gait Analysis Using Convolutional Neural Network," in IEEE Access, vol. 8, pp. 191542-191550, 2020.
[2] A. Rahmadani, B. S. Bayu Dewantara and D. M. Sari, "Human Pose Estimation for Fitness Exercise Movement Correction," 2022 International Electronics Symposium (IES), Surabaya, Indonesia, 2022, pp. 484-490.
[3] Tous, R. (2023). "Pictonaut: movie cartoonization using 3D human pose estimation and GANs." Multimedia Tools and Applications: 1-15.
[4] “Vicon Motion Capture System (54 Vicon V-16 Vantage)” Max Planck Institute for Intelligent Systems. https://ps.is.mpg.de/pages/motion-capture
[5] “Hacking the Kinect.” Adafruit. https://learn.adafruit.com/hacking-the-kinect/overview
[6] Ousmer, Mehdi & Vanderdonckt, Jean & Buraga, Sabin. (2019). An ontology for reasoning on body-based gestures. 1-6. 10.1145/3319499.3328238.
[7] L. Yang, L. Zhang, H. Dong, A. Alelaiwi and A. E. Saddik, "Evaluating and Improving the Depth Accuracy of Kinect for Windows v2," in IEEE Sensors Journal, vol. 15, no. 8, pp. 4275-4285, Aug. 2015.
[8] "認識前十字韌帶斷裂:構造與功能." 骨科線上. https://reurl.cc/gGv9z4
[9] 邱致皓 (2024.04.01). "前十字韌帶斷裂." 長庚醫訊. https://www.cgmh.org.tw/cgmn/category.asp?id_seq=1703167
[10] 孫千惠, et al. (2023). "女性運動員是否有較高運動傷害發生率?-重思運動傷害防護策略." 中華體育季刊(預刊文章): 1-15.
[11] Ardern CL, Taylor NF, Feller JA, Webster KE. Return-to-Sport Outcomes at 2 to 7 Years After Anterior Cruciate Ligament Reconstruction Surgery. The American Journal of Sports Medicine. 2012;40(1):41-48.
[12] Padua DA, Marshall SW, Boling MC, Thigpen CA, Garrett WE, Beutler AI. The Landing Error Scoring System (LESS) Is a Valid and Reliable Clinical Assessment Tool of Jump-Landing Biomechanics: The JUMP-ACL Study. The American Journal of Sports Medicine. 2009;37(10):1996-2002.
[13] Myer, G. D., et al. (2008). "Tuck jump assessment for reducing anterior cruciate ligament injury risk." Athletic therapy today: the journal for sports health care professionals 13(5): 39.
[14] Myer, G. D., et al. (2013). "Clinic-based algorithm to identify female athletes at risk for anterior cruciate ligament injury." The American journal of sports medicine 41(1): NP1-NP6.
[15] Nilstad, A., et al. (2014). "Physiotherapists can identify female football players with high knee valgus angles during vertical drop jumps using real-time observational screening." journal of orthopaedic & sports physical therapy 44(5): 358-365.
[16] McLean SG, Walker K, Ford KR, et al. Evaluation of a two dimensional analysis method as a screening and evaluation tool for anterior cruciate ligament injury. Br J Sports Med. 2005;39(6):355–62.
[17] Onate, J., et al. (2010). "Expert versus novice interrater reliability and criterion validity of the landing error scoring system." Journal of sport rehabilitation 19(1): 41-56.
[18] 邱弘毅. "腦中風之現況與流行病學特徵." 社團法人台灣腦中風學會. https://www.stroke.org.tw/GoWeb2/include/index.php?Page=5-1&paper02=4156067525bc96c3a9ee9b
[19] 吳雅瑜、陳信隆. "台灣每年3至5萬人腦中風 醫師:中風呈年輕化趨勢." 公視新聞網. https://news.pts.org.tw/article/586941
[20] 余勁毅. "年輕型腦中風不再罕見." 東元綜合醫院網站. https://www.tyh.com.tw/health/f17.html
[21] "腦中風的復健治療." 台中榮民總醫院網站. https://reurl.cc/gGv9xN
[22] 邱顯傑. "長照2.0到底有多缺人?照護員不足,醫護人員也不足--治療師告訴你,不願進場的三大理由." https://www.ilong-termcare.com/Article/Detail/832
[23] 國家發展委員會, 人口推估查詢系統. https://www.ndc.gov.tw/Content_List.aspx?n=84223C65B6F94D72
[24] Yang CL, Simpson LA, Eng JJ. A Pilot Study for Remote Evaluation of Upper Extremity Motor Function After Stroke: The Arm Capacity and Movement Test (ArmCAM). Am J Occup Ther. 2023 Jan 1;77(1):7701205020.
[25] A. Toshev and C. Szegedy, "DeepPose: Human Pose Estimation via Deep Neural Networks," 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 2014, pp. 1653-1660.
[26] Xiao, B., et al. (2018). Simple baselines for human pose estimation and tracking. Proceedings of the European conference on computer vision (ECCV).
[27] Pavlakos, G., et al. (2018). Ordinal Depth Supervision for 3D Human Pose Estimation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society: 7307-7316.
[28] Z. Cao, G. Hidalgo, T. Simon, S. -E. Wei and Y. Sheikh, "OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 172-186, 1 Jan. 2021.
[29] Zabala, Unai & Rodriguez Rodriguez, Igor & Martinez-Otzeta, Jose Maria & Lazkano, Elena. (2022). Modeling and evaluating beat gestures for social robots. Multimedia Tools and Applications. 81. 10.1007/s11042-021-11289-x.
[30] MediaPipe. Google for Developers. Accessed November 28, 2023. https://developers.google.com/mediapipe
[31] Bazarevsky, V., et al. (2020). "Blazepose: On-device real-time body pose tracking." arXiv preprint arXiv:2006.10204.
[32] Vidanpathirana, M., et al. (2020). "Tracking and frame-rate enhancement for real-time 2D human pose estimation." The Visual Computer 36(7): 1501-1519.
[33] Kim, J.-W., et al. (2023). "Human Pose Estimation Using MediaPipe Pose and Optimization Method Based on a Humanoid Model." Applied Sciences 13(4): 2700.
[34] Chung, J.-L., et al. (2022). "Comparative Analysis of Skeleton-Based Human Pose Estimation." Future Internet 14(12): 380.
[35] Hewett TE, Myer GD, Ford KR, et al. Biomechanical Measures of Neuromuscular Control and Valgus Loading of the Knee Predict Anterior Cruciate Ligament Injury Risk in Female Athletes: A Prospective Study. The American Journal of Sports Medicine. 2005;33(4):492-501.
[36] Smith HC, Johnson RJ, Shultz SJ, et al. A Prospective Evaluation of the Landing Error Scoring System (LESS) as a Screening Tool for Anterior Cruciate Ligament Injury Risk. The American Journal of Sports Medicine. 2012;40(3):521-526.
[37] Padua, D. A., et al. (2015). "The Landing Error Scoring System as a Screening Tool for an Anterior Cruciate Ligament Injury–Prevention Program in Elite-Youth Soccer Athletes." Journal of Athletic Training 50(6): 589-595.
[38] Hanzlíková I, Hébert-Losier K. Is the Landing Error Scoring System Reliable and Valid? A Systematic Review. Sports Health. 2020;12(2):181-188.
[39] Hanzlíková I, Hébert-Losier K. Is the Landing Error Scoring System Reliable and Valid? A Systematic Review. Sports Health. 2020;12(2):181-188.
[40] Mauntel TC, Padua DA, Stanley LE, Frank BS, DiStefano LJ, Peck KY, Cameron KL, Marshall SW. Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System. J Athl Train. 2017 Nov;52(11):1002-1009.
[41] Dar G, Yehiel A, Cale' Benzoor M. Concurrent criterion validity of a novel portable motion analysis system for assessing the landing error scoring system (LESS) test. Sports Biomech. 2019 Aug;18(4):426-436.
[42] Hébert-Losier, K., et al. (2020). "The ‘DEEP’ Landing Error Scoring System." Applied Sciences 10(3): 892.
[43] Nazamil, N., Hamid, N.H.A., Sharir, R., Ali, A.M., Osman, R. (2022). Detecting Risk of ACL Injury Using CNN-Expert System. In: Alfred, R., Lim, Y. (eds) Proceedings of the 8th International Conference on Computational Science and Technology. Lecture Notes in Electrical Engineering, vol 835. Springer, Singapore.
[44] S. S. Mazlan, M. Z. Ayob and Z. A. Kadir Bakti, "Anterior cruciate ligament (ACL) injury classification system using support vector machine (SVM)," 2017 International Conference on Engineering Technology and Technopreneurship (ICE2T), Kuala Lumpur, Malaysia, 2017, pp. 1-5.
[45] V. Mandalapu, N. Homdee, J. M. Hart, J. Lach, S. Bodkin and J. Gong, "Developing Computational Models for Personalized ACL Injury Classification," 2019 IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Chicago, IL, USA, 2019, pp. 1-4
[46] Taborri, J., et al. (2021). "A Machine-Learning Approach to Measure the Anterior Cruciate Ligament Injury Risk in Female Basketball Players." Sensors 21(9): 3141.
[47] Fugl-Meyer AR, Jääskö L, Leyman I, Olsson S, Steglind S. The post-stroke hemiplegic patient. 1. a method for evaluation of physical performance. Scand J Rehabil Med. 1975;7(1):13-31.
[48] Taub, Edward, et al. "Motor activity log (mal) manual." UAB Training for CI Therapy 1 (2011): 18.
[49] Yozbatiran N, Der-Yeghiaian L, Cramer SC. A Standardized Approach to Performing the Action Research Arm Test. Neurorehabilitation and Neural Repair. 2008;22(1):78-90.
[50] Duncan PW, Goldstein LB, Matchar D, Divine GW, Feussner J. Measurement of motor recovery after stroke. Outcome assessment and sample size requirements. Stroke. 1992 Aug;23(8):1084-9.
[51] See J, Dodakian L, Chou C, et al. A Standardized Approach to the Fugl-Meyer Assessment and Its Implications for Clinical Trials. Neurorehabilitation and Neural Repair. 2013;27(8):732-741.
[52] "VR遊戲裝置讓中風復健不再枯燥艱辛." Ankecare創新照顧網站. https://www.ankecare.com/article/226-15063
[53] Saposnik, Gustavo & Teasell, Robert & Mamdani, Muhammad & Hall, Judith & McILROY, William & Cheung, Donna & Thorpe, Kevin & Cohen, Leonardo & Bayley, Mark. (2010). Effectiveness of Virtual Reality Using Wii Gaming Technology in Stroke Rehabilitation A Pilot Randomized Clinical Trial and Proof of Principle. Stroke; a journal of cerebral circulation. 41. 1477-84. 10.1161/STROKEAHA.110.584979.
[54] S. Lee, Y. -S. Lee and J. Kim, "Automated Evaluation of Upper-Limb Motor Function Impairment Using Fugl-Meyer Assessment," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 1, pp. 125-134, Jan. 2018
[55] Dutta D, Aruchamy S, Mandal S, Sen S. Poststroke Grasp Ability Assessment Using an Intelligent Data Glove Based on Action Research Arm Test: Development, Algorithms, and Experiments. IEEE Trans Biomed Eng. 2022 Feb;69(2):945-954.
[56] Cóias, A. R., et al. (2022). "A low-cost virtual coach for 2D video-based compensation assessment of upper extremity rehabilitation exercises." Journal of NeuroEngineering and Rehabilitation 19(1): 83.
[57] 劉肇川,毛文婷,陳炫. (2022). "Development of an OpenPose-Based Assessment Aid System for Stroke Physical Rehabilitation Training." Artificial Intelligence and Robotics Research 11(03): 299-307.
[58] Ahmed, Tamim, et al. "ASAR Dataset and Computational Model for Affective State Recognition During ARAT Assessment for Upper Extremity Stroke Survivors." Companion Publication of the 25th International Conference on Multimodal Interaction. 2023.
[59] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 779-788, doi: 10.1109/CVPR.2016.91.
[60] Li, Y., et al. (2018). "Research on a Surface Defect Detection Algorithm Based on MobileNet-SSD." Applied Sciences 8(9): 1678.
[61] T. -Y. Lin, P. Goyal, R. Girshick, K. He and P. Dollár, "Focal Loss for Dense Object Detection," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 2999-3007
[62] “Two-stage vs One-stage Detectors.“ https://github.com/yehengchen/Object-Detection-and-Tracking/blob/master/Two-stage%20vs%20One-stage%20Detectors.md
[63] A. C. Rios, D. H. dos Reis, R. M. da Silva, M. A. de Souza Leite Cuadros and D. F. T. Gamarra, "Comparison of the YOLOv3 and SSD MobileNet v2 Algorithms for Identifying Objects in Images from an Indoor Robotics Dataset," 2021 14th IEEE International Conference on Industry Applications (INDUSCON), São Paulo, Brazil, 2021, pp. 96-101
[64] M. R. Fairuzi and F. Y. Zulkifli, "Performance Analysis of YOLOv4 and SSD Mobilenet V2 for Foreign Object Debris (FOD) Detection at Airport Runway Using Custom Dataset," 2021 17th International Conference on Quality in Research (QIR): International Symposium on Electrical and Computer Engineering, Depok, Indonesia, 2021, pp. 11-16
[65] H. Lee, T. Kim, T. -y. Chung, D. Pak, Y. Ban and S. Lee, "AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 5315-5324
[66] Davis, Fred D. "Technology acceptance model: TAM." Al-Suqri, MN, Al-Aufi, AS: Information Seeking Behavior and Technology Adoption (1989): 205-219.
[67] 歐勁麟(2012)。以科技接受模式探討智慧型手機購買之行為意圖-以iPhone手機為例。﹝碩士論文。國立高雄應用科技大學﹞臺灣博碩士論文知識加值系統。
[68] Boden BP, Torg JS, Knowles SB, Hewett TE. Video Analysis of Anterior Cruciate Ligament Injury: Abnormalities in Hip and Ankle Kinematics. The American Journal of Sports Medicine. 2009;37(2):252-259.
[69] Krosshaug T, Nakamae A, Boden BP, et al. Mechanisms of Anterior Cruciate Ligament Injury in Basketball: Video Analysis of 39 Cases. The American Journal of Sports Medicine. 2007;35(3):359-367.
[70] O’Connor, M.L. (2015) The Development of the Single-Leg Landing Error Scoring System (SL-LESS) for Lower Extremity Movement Screening. Master Thesis, University of Wisconsin-Milwaukee, Milwaukee.
Electronic full text (publicly available on the Internet from 2026-08-31).