
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 蕭博元
Author (English): HSIAO, PO-YUAN
Title: 運用深度學習與物聯網技術發展蹲踞式起跑培訓分析系統
Title (English): Developing a Crouch Starting Analysis System by Using Deep Learning Algorithm and IoT Technique
Advisor: 胡念祖
Advisor (English): HU, NIAN-ZE
Oral defense committee: 蔡榮發, 林明華, 吳純慧
Oral defense committee (English): TSAI, JUNG-FA; LIN, MING-HUA; WU, CHUN-HUI
Oral defense date: 2018-07-13
Degree: Master
Institution: 國立虎尾科技大學 (National Formosa University)
Department: 資訊管理系碩士班 (Master's Program, Department of Information Management)
Discipline: Computer Science
Field: General Computer Science
Thesis type: Academic thesis
Year of publication: 2018
Graduation academic year: 106
Language: Chinese
Number of pages: 51
Keywords (Chinese): 運動科學, 物聯網, 深度學習, Mask R-CNN, 物件追蹤
Keywords (English): Sports Science, IoT, Deep Learning, Mask R-CNN, Object Tracking
Usage statistics:
  • Cited by: 0
  • Views: 398
  • Downloads: 15
  • Bookmarked: 0
Abstract (translated from the Chinese):
Building on Artificial Intelligence (AI) and the Internet of Things (IoT), this study applies deep learning and IoT techniques to build a training analysis system for the crouch start.
Traditionally, coaches observe an athlete's force, posture, and speed by eye and draft training plans from their own experience and judgment, which is rather subjective. Existing sports-science analysis systems are more objective, but they are expensive and require considerable lead time.
This study integrates force sensors with data acquisition (DAQ) and a signal amplifier, calibrating the instruments to improve accuracy, in order to collect the athlete's foot pressure. OpenCV is used to detect instantaneous speed, the data are stored in a relational database, and a web service is developed to present the results visually. In addition, with a high-speed camera, Docker is used to set up the environment and the Detectron framework runs Mask R-CNN with a ResNet101-FPN backbone to recognize foot trajectories and analyze changes in stride length and stride frequency. The traditional approach is to film alongside the athlete and compute the average stride length and stride frequency, whereas the method in this study achieves higher precision. The results show that as athletes become fatigued, stride length increases while stride frequency decreases; once coaches know this, training can be adjusted to target stride frequency and raise speed.
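As a rough illustration of the speed-detection step, the following Python sketch estimates instantaneous speed with OpenCV by tracking the largest moving region frame by frame; the background-subtraction approach, the video path, and the pixels-per-metre calibration factor are assumptions for illustration, since the exact OpenCV pipeline is not published in this record.

```python
# Minimal sketch (not the thesis code): estimate instantaneous speed from a
# fixed-camera clip by tracking the centroid of the largest moving blob.
# Assumes OpenCV 4.x; VIDEO_PATH and PIXELS_PER_METRE are hypothetical.
import cv2

VIDEO_PATH = "sprint.mp4"          # hypothetical input clip
PIXELS_PER_METRE = 120.0           # assumed camera calibration factor

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

prev_centroid = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    athlete = max(contours, key=cv2.contourArea)            # largest mover
    m = cv2.moments(athlete)
    if m["m00"] == 0:
        continue
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    if prev_centroid is not None:
        dx = centroid[0] - prev_centroid[0]
        dy = centroid[1] - prev_centroid[1]
        pixels_moved = (dx * dx + dy * dy) ** 0.5
        speed = pixels_moved / PIXELS_PER_METRE * fps       # metres per second
        print(f"instantaneous speed ~ {speed:.2f} m/s")
    prev_centroid = centroid
cap.release()
```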
The system is built with Arduino and Raspberry Pi, which keeps the construction cost low while maintaining highly accurate analysis results. It can feed back the athlete's foot-pressure distribution, segmented velocity curves, posture, and other data in real time, and displays comparisons across training sessions on mobile devices. The system was developed jointly with the track-and-field team of a university of science and technology in central Taiwan; the results were incorporated into training to adjust the training courses, and the team ultimately achieved good results at the National Intercollegiate Athletic Games.
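The web-service and mobile-comparison part can likewise be sketched in a few lines; the Flask framework, SQLite file, and table schema below are illustrative assumptions, not the implementation described in the thesis.

```python
# Minimal sketch (assumed stack): expose one training session's stored
# foot-pressure readings as JSON so a mobile page can plot and compare them.
import sqlite3

from flask import Flask, jsonify, request

DB_PATH = "training.db"            # hypothetical relational database file
app = Flask(__name__)

@app.route("/pressure")
def session_pressure():
    # e.g. GET /pressure?session_id=3 returns that session's readings
    session_id = int(request.args.get("session_id", 1))
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        "SELECT timestamp_ms, left_foot, right_foot "
        "FROM pressure_readings WHERE session_id = ? ORDER BY timestamp_ms",
        (session_id,),
    ).fetchall()
    conn.close()
    return jsonify({
        "session_id": session_id,
        "readings": [{"t_ms": t, "left": l, "right": r} for t, l, r in rows],
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```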

Abstract (English):
This study combines Artificial Intelligence (AI) and Internet of Things (IoT) technology with deep learning to establish a training analysis system for the crouch start.
In the past, coaches observed athletes' posture, speed, and force by eye and drafted training plans from their own experience and ideas, which is rather subjective. Sports-science analysis instruments are more objective, but they are expensive and take a long time to set up.
In this study, we integrate force sensors, DAQ, and a signal amplifier, and improve the sensors' accuracy through calibration to collect the athletes' foot force. OpenCV is used to detect the athletes' speed, and the data are stored in a relational database. We also built a web service to manage and visualize the results. In addition, with a high-speed camera, Docker was used to set up the environment, and the Detectron framework was used to run Mask R-CNN with a ResNet101-FPN backbone to identify foot trajectories and analyze the athletes' stride length and stride frequency. The traditional way is to calculate the average stride length and stride frequency by filming alongside the athletes, but our method obtains higher accuracy. The results show that when athletes are tired, stride length increases while stride frequency decreases; knowing this, coaches can adjust the training course accordingly.
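The stride-analysis step can be illustrated as follows. In the thesis the foot positions come from Mask R-CNN (ResNet101-FPN) masks produced with Detectron; here the contact list, the pixels-per-metre factor, and the data layout are illustrative assumptions only.

```python
# Minimal sketch (not the thesis code): given foot ground-contact positions
# recovered from segmentation masks, compute stride length and stride
# frequency for each pair of successive contacts.
from dataclasses import dataclass
from typing import List, Tuple

PIXELS_PER_METRE = 120.0           # assumed camera calibration

@dataclass
class FootContact:
    time_s: float                  # contact time (frame index / fps)
    x_px: float                    # horizontal foot position, in pixels

def stride_metrics(contacts: List[FootContact]) -> List[Tuple[float, float]]:
    """Return (stride_length_m, stride_frequency_hz) per successive contact pair."""
    metrics = []
    for prev, cur in zip(contacts, contacts[1:]):
        dt = cur.time_s - prev.time_s
        if dt > 0:                                  # ignore out-of-order contacts
            length_m = abs(cur.x_px - prev.x_px) / PIXELS_PER_METRE
            metrics.append((length_m, 1.0 / dt))
    return metrics

# Hypothetical contacts extracted from a short high-speed clip:
contacts = [FootContact(0.000, 80), FootContact(0.275, 310),
            FootContact(0.560, 555), FootContact(0.865, 820)]
for length, freq in stride_metrics(contacts):
    print(f"stride length ~ {length:.2f} m, stride frequency ~ {freq:.2f} Hz")
```

A coach can then watch these two series across a session: rising stride length together with falling stride frequency is the fatigue pattern reported above.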
The system is built on Arduino and Raspberry Pi, which reduces the construction cost. It gives immediate feedback on force, velocity curves, and posture, and athletes or coaches can compare past test results on a mobile device. This study cooperated with a university track-and-field team to develop the system; the results were combined with training and the training courses were adjusted, which ultimately helped the team achieve good results at the National Intercollegiate Athletic Games.
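For the Arduino-to-Raspberry Pi logging path, a minimal sketch is given below; the serial port, baud rate, "left,right" line format, and table schema are assumptions rather than the thesis implementation.

```python
# Minimal sketch (assumed setup): on the Raspberry Pi, read force-sensor
# values streamed by the Arduino over USB serial and store them in a
# relational (SQLite) database for later visualization.
import sqlite3
import time

import serial                      # pyserial

PORT = "/dev/ttyACM0"              # hypothetical Arduino serial port
BAUD = 115200
DB_PATH = "training.db"
LOG_SECONDS = 10                   # log one 10-second start attempt

conn = sqlite3.connect(DB_PATH)
conn.execute(
    "CREATE TABLE IF NOT EXISTS pressure_readings ("
    "session_id INTEGER, timestamp_ms INTEGER, left_foot REAL, right_foot REAL)"
)

session_id = 1                     # hypothetical current training session
with serial.Serial(PORT, BAUD, timeout=1) as ser:
    start = time.time()
    while True:
        elapsed = time.time() - start
        if elapsed > LOG_SECONDS:
            break
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            left, right = (float(v) for v in line.split(","))
        except ValueError:
            continue                # skip malformed lines
        conn.execute(
            "INSERT INTO pressure_readings VALUES (?, ?, ?, ?)",
            (session_id, int(elapsed * 1000), left, right),
        )
conn.commit()
conn.close()
```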

Abstract (Chinese)..........i
Abstract (English)..........ii
Acknowledgements..........iii
Table of Contents..........iv
List of Tables..........vii
List of Figures..........viii
Chapter 1 Introduction..........1
1.1 Research Background..........1
1.2 Research Motivation and Objectives..........1
1.3 Research Process..........2
Chapter 2 Literature Review..........3
2.1 Sports Science..........3
2.2 Internet of Things..........3
2.3 Deep Learning..........4
2.3.1 Convolutional Neural Network (CNN)..........4
2.3.2 Deep Residual Network (ResNet)..........5
2.3.3 Feature Pyramid Network (FPN)..........7
2.3.4 Fully Convolutional Network (FCN)..........7
2.3.5 R-CNN (Region-based Convolutional Neural Network)..........8
2.3.6 Fast R-CNN..........9
2.3.7 Faster R-CNN..........10
2.3.8 Mask R-CNN..........11
Chapter 3 Research Method..........13
3.1 Research Framework..........13
3.2 Data Collection and Analysis..........15
3.3 Arduino and Force Sensors..........15
3.4 Raspberry Pi and PiCamera..........16
3.5 Mask R-CNN Network Model..........17
3.6 Platform Presentation..........20
3.6.1 System Data Transfer..........21
3.6.2 Visualization..........21
Chapter 4 System Implementation..........22
4.1 Equipment Specifications..........22
4.1.1 Force Sensors..........22
4.1.2 Raspberry Pi..........22
4.1.3 Camera..........22
4.1.4 Training Server..........22
4.2 System Implementation..........23
4.2.1 Relational Database..........23
4.2.2 Force Sensor Data Collection..........25
4.2.3 Platform Presentation..........26
4.2.4 Containerization..........26
4.2.5 Detectron..........28
4.3 Model Training Details..........28
4.3.1 Parameter Settings..........28
4.3.2 Training Sample Collection..........29
4.3.3 Model Training..........30
4.3.4 Model Comparison..........31
4.3.5 Trajectory Tracking..........34
4.4 Results..........35
4.4.1 Visualization..........35
4.4.2 Stride Length and Stride Frequency..........38
4.4.3 Training Adjustment..........41
Chapter 5 Conclusions and Future Research Suggestions..........42
5.1 Conclusions..........42
5.2 Suggestions for Future Research..........43
References..........44
Extended Abstract..........47

