Author: 田孟學 (Meng Syue Tian)
Title: 基於ToF攝影機之手勢辨識技術
Title (English): Hand Recognition based on ToF Camera
Advisor: 陳彥霖 (Yen Lin Chen)
Committee members: 李柏森, 金凱儀, 楊士萱
Oral defense date: 2016-07-25
Degree: Master's
Institution: 國立臺北科技大學 (National Taipei University of Technology)
Department: 資訊工程系研究所 (Graduate Institute of Computer Science and Information Engineering)
Field: Engineering
Discipline: Electrical and Computer Engineering
Document type: Academic thesis
Graduation academic year: 104 (2015–2016)
Keywords (Chinese): EMD, ToF, 手勢辨識 (gesture recognition), 人機互動 (human-computer interaction)
Keywords (English): Earth Mover's Distance (EMD), Time of Flight (ToF), Gesture Recognition, Human-Computer Interaction
In the field of gesture recognition, many approaches can achieve the recognition task. The most common include Neural Networks (NN), Support Vector Machines (SVM), and Hidden Markov Models (HMM). On the input side, most studies combine RGB and depth images to extract the hand region more precisely, but this requires additional equipment.
This thesis implements a gesture recognition algorithm on a Time-of-Flight (ToF) depth camera, performing image processing and recognition on the depth image alone. The reference method requires the user to wear a bracelet so that the hand region can be captured precisely. Distance and angle feature histograms are then computed from the hand contour and palm center, and the Earth Mover's Distance (EMD) algorithm produces an EMD cost; the lower the cost, the more similar two images are. The user's gesture can thus be matched against the gesture types in the database to identify the current input gesture. This thesis improves on the reference method by computing the wrist cut point algorithmically: using that cut point, the system extracts the palm region without the forearm, so no bracelet is needed. In addition, a fingertip detection algorithm determines the number of fingers, letting the system compare EMD costs quickly and greatly reducing computation time. The proposed method achieves an average recognition rate above 90% across different users and runs at an average of 5 frames per second on an embedded platform.
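The histogram-matching step described above can be illustrated with a minimal sketch. This is not the thesis's actual implementation: it assumes the hand features have already been reduced to 1-D histograms (e.g., of contour-to-palm distances), and it uses the fact that for 1-D distributions of equal total mass, EMD reduces to the L1 distance between cumulative sums. The function and template names are illustrative only.

```python
def emd_1d(h1, h2):
    """EMD cost between two 1-D histograms.

    For 1-D distributions with equal total mass, EMD equals the
    L1 distance between their cumulative distribution functions.
    """
    assert len(h1) == len(h2)
    # Normalize so both histograms carry the same total mass.
    s1, s2 = sum(h1), sum(h2)
    h1 = [v / s1 for v in h1]
    h2 = [v / s2 for v in h2]
    cost, carry = 0.0, 0.0
    for a, b in zip(h1, h2):
        carry += a - b        # mass that must be moved past this bin
        cost += abs(carry)
    return cost

def classify(query, templates):
    """Return the database gesture label with the lowest EMD cost."""
    return min(templates, key=lambda label: emd_1d(query, templates[label]))
```

A lower cost means the two histograms are more similar, so `classify` picks the template whose cumulative shape best matches the query, matching the "lowest EMD cost wins" rule in the abstract.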
Current gesture recognition methods mostly adopt classification-based approaches such as Neural Networks (NN), Support Vector Machines (SVM), and Hidden Markov Models (HMM). As for the input image features, most studies combine color and depth images (e.g., RGB-D) to obtain more accurate information about the hand area, but such techniques can incur high computational cost and energy consumption.
To provide a low-cost gesture recognition method for wearable devices, this thesis uses only a Time-of-Flight depth camera to achieve lightweight gesture recognition. In most traditional gesture recognition methods, users have to wear gloves or bracelets so that depth cameras can accurately capture the hand area, from which the hand contour, palm distances, and angle features are obtained. Moreover, the Earth Mover's Distance (EMD) algorithm adopted in most gesture recognition approaches is computationally expensive. To avoid requiring gloves or bracelets, we propose a new algorithm that computes the wrist cutting edge and captures the palm area. In addition, this thesis proposes an efficient finger detection algorithm that determines the number of fingers and significantly reduces computation time. In the experimental results, the proposed method achieves a recognition rate above 90% and runs at 5 frames per second on the NVIDIA TX1 embedded platform.
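The finger-counting step that prunes EMD comparisons can be sketched as follows. This is a simplified stand-in for the thesis's fingertip detection, not its actual algorithm: it assumes the hand contour is an ordered list of (x, y) points and the palm center is known, and it treats as a fingertip any contour point that is a local maximum of distance from the palm and exceeds a hypothetical threshold (`min_ratio` times the mean contour distance).

```python
import math

def count_fingertips(contour, palm, min_ratio=1.6):
    """Rough fingertip count for a closed hand contour.

    A point counts as a fingertip when its distance from the palm
    center is a local maximum along the contour and exceeds
    min_ratio times the mean contour distance (illustrative threshold).
    """
    d = [math.dist(p, palm) for p in contour]
    mean = sum(d) / len(d)
    n = len(d)
    tips = 0
    for i in range(n):
        prev, nxt = d[i - 1], d[(i + 1) % n]  # wrap around the closed contour
        if d[i] > prev and d[i] >= nxt and d[i] > min_ratio * mean:
            tips += 1
    return tips
```

With a count like this available per frame, the recognizer only needs to run the expensive EMD comparison against database templates that have the same number of fingers, which is the pruning idea the abstract credits for the reduced computation time.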
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Motivation
1.2 Background
1.2.1 PMD Nano Depth Camera
1.2.2 Earth Mover's Distance
1.2.3 Finger-Earth Mover's Distance
1.3 Objectives
1.4 Thesis Organization
1.5 Contributions
Chapter 2 Methodology
2.1 System Architecture
2.2 Depth Image Normalization and Binarization
2.3 Image Preprocessing
2.4 Palm Center Detection
2.5 Finger Detection
2.6 Static Gesture Recognition
2.6.1 Hand Segmentation
2.6.2 Feature Extraction and EMD Computation
2.6.3 Hole Detection
2.7 Dynamic Gesture Recognition
2.8 Static Gesture Decision
2.8.1 EMD Decision
2.8.2 Hole Decision
Chapter 3 Experimental Results and Analysis
3.1 Experimental Environment
3.2 Gestures Defined in This Thesis
3.3 Experimental Results
3.3.1 Different EMD Thresholds
3.3.2 Different Distances
3.3.3 Different Users
3.3.4 Dynamic Gestures
3.3.5 Results on the Embedded Platform
3.4 Comparison with Related Work
3.5 Discussion
Chapter 4 Conclusions and Future Work
4.1 Conclusions
4.2 Future Work
References
[1] Wikipedia, "Human–computer interaction"
https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction
[2] Wikipedia, "消費電子展" (Consumer Electronics Show)
https://zh.wikipedia.org/wiki/%E6%B6%88%E8%B2%BB%E9%9B%BB%E5%AD%90%E5%B1%95
[3] BBC News, "CES 2016: BMW shows off gesture-controlled concept car"
http://www.bbc.com/news/technology-35258680
[4] Wikipedia, "虛擬實境" (Virtual reality)
https://zh.wikipedia.org/wiki/%E8%99%9A%E6%8B%9F%E7%8E%B0%E5%AE%9E
[5] eyeSight, "eyeSight | ABOUT US"
http://eyesight-tech.com/about-us-2/
[6] TechCrunch, "EyeSight demos VR gesture control using standard phone hardware"
https://techcrunch.com/2016/05/17/eyesight-vr/
[7] Ji-Hwan Kim, "3-D Hand Motion Tracking and Gesture Recognition Using a Data Glove", IEEE International Symposium on Industrial Electronics, pp. 1013-1018, 2009.
[8] Feng-Sheng Chen, Chih-Ming Fu, and Chung-Lin Huang, "Hand gesture recognition using a real-time tracking method and hidden Markov models", Image and Vision Computing, Vol. 21, Issue 8, pp. 745-758, 2003.
[9] Deng-Yuan Huang, Wu-Chih Hu, and Sung-Hsiang Chang, "Vision-based Hand Gesture Recognition Using PCA+Gabor Filters and SVM", Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), pp. 1-4, 2009.
[10] Kouichi Murakami and Hitomi Taguchi, "Gesture recognition using recurrent neural networks", Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 237-242, 1991.
[11] Zhou Ren, Junsong Yuan, and Zhengyou Zhang, "Robust Hand Gesture Recognition Based on Finger-Earth Mover's Distance with a Commodity Depth Camera", Proceedings of the 19th ACM International Conference on Multimedia, pp. 1093-1096, 2011.
[12] Kinect official website, "Kinect for Xbox One"
http://www.xbox.com/zh-TW/xbox-one/accessories/kinect-for-xbox-one#fbid=h3t7HAsFvT9
[13] PMD official website, "PMD Nano"
http://www.pmdtec.com/products_services/reference_design.php
[14] Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas, "The Earth Mover's Distance as a Metric for Image Retrieval", International Journal of Computer Vision, Vol. 40, Issue 2, pp. 99-121, 2000.
[15] 孤独剑客zzy, "[转]Earth Movers Distance (EMD)" (blog post, in Chinese)
http://www.cnblogs.com/jackyzzy/p/3314667.html
[16] 人工知能に関する断創録, "Earth Movers Distance (EMD)" (blog post, in Japanese)
http://aidiary.hatenablog.com/entry/20120804/1344058475
[17] Michael Van den Bergh, "Combining RGB and ToF Cameras for Real-time 3D Hand Gesture Interaction", IEEE Workshop on Applications of Computer Vision (WACV), pp. 66-72, 2011.
[18] 賴彥成, "基於深度影像與膚色之即時手勢辨識技術" (Real-time hand gesture recognition based on depth images and skin color), Master's thesis, 國立臺北科技大學, 資訊工程系研究所, 2013.
[19] OpenNI official website, "OpenNI SDK"
http://openni.ru/openni-sdk/
[20] Andrew W. Fitzgibbon and Robert B. Fisher, "A Buyer's Guide to Conic Fitting", Proc. 5th British Machine Vision Conference, Birmingham, pp. 513-522, 1995.
[21] Satoshi Suzuki and Keiichi Abe, "Topological Structural Analysis of Digitized Binary Images by Border Following", Computer Vision, Graphics, and Image Processing, Vol. 30, Issue 1, pp. 32-46, 1985.
[22] Wikipedia, "Ramer–Douglas–Peucker algorithm"
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
[23] Jack Sklansky, "Finding the Convex Hull of a Simple Polygon", Pattern Recognition Letters, Vol. 1, Issue 2, pp. 79-83, 1982.
[24] K. K. Biswas, "Gesture recognition using Microsoft Kinect®", 5th International Conference on Automation, Robotics and Applications (ICARA), pp. 100-103, 2011.
[25] Wikipedia, "Support vector machine"
https://en.wikipedia.org/wiki/Support_vector_machine
[26] Wikipedia, "Time of flight"
https://en.wikipedia.org/wiki/Time_of_flight
[27] Cem Keskin, Furkan Kırac, Yunus Emre Kara, and Lale Akarun, "Real time hand pose estimation using depth sensors", Consumer Depth Cameras for Computer Vision, pp. 119-137, 2013.
[28] NVIDIA, "Jetson TX1 Embedded System Module | NVIDIA Jetson | NVIDIA"
http://www.nvidia.com.tw/object/jetson-tx1-module-tw.html
Electronic full text (public release date: 2021-08-23)