臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 洪兆欣
Author (English): Chao-Hsin Hung
Title: 以軌跡辨識為基礎之手勢辨識系統
Title (English): A Trajectory-based Approach to Gesture Recognition
Advisor: 蘇木春
Advisor (English): Mu-Chun Su
Degree: Master's
Institution: National Central University
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2006
Graduation academic year: 94 (2005-2006)
Language: Chinese
Pages: 62
Keywords (Chinese): self-organizing feature map network; sign language recognition; dynamic gesture recognition; adaptive resonance theory
Keywords (English): character recognition; dynamic gesture recognition; self-organizing feature maps (SOM); adaptive resonance theory (ART); sign language recognition
Statistics:
  • Cited by: 32
  • Views: 1413
  • Rating:
  • Downloads: 274
  • Bookmarked: 5
Gesture recognition systems have broad applications in human-computer interface design, medical rehabilitation, virtual reality, digital art, and game design; sign language recognition in particular depends on an accurate and practical gesture recognizer.
This thesis proposes the SOMART algorithm, which recasts dynamic gesture recognition as a trajectory recognition problem. SOMART consists of two main steps. First, a SOM network serves as a basic hand-shape classifier and projects the multi-dimensional gesture data onto a two-dimensional plane. Second, the planar trajectory produced in the first step is fed into a modified ART network for pattern recognition, thereby recognizing the dynamic gesture. We further apply the same trajectory-based recognition concept to hand movement trajectories, which likewise solves the problem of recognizing dynamic time-series data.
For experimental validation, we defined 47 static gestures, 103 dynamic gestures, and eight hand movement trajectories, and had ten users record data for each, yielding a database of 4,650 samples in total. The average recognition rate is 92% for static gestures, 88% for dynamic gestures, and 99% for hand movement trajectories.
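The first SOMART step described above maps each multi-dimensional glove-sensor frame to its best-matching unit on a trained SOM, turning a gesture sequence into a 2-D trajectory. A minimal sketch of that projection step follows; the grid size, sensor dimensionality, and function names are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def project_to_trajectory(frames, som_weights):
    """Map each sensor frame to the grid position of its best-matching unit.

    frames:      (T, D) array, one D-dimensional glove reading per time step
    som_weights: (rows, cols, D) array of trained SOM codebook vectors
    Returns a list of (row, col) grid coordinates, i.e. the 2-D trajectory.
    """
    rows, cols, _ = som_weights.shape
    flat = som_weights.reshape(rows * cols, -1)   # (rows*cols, D) codebook
    trajectory = []
    for x in frames:
        dists = np.linalg.norm(flat - x, axis=1)  # distance to every unit
        winner = int(np.argmin(dists))            # best-matching unit index
        trajectory.append(divmod(winner, cols))   # flat index -> (row, col)
    return trajectory
```

With the SOM trained so that similar hand shapes occupy neighbouring units, a dynamic gesture becomes a path across the map, ready for the trajectory recognizer in the second step.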
Gesture recognition is needed for a variety of applications such as human-computer interfaces and communication aids for the deaf. In this thesis, we present the SOMART system for the recognition of hand gestures. The sequence of a hand gesture is first projected into a two-dimensional trajectory on a self-organizing feature map (SOM), so the problem of recognizing hand gestures is transformed into the problem of recognizing hand-written characters. An adaptive resonance theory (ART) algorithm then generates multiple templates for each hand gesture. Finally, an unknown gesture is classified as the vocabulary gesture with the maximum similarity via a template-matching technique. In addition, the concept behind the SOMART system can also be applied to hand movement trajectory recognition.
A database consisting of 47 static hand gestures, 103 dynamic hand gestures, and eight movement trajectories was used to evaluate the proposed method. The average recognition rate is 92% for static hand gestures, 88% for dynamic hand gestures, and 99% for hand movement trajectories.
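As a rough illustration of the template-matching stage just described, the sketch below rasterizes a SOM trajectory onto a binary grid and assigns it to the stored template with the greatest overlap. The ART network in the thesis is considerably more elaborate (vigilance testing, multiple templates per class); the grid size, overlap measure, and all names here are assumptions for illustration only.

```python
import numpy as np

def rasterize(trajectory, rows, cols):
    """Turn a list of (row, col) trajectory points into a binary grid image."""
    grid = np.zeros((rows, cols), dtype=bool)
    for r, c in trajectory:
        grid[r, c] = True
    return grid

def classify(trajectory, templates, rows=10, cols=10):
    """Pick the label whose stored template best overlaps the trajectory.

    templates: dict mapping label -> binary (rows, cols) template grid
    """
    x = rasterize(trajectory, rows, cols)
    best_label, best_score = None, -1.0
    for label, template in templates.items():
        overlap = np.logical_and(x, template).sum()
        score = overlap / max(x.sum(), 1)   # fraction of trajectory matched
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Template matching of this kind is what lets the system treat dynamic gestures like hand-written characters: two renditions of the same gesture trace similar paths on the map and therefore overlap the same template.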
Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
Table of Contents IV
List of Figures VII
List of Tables IX
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Objectives 2
1.3 Thesis Organization 3
Chapter 2 Related Work 4
2.1 Definition of Hand Gestures 4
2.2 Input Devices 5
2.2.1 Input Devices for Gesture Recognition 5
2.2.2 Input Devices for Hand Movement Trajectory Recognition 6
2.3 Related Research on Hand Gestures 7
2.4 Trajectory-based Recognition Methods 10
Chapter 3 Hardware Design 12
3.1 Overview of Data Gloves 12
3.2 Data Glove Design 14
3.2.1 Design Considerations 14
3.2.2 Components 15
3.2.3 Circuit Design 16
Chapter 4 Recognition Methods and Procedures 19
4.1 Architecture of the SOMART Gesture Recognition System 19
4.2 Static Gesture Recognition 20
4.2.1 The Self-Organizing Feature Map (SOM) Algorithm 20
4.2.2 SOM-based Static Gesture Recognition 21
4.2.2.1 Parameter Settings 21
4.2.2.2 Training 23
4.3 Dynamic Gesture Recognition 29
4.3.1 The Adaptive Resonance Theory (ART) Algorithm 29
4.3.2 ART-based Dynamic Gesture Recognition 32
4.3.2.1 Trajectory Template Generation 32
4.3.2.2 Modifications 33
4.3.2.3 Training Phase 35
4.3.2.4 Dynamic Gesture Recognition 37
4.4 The SOMART Algorithm 38
4.5 Movement Trajectory Recognition 39
4.5.1 Data Acquisition 39
4.5.2 Preprocessing 40
4.5.2.1 Smoothing 40
4.5.2.2 Dimensionality Reduction 41
4.5.3 Movement Trajectory Recognition 42
4.5.3.1 Defining Basic Direction Vectors and Movement Trajectories 42
4.5.3.2 Defining the Two-Dimensional Feature Map (2D Map) 44
4.5.3.3 Recognition 44
Chapter 5 Experimental Results 47
5.1 Static Gesture Recognition Results 47
5.2 Dynamic Gesture Recognition Results 52
5.3 Hand Movement Trajectory Recognition Results 55
Chapter 6 Conclusions and Future Work 56
6.1 Conclusions 56
6.2 Future Work 57
References 58
[1]M. Bichsel, editor. Proceedings of the International Workshop on Automatic Face- and Gesture-Recognition. Zürich, Switzerland, 1995.
[2]G. A. Carpenter and S. Grossberg, “A massively parallel architecture for a self-organizing neural pattern recognition machine,” Computer Vision, Graphics, and Image Processing, vol. 37, pp. 54-115, 1987.
[3]G. A. Carpenter and S. Grossberg, “ART 2: Self-organization of stable category recognition codes for analog input patterns,” Applied Optics, vol. 26, pp. 4919-4930, 1987.
[4]G. A. Carpenter and S. Grossberg, “The ART of adaptive pattern recognition by a self-organization neural network,” Computer, vol. 21, no. 3, pp. 77-88, 1988.
[5]G. A. Carpenter and S. Grossberg, “ART 3: Hierarchical search using chemical transmitters in self-organizing pattern recognition architectures,” Neural Networks, vol. 3, no. 2, pp. 129-152, 1990.
[6]G. A. Carpenter, S. Grossberg, and D. B. Rosen, “Fuzzy ART: fast stable learning and categorization of analog patterns by an adaptive resonance system,” Neural Networks, vol. 4, pp. 759-771, 1991.
[7]G. S. Carpenter, S. Grossberg, and J. H. Reynolds, “ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network,” Neural Networks, vol. 4, pp. 565-588, 1991.
[8]M. Hasanuzzaman, V. Ampornaramveth, T. Zhang, M.A. Bhuiyan, Y. Shirai, and H. Ueno, “Real-time Vision-based Gesture Recognition for Human Robot Interaction,” in ROBIO 2004. IEEE International Conference on Robotics and Biomimetics, pp. 413-418, 2004.
[9]F. G. Hofmann, “Entwurf und Implementierung einer ultraschallbasierten Stellungserkennung für den TUB-Sensorhandschuh” (Design and implementation of an ultrasound-based posture recognition for the TUB sensor glove), Diplomarbeit, Institut für Technische Informatik, TU Berlin, June 1993.
[10]Z. Huang and A. Kuh, “A combined self-organizing feature map and multilayer perceptron for isolated word recognition,” IEEE Trans. on Signal Processing, vol. 40, no. 11, pp. 2651-2657, 1992.
[11]T. Kohonen, Self-Organization and Associative Memory, 3rd ed., Springer-Verlag, Berlin, 1989.
[12]T. Kohonen, Self-organizing Maps, Springer-Verlag, Berlin, 1995.
[13]T. Kohonen, E. Oja, O. Simula, A. Visa, and T. Kangas, “Engineering application of the self-organizing map,” Proceeding of The IEEE, vol. 84, no.10, pp. 1358-1383, 1996.
[14]T. Kohonen, “Improved versions of learning vector quantization,” IJCNN, vol. 1, pp. 545-550, 1990.
[15]M. V. Lamar and M. Shoaib, “Temporal series recognition using a new neural network structure T-CombNET,” Proceedings of the Sixth IEEE International Conference on Neural Information Processing (ICONIP ’99), vol. 3, pp. 1112-1117, 1999.
[16]R. H. Liang and M. Ouhyoung, “A real-time continuous gesture recognition system for sign language,” Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, vol. 3, pp. 558-567, 1998.
[17]G. Fang, W. Gao, and D. Zhao, “Large vocabulary sign language recognition based on fuzzy decision trees,” IEEE Trans. on Systems, Man, and Cybernetics-Part A, vol. 34, pp. 305-314, 2004.
[18]W. Gao, G. Fang, D. Zhao, and Y. Chen, “A Chinese sign language recognition system based on SOFM/SRN/HMM,” Pattern Recognition, vol. 37, pp. 2389-2402, 2004.
[19]K. Grobel and M. Assan, “Isolated sign language recognition using hidden Markov models,” Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics: Computational Cybernetics and Simulation, vol. 1, pp. 162-167, 1997.
[20]C. S. Lee and Z. Bien, “Real-time recognition system of Korean sign language based on elementary components,” Proceedings of the Sixth IEEE International Conference on Fuzzy Systems, vol. 3, pp. 1463-1468, 1997.
[21]V. M. Mantyla and J. Mantyjarvi, “Hand gesture recognition of a mobile device user,” Proceedings of the IEEE International Conference on Multimedia and Expo, vol. 1, pp. 281-284, 2000.
[22]B. W. Min and H. S. Yoon, “Hand gesture recognition using hidden Markov models,” Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics: Computational Cybernetics and Simulation, vol. 5, pp. 4232-4235, 1997.
[23]M. Jiyong and G. Wen, “A continuous Chinese sign language recognition system,’’ Proceeding of the IEEE International Conference on Automatic Face and Gesture Recognition, vol. 4, pp. 428-433, 2000.
[24]H. Ohno and M. Yamamoto, “Gesture recognition using recognition techniques on two-dimensional eigenspace,” Proceedings of 7th International Conference on Computer Vision, pp. 151-156, 1999.
[25]J. P. Wachs, H. Stern, and Y. Edan, “Cluster labeling and parameter estimation for the automated setup of a hand-gesture recognition system,” IEEE Trans. on Systems, Man, and Cybernetics, Part A, vol. 35, pp. 932-944, Nov. 2005.
[26]L. K. Simone and D. G. Kamper, “Design considerations for a wearable monitor to measure finger posture.” Journal of NeuroEngineering and Rehabilitation, 2005.
[27]M. C. Su, “A speaking aid for the deaf using neural networks,” Biomedical Engineering - Applications, Basis & Communications, vol. 8, no. 4, pp. 33-39, 1996.
[28]M. C. Su, “A fuzzy rule-based approach to spatio-temporal hand gesture recognition,” IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 30, no. 2, pp. 276-281, 2000.
[29]M. C. Su, Y. X. Zhao, H. Huang, and H. F. Chen, “A Fuzzy Rule-Based Approach to Recognizing 3D Arm Movements,” IEEE Trans. on Neural Systems and Rehabilitation Engineering, vol. 9, no. 2, pp. 191-201, 2001.
[30]M. C. Su and H. C. Chang, “Fast self-organizing feature map algorithm,” IEEE Trans. on Neural Networks, vol. 13, no. 3, pp. 721-733, 2000.
[31]V. Tartter and K. Knowlton, “Perception of sign language from an array of 27 moving spots,” Nature, vol. 289, pp. 676-678, 1981.
[32]J. Weissmann and R. Salomon, “Gesture recognition for virtual reality applications using data gloves and neural networks,” Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 2043-2046, 1999.
[33]VPL Research Inc., DataGlove Model 2 User’s Manual, Redwood City, CA, 1987.
[34]Virtex Co., Company brochure, Stanford, CA, October, 1992.
[35]Kaiser, Polhemus, 3 Space user’s manual, A Kaiser Aerospace & Electronics Company, 1987.
[36]Image Company, Staten Island NY, Available: http://www.imagesco.com/catalog/flex/FlexSensors.html
[37]王國榮, “A Broad Study of Intelligent Gesture Recognition Based on Data Gloves,” Master’s thesis, Department of Electrical Engineering, National Taiwan University of Science and Technology, 2001.
[38]辛柏陞, “Design, Development, and Effectiveness Evaluation of a Virtual-Reality Hand Function Training System,” Ph.D. dissertation, Department of Mechanical Engineering, National Central University, January 2005.
[39]范揚平, “Using Hand Gestures in Place of a Mouse to Control a Computer Presentation System,” Master’s thesis, Department of Computer Science and Information Engineering, National Chiao Tung University, 1997.
[40]郭建志, “Design and Implementation of a USB-Interface Data Glove for a Sign Language Communication and Recognition System,” Master’s thesis, Department of Electrical Engineering, Southern Taiwan University of Technology, 2004.
[41]趙于翔, “A Portable Taiwanese Sign Language Speech System,” Master’s thesis, Department of Electrical Engineering, Tamkang University, May 2002.
[42]史文漢 and 丁立芬, Hands Can Build Bridges I-II (手能生橋I~II), 1993.
[43]陳美琍 and 湯金蓮, Sign Language Master I-III (手語大師I~III), 1997.
[44]陳美琍 and 湯金蓮, The Written-Word Sign Language Book (文字手語書), 1998.
[45]湯金蓮, My First Sign Language Book (我的第一本手語書), 1998.
[46]林寶貴, Sign Language Picture Albums I-II (手語畫冊I~II), 1999.
[47]王曉書, Angel’s Wings: Hsiao-Shu’s Sign Language (天使之翼:曉書手語), 2003.
[48]吳成柯, 程湘君, 戴善榮, and 雲立實, Digital Image Processing (數位影像處理), 2003.
[49]蘇木春 and 張孝德, Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms (機器學習:類神經網路、模糊系統以及基因演算法則), Chuan Hwa Book Co., 2004.