Graduate Student: 陳劭昂
Graduate Student (English): Chen, ShaoAng
Thesis Title: 即時穿戴式視覺系統之設計與實現
Thesis Title (English): The Design and Implementation of A Real-time Wearable Vision System
Advisor: 王元凱
Advisor (English): Wang, YuanKai
Committee Members: 石勝文, 張陽郎
Committee Members (English): Shih, Sheng-Wen
Oral Defense Date: June 28, 2011
Degree: Master
Institution: 輔仁大學 (Fu Jen Catholic University)
Department: 電機工程學系 (Department of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Information Engineering
Year of Publication: 2011
Graduation Academic Year: 99 (2010-2011)
Language: Chinese
Number of Pages: 55
Keywords (Chinese): 穿戴式視覺; 顏色分類; 手勢分類; 高斯混合模型; 查表法
Keywords (English): Wearable vision; Color classification; Gesture classification; Gaussian mixture model; LUT
Usage statistics:
  • Cited by: 2
  • Views: 853
  • Downloads: 65
  • Bookmarks: 0
This thesis proposes a smart portable device that provides a gesture-based interface and combines a small form factor with a large display, for image capture and management applications. The wearable vision system is implemented on an embedded platform and achieves real-time processing performance. The hardware consists of a heterogeneous dual-core processor with an ARM core and a DSP core, and the display is a pico projector that is physically small yet can project a large image. On the software side, a triple-buffering mechanism is designed for more efficient memory management, and the processing is organized into modules and pipelined so that it executes effectively in parallel. Gesture recognition starts with color classification: the colors are trained with the Expectation-Maximization (EM) algorithm and a Gaussian mixture model (GMM). To improve the run-time performance of the GMM, we design a look-up table (LUT) technique. User gesture commands are recognized by extracting the fingertip sleeve from its contour and geometric features and matching it against the designated gestures to issue commands.
To verify the correctness of the gesture recognition module, the experiments use eight test videos of 400 frames each, covering scenes with complex colors, low illumination, and flickering light sources. The whole system, including gesture recognition, reaches 22.9 FPS, and the experimental recognition rate is 97.5%. The results demonstrate that this small-size, large-screen wearable system realizes an effective gesture interface with real-time processing performance.
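The color-classification step described above (an EM-trained Gaussian mixture model accelerated by a precomputed look-up table) can be illustrated with a minimal sketch. This is not the thesis's implementation: scikit-learn's GaussianMixture stands in for the EM training, and the color space, component count, grid step, and threshold below are illustrative assumptions only.

# Minimal sketch (not the thesis code): GMM color classification with a
# precomputed LUT. All parameters here are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_color_gmm(pixels, n_components=3):
    # pixels: N x 3 array of training colors (e.g. samples of the finger sleeve)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(pixels)
    return gmm

def build_lut(gmm, threshold=-12.0, step=4):
    # Pre-evaluate the GMM over a coarse 3-D color grid so that run-time
    # classification becomes a single table look-up per pixel.
    axis = np.arange(0, 256, step)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1).reshape(-1, 3)
    log_lik = gmm.score_samples(grid)          # log-likelihood of each grid color
    n = len(axis)
    return (log_lik > threshold).reshape(n, n, n), step

def classify_frame(frame, lut, step):
    # frame: H x W x 3 uint8 image; returns a boolean mask of in-class pixels.
    idx = frame // step
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]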

This thesis proposes a smart portable device, which provides a gesture interface with a small size but a large display for the application of photo capture and management. The wearable vision system is implemented on embedded systems and achieves real-time performance. The hardware of the system includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which has a small volume but can project a large image. A triple-buffering mechanism is designed for efficient memory management, and software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by color classification based on the expectation-maximization (EM) algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a look-up table (LUT) technique. Finally, fingertips are extracted and geometric features of the fingertip shape are matched to recognize the user's gesture commands.
In order to verify the accuracy of the gesture recognition module, experiments are conducted on eight test videos of 400 frames each, including the challenges of colorful backgrounds, low illumination, and flickering light. The whole system, including gesture recognition, runs at a frame rate of 22.9 FPS, and the experiments give a recognition rate of 97.5%. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
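The triple-buffering and pipelining mentioned in both abstracts can be sketched as follows. This is an illustration under assumptions, not the thesis's ARM+DSP implementation: the Python threads, buffer size, frame format, and the grab_frame / recognize callbacks are hypothetical placeholders.

# Minimal sketch (not the thesis code): a triple-buffer hand-off between a
# capture stage and a processing stage, so neither stage blocks on per-frame
# allocation or on the other stage.
import threading
import queue

free_bufs = queue.Queue()    # buffers ready to be refilled by the capture stage
ready_bufs = queue.Queue()   # filled buffers waiting for the vision stage
for _ in range(3):           # three buffers: one filling, one processing, one spare
    free_bufs.put(bytearray(640 * 480 * 2))   # e.g. one YUV422 frame (illustrative size)

def capture_stage(grab_frame):
    while True:
        buf = free_bufs.get()        # reuse an idle buffer; no per-frame allocation
        grab_frame(buf)              # fill it with the next camera frame
        ready_bufs.put(buf)

def process_stage(recognize):
    while True:
        buf = ready_bufs.get()       # oldest filled frame
        recognize(buf)               # gesture recognition runs in parallel with capture
        free_bufs.put(buf)           # recycle the buffer

# Usage (hypothetical callbacks):
# threading.Thread(target=capture_stage, args=(my_grab,), daemon=True).start()
# threading.Thread(target=process_stage, args=(my_recognize,), daemon=True).start()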

Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Tables v
List of Figures vi
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Motivation 2
1.3 Literature Review 4
1.4 Research Objectives 6
1.5 Thesis Organization 7
Chapter 2 System Design 8
2.1 Hardware Architecture 8
2.2 Software Architecture 10
2.3 Hardware/Software Interface 12
Chapter 3 Real-time Gesture Recognition 14
3.1 Color Classification 15
3.2 Look-up Table 18
3.3 Fingertip Extraction 20
3.4 Gesture Classification 21
Chapter 4 Experimental Results 24
4.1 Color Recognition Experiment 28
4.2 Gesture Recognition Experiment 34
4.3 Time Consumption 36
4.4 Memory Usage Analysis 38
Chapter 5 Conclusion 40
References 41
Appendix A 46
Appendix B 49
Appendix C 52

[1]T. Okuma, T. Kurata, and K. Sakaue, “Real-Time Camera Parameter Estimation from Images for a Wearable Vision System,” in Proc. IAPR Workshop on Machine Vision Applications, pp. 4482-4486, 2000.
[2]K. Oka, Y. Sato, and H. Koike, “Real-time Tracking of Multiple Fingertips and Gesture Recognition for Augmented Desk Interface Systems,” in Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pp. 429-434, 2002.
[3]K. Hu, S. Canavan, and L. Yin, “Hand Pointing Estimation for Human Computer Interaction Based on Two Orthogonal-Views,” in Proc. International Conference on Pattern Recognition, pp. 3760-3763, 2010.
[4]S. Hodges, L. Williams, E. Berry, S. Izadi, J. Srinivasan, A. Butler, G. Smyth, and N. Kapur, “SenseCam: A Retrospective Memory Aid,” in Proc. International Conference on Ubiquitous Computing, pp. 177-193, 2006.
[5]M. Havlena, A. Ess, W. Moreau, A. Torii, M. Jancosek, T. Pajdla, and L. Van Gool, “AWEAR 2.0 System: Omni-directional Audio-Visual Data Acquisition and Processing,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 49-56, 2009.
[6]G. Balakrishnan, G. Sainarayanan, R. Nagarajan, and S. Yaacob, “Wearable Real-Time Stereo Vision for the Visually Impaired,” Engineering Letters, vol. 14, no. 2, pp. 6-14, 2007.
[7]Y. Liu, X. Liu, and U. Jia, “Hand-Gesture Based Text Input for Wearable Computer,” in Proc. IEEE International Conference on Computer Vision Systems, pp. 8-13, 2006.
[8]R. Grasset, A. Dunser, and M. Billinghurst, “Human-Centered Development of an AR Handheld Display,” in Proc. IEEE and ACM International Symposium, pp. 177-180, 2007.
[9]B. F. Goldiez, A. M. Ahmad, and P. A. Hancock, “Effects of Augmented Reality Display Settings on Human Wayfinding Performance,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 5, pp. 839-845, 2007.
[10]J. Yang, W. Yang, M. Denecke, and A. Waibel, “Smart sight: A Tourist Assistant System,” in Proc. Symposium on Wearable Computers, vol. 1, pp. 73-78, Oct. 1999.
[11]T. Brown and R. C. Thomas, “Finger Tracking for the Digital Desk,” in Proc. Australasian User Interface Conference, vol. 1, pp. 11-16, 2000.
[12]A. Wu, M. Shah, and N. D. V. Lobo, “A Virtual 3D Blackboard: 3D Finger Tracking Using a Single Camera,” in Proc. IEEE International Conference Automatic Face and Gesture Recognition, pp. 536-543, 2000.
[13]T. Keaton, S. M. Dominguez, and A. H. Sayed, “Snap&Tell™: A Multimodal Wearable Computer Interface for Browsing the Environment,” in Proc. International Symp. Wearable Computers, pp. 75-82, Oct. 2002.
[14]S. Dominguez, T. Keaton, and A. Sayed, “A Robust Finger Tracking Method for Multimodal Wearable Computer Interfacing,” IEEE Transactions on Multimedia, vol. 8, no. 5, pp. 956-972, 2006.
[15]S. Mitra and T. Acharya, “Gesture Recognition: A Survey,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 3, pp. 311-324, May 2007.
[16]K. Wang, W. Li, R. F. Li, and L. Zhao, “Real-time Hand Gesture Recognition for Service Robot,” in Proc. International Conference on Intelligent Computation Technology and Automation, vol. 2, pp. 976-979, 2010.
[17]S. W. Lee, “Automatic Gesture Recognition for Intelligent Human-Robot Interaction,” in Proc. Seventh International Conference on Automatic Face and Gesture Recognition, pp. 645-650, 2006.
[18]A. Corradini, “Dynamic Time Warping for Off-Line Recognition of a Small Gesture Vocabulary,” in Proc. IEEE ICCV Workshop Recognition, Analysis, and Tracking of Faces and Gestures in RealTime Systems, pp. 82-89, 2001.
[19]R. Cutler, and M. Turk, “View-Based Interpretation of Real-Time Optical Flow for Gesture Recognition,” in Proc. Third IEEE International Conference Automatic Face and Gesture Recognition, pp. 416-421, 1998.
[20]T. Darrell and A. Pentland, “Space-Time Gestures,” in Proc. IEEE Conference Computer Vision and Pattern Recognition, pp. 335-340, 1993.
[21]M. Gandy, T. Starner, J. Auxier, and D. Ashbrook, “The Gesture Pendant: A Self-Illuminating, Wearable, Infrared Computer Vision System for Home Automation Control, and Medical Monitoring,” in Proc. Fourth International Symp. Wearable Computers, pp. 87- 94, 2000.
[22]K. Oka, Y. Sato, and H. Koike, “Real-Time Fingertip Tracking and Gesture Recognition,” IEEE Computer Graphics and Applications, vol. 22, no. 6, pp. 64-71, Dec. 2002.
[23]T. Starner, J. Weaver, and A. Pentland, “Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video,” IEEE Transactions Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1371-1375, Dec. 1998.
[24]M. H. Yang, N. Ahuja, and M. Tabb, “Extraction of 2D Motion Trajectories and Its Application to Hand Gesture Recognition,” IEEE Transactions Pattern Analysis and Machine Intelligence, vol. 24, no. 8, pp. 1061-1074, Aug. 2002.
[25]V. Pavlovic, R. Sharma, and T. Huang, “Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review,” IEEE Transactions Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 677-695, July 1997.
[26]Y. Cui and J. Weng, “Appearance-Based Hand Sign Recognition from Intensity Image Sequences,” Computer Vision and Image Understanding, vol. 78, no. 2, pp. 157-176, May 2000.
[27]E. Ong and R. Bowden, “A Boosted Classifier Tree for Hand Shape Detection,” in Proc. Sixth IEEE International Conference Automatic Face and Gesture Recognition, pp. 889-894, 2004.
[28]M. Isard and A. Blake, “CONDENSATION-Conditional Density Propagation for Visual Tracking,” International J. Computer Vision, vol. 29, no. 1, pp. 5-28, 1998.
[29]M. Kolsch and M. Turk, “Fast 2D Hand Tracking with Flocks of Features and Multi-Cue Integration,” in Proc. IEEE Workshop RealTime Vision for Human-Computer Interaction, pp. 158-165, 2004.
[30]N. Stefanov, A. Galata, and R. Hubbold, “Real-Time Hand Tracking with Variable-Length Markov Models of Behaviour,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 3, pp. 73-80, 2005.
[31]B. Stenger, A. Thayananthan, P. Torr, and R. Cipolla, “Filtering Using a Tree-Based Estimator,” in Proc. Ninth IEEE International Conference Computer Vision, pp. 1063-1070, 2003.
[32]E. Sudderth, M. Mandel, W. Freeman, T. Freeman, and S. Willsky, “Visual Hand Tracking Using Nonparametric Belief Propagation,” in Proc. IEEE CVPR Workshop Generative Model Based Vision, pp. 189-197, 2004.
[33]F. Chen, C. Fu, and C. Huang, “Hand Gesture Recognition Using a Real-Time Tracking Method and Hidden Markov Models,” Image and Video Computing, vol. 21, no. 8, pp. 745-758, Aug. 2003.
[34]J. Martin, V. Devin, and J. Crowley, “Active Hand Tracking,” in Proc. Third IEEE International Conference Automatic Face and Gesture Recognition, pp. 573-578, 1998.
[35]T. S. Caetano, S. D. Olabarriaga, and D. A. C. Barone, “Do Mixture Models in Chromaticity Space Improve Skin Detection?” Pattern Recognition, vol. 36, no. 12, pp. 3019-3021, 2003.
[36]V. Monga and R. Bala, “Algorithms for Color Look-Up-Table (LUT) Design via Joint Optimization of Node Locations and Output Values,” in Proc. International Conference on Acoustics, Speech, and Signal Processing, pp. 998-1001, 2010.
[37]M. Mese and P. P. Vaidyanathan, “Look up Table (LUT) Method for Image Halftoning,” in Proc. International Conference on Image Processing, vol. 3, pp. 993-996, 2000.
[38]“BeagleBoard System Reference Manual Rev C4, ” http://beagleboard.org/, 2010.
[39]J. Lincoln, “The Latest Video Projection Can Fit Inside Tiny Cameras or Cellphones Yet Still Produce Big Pictures,” IEEE Spectrum, vol. 47, no. 5, pp. 41-45, 2010.
[40]“FFMpeg,” http://www.ffmpeg.org/, 2010.
[41]G. R. Bradski and A. Zelinsky, “Learning OpenCV-Computer Vision with the OpenCV Library,” IEEE Robotics and Automation Society, vol. 16, no. 3, pp. 100-100, 2009.
[42]“QT,” http://qt.nokia.com/products/, 2010.
