
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 戴銓成
Title: 色彩模型與霍夫轉換於即時人眼追蹤系統之應用
Title (English): Applying Color Model and Hough Transform in a Real-Time Eye Tracking System
Advisor: 賴政良
Degree: Master's
Institution: 佛光大學 (Fo Guang University)
Department: 資訊應用學系 (Department of Applied Informatics)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2013
Graduation academic year: 101 (2012–2013)
Language: Chinese
Pages: 47
Keywords (Chinese): 眼動追蹤; 眼睛偵測; 色彩空間
Keywords (English): Eye detection; Eye tracking; Color space
Statistics:
  • Cited by: 0
  • Views: 646
  • Downloads: 38
  • Bookmarked: 0
The eyes are among the most prominent features of the human face and play an important role in human behavior, expressing emotion, need, desire, cognition, and social interaction. Through eye movements we reveal our view of, and attention to, the visual world, and through them we gather a great deal of information.
Eye tracking (also called gaze estimation) measures where the eyes are looking, or the motion of the eyes relative to the head. It has a wide range of applications in psychology, educational research, human-computer interaction, input interfaces, and simulation training. Eye-tracking methods fall into two broad categories: invasive and non-invasive.
This study adopts a non-invasive method that uses only the webcam built into the top of a laptop. Each captured face image is converted from the RGB color space to the YCbCr color space, and skin-color filtering quickly discards non-face regions. The remaining image is passed through a Gaussian image pyramid, which reduces the amount of data while retaining sufficient features. After considering the triangular spatial relationship between the face and the two eyes, the candidate eyes consistent with facial geometry are selected. Within the eye regions, horizontal, vertical, and neighborhood-mean projections together with a Hough ellipse transform locate the iris and pupil, and the pupil center is then determined. Without machine learning of faces, eyes, or irises, and without an external high-resolution webcam or external monitor, the pupil position is obtained in real time for eye tracking. This removes the need to build a feature database beforehand and the time required to train on data, greatly improving convenience of use, and the method also maintains a good detection rate under slight head tilt.
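The RGB-to-YCbCr skin-color filtering step described above can be sketched as follows. This is a minimal illustration, assuming the ITU-R BT.601 conversion formulas and the commonly cited Chai–Ngan Cb/Cr skin interval (Cb in 77–127, Cr in 133–173); the thesis's exact thresholds may differ.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 RGB -> YCbCr conversion for 8-bit values (0..255)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of pixels whose Cb/Cr fall inside an assumed
    skin-color range (illustrative thresholds, not the thesis's)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Because the mask depends only on the chrominance channels (Cb, Cr) and ignores luminance (Y), it is relatively robust to lighting changes, which is the usual motivation for filtering skin in YCbCr rather than RGB.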
The eyes are not only among the most significant features of the face but also play an important role in human behavior: they express our emotions, needs, aspirations, cognition, and social interaction. Through eye movements we reveal our perception of, and attention to, the visual world, from which we gather a great deal of information.
Eye tracking (or gaze estimation) measures the gaze position, or the movement of the eyes relative to the head, and is widely applied in psychology, educational research, human-computer interaction, input interfaces, simulation training, and more. Eye-tracking methods can be classified as "invasive" or "non-invasive".
The method in this study is non-invasive, and its only equipment is the webcam built into the top of a laptop. First, the webcam captures a face image. Second, the image is converted from the RGB color space to the YCbCr color space. Third, non-face regions are excluded by skin-color filtering, and the remaining image is passed through a Gaussian image pyramid, which reduces the image data while retaining enough features. Fourth, the relative positions of the face and eyes are modeled as a triangle. Fifth, the eyes are located according to this triangle. Sixth, horizontal, vertical, and neighborhood-mean projections together with a Hough transform identify the iris and the pupil ellipse. Finally, the pupil center is located. In this way the pupil position is obtained immediately and eye tracking is performed efficiently, without building a large feature database and without an external HD webcam or external monitor. The method avoids the trouble of creating feature data and the time needed to train on it, maintains a good detection rate under slight head inclination, and thereby improves convenience of use.
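The projection step above (step six) can be illustrated with plain horizontal and vertical mean projections. This is a simplified stand-in for the thesis's neighborhood-mean projection and Hough-ellipse refinement: it only exploits the fact that the iris and pupil are darker than the surrounding sclera and skin, and the toy image in the usage note is illustrative.

```python
import numpy as np

def integral_projections(gray):
    """Horizontal and vertical mean-intensity projections of a
    grayscale eye region: one mean per row, one mean per column."""
    horiz = gray.mean(axis=1)  # horizontal projection, indexed by row
    vert = gray.mean(axis=0)   # vertical projection, indexed by column
    return horiz, vert

def locate_dark_center(gray):
    """Estimate the pupil/iris center as the darkest row and the
    darkest column of the two projections."""
    horiz, vert = integral_projections(gray)
    return int(np.argmin(horiz)), int(np.argmin(vert))
```

For example, a 9x9 patch of bright pixels with a small dark blob centered at row 4, column 6 yields `locate_dark_center(...) == (4, 6)`; a real system would refine this coarse estimate, e.g. with a Hough ellipse fit around the detected region.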
Abstract (Chinese) I
Abstract (English) II
Acknowledgments III
Contents IV
List of Figures V
List of Tables VII
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Objectives 2
Chapter 2 Literature Review 3
2.1 Facial Features 3
2.2 Eye Detection 5
Chapter 3 Methodology 10
3.1 Image Processing 10
3.1.1 Preprocessing 10
3.1.2 Color Space 14
3.1.3 Hough Transform 19
3.2 Face Detection 22
3.3 Eye Detection 24
Chapter 4 Experimental Results 26
4.1 Equipment and Procedure 26
4.2 Face Detection Experiments 27
4.3 Eye Detection Experiments 31
Chapter 5 Conclusions 36
References 37

List of Figures
Fig. 1. Haar-like features 3
Fig. 2. Triangular relationship between the eyes and mouth 4
Fig. 3. Correct detection rates obtained with the best color space for each image 4
Fig. 4. Infrared light source and its reflection on the iris 5
Fig. 5. Isophotes and circle detection 5
Fig. 6. Selecting the eye region by image projection 6
Fig. 7. Daugman's algorithm locating the circular iris and pupil regions 6
Fig. 8. Selecting the skin-color range with HSV 7
Fig. 9. SVM data classification 7
Fig. 10. Integral projection (IPF, dashed) and minimum neighborhood mean projection (MNMPF, solid) 8
Fig. 11. Electro-oculography (EOG) 8
Fig. 12. Accumulation around the left iris 9
Fig. 13. RGB image converted to grayscale 11
Fig. 14. Original and histogram-equalized images with their histograms 11
Fig. 15. Binarization of a grayscale image 12
Fig. 16. Dilation 13
Fig. 17. Erosion 13
Fig. 18. Connected components 13
Fig. 19. Gaussian pyramid 14
Fig. 20. RGB color cube 15
Fig. 21. Cone representation of the HSV model 16
Fig. 22. Skin-color distribution in the YCbCr color space 17
Fig. 23. Skin-color distribution in the HSV color space 17
Fig. 24. Mapping points on the x-y plane to the a-b plane 20
Fig. 25. Mapping points in the original image space to the polar parameter space 20
Fig. 26. Mapping points on the x-y plane to the ρ-θ plane 21
Fig. 27. Distance between a point in the plane and the circumference of an ellipse 22
Fig. 28. Triangular relationship between the eyes and mouth 22
Fig. 29. "Three sections, five eyes" facial proportions 24
Fig. 30. Image projection 25
Fig. 31. Experimental flowchart 27
Fig. 32. RGB image converted to the YCbCr color space 28
Fig. 33. Excluding non-skin-color regions 28
Fig. 34. Successive Gaussian image pyramid levels 29
Fig. 35. Finding contours and ellipses by connected components 29
Fig. 36. Spatial relationship between the face and the eyes 30
Fig. 37. Illustration of a tilted face 30
Fig. 38. Left-eye region converted to grayscale 31
Fig. 39. Maximum neighborhood mean integral projection of the eye-socket region 32
Fig. 40. Horizontal and vertical peaks of the maximum neighborhood mean integral projection of the eye-socket region 33
Fig. 41. Finding the iris by morphological opening 33
Fig. 42. Minimum ellipse from the Hough transform 33
Fig. 43. Nine gaze points on the screen 34
Fig. 44. Pupil center localization 34

List of Tables
Table 1. Correct recognition rates of different color spaces over three channels 19
Table 2. Experimental hardware and software 26
Table 3. Face detection results 31
Table 4. Eye detection results 35
Table 5. Correct pupil detection rates 35
[1] P. Viola and M. J. Jones, “Robust Real-Time Face Detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, May 2004.
[2] R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection,” in Proceedings of the 2002 International Conference on Image Processing, 2002, vol. 1, pp. I-900–I-903.
[3] C. Lin, “Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network,” Pattern Recognition Letters, vol. 28, no. 16, pp. 2190–2200, 2007.
[4] J. M. Chaves-González, M. A. Vega-Rodríguez, J. A. Gómez-Pulido, and J. M. Sánchez-Pérez, “Detecting skin in face recognition systems: A colour spaces study,” Digital Signal Processing, vol. 20, no. 3, pp. 806–823, 2010.
[5] A. Perez, M. Cordoba, A. Garcia, R. Mendez, M. Munoz, J. Pedraza, and F. Sanchez, “A precise eye-gaze detection and tracking system,” in Proceedings of the 11th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2003.
[6] R. Valenti and T. Gevers, “Accurate eye center location and tracking using isophote curvature,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008, pp. 1–8.
[7] R. Valenti, J. Staiano, N. Sebe, and T. Gevers, “Webcam-based visual gaze estimation,” Image Analysis and Processing–ICIAP 2009, pp. 662–671, 2009.
[8] K. Peng, L. Chen, S. Ruan, and G. Kukharev, “A Robust Algorithm for Eye Detection on Gray Intensity Face without Spectacles,” Journal of Computer Science & Technology, vol. 5, no. 3, pp. 127–132, Oct. 2005.
[9] 陳淑斐, 孫振東, 吳憲忠, 杜子佑, and 王太昌, “眼控滑鼠之研究 [A study on an eye-controlled mouse],” in KC2011 第七屆知識社群研討會 (7th Knowledge Community Conference), 2011, p. SYS-03.
[10] H. Razalli, R. W. O. K. Rahmat, and R. Mahmud, “Real–Time Eye Tracking and Iris Localization,” Jurnal Teknologi, vol. 50, pp. 43–57, Jun. 2009.
[11] 林瑞硯, “使用網路攝影機即時人眼偵測與注視點分析 [Real-time eye detection and gaze point analysis using a webcam],” Master's thesis, 國立臺灣師範大學 (National Taiwan Normal University), Taipei, 2010.
[12] Yu-Tzu Lin, Ruei-Yan Lin, Yu-Chih Lin, and Greg C. Lee, “Real-time eye-gaze estimation using a low-resolution webcam,” Multimedia Tools and Applications, Aug. 2012.
[13] C. Cortes and V. Vapnik, “Support-vector networks,” Mach Learn, vol. 20, no. 3, pp. 273–297, Sep. 1995.
[14] 鄭穎 and 汪增福, “最小鄰域均值投影函數及其在眼睛定位中的應用 [Minimum neighborhood mean projection function and its application to eye location],” Journal of Software, vol. 19, no. 9, pp. 2322–2328, Sep. 2008.
[15] C.-C. Postelnicu, F. Girbacia, and D. Talaba, “EOG-based visual navigation interface development,” Expert Systems with Applications, vol. 39, no. 12, pp. 10857–10866, 2012.
[16] K. Yamagishi, J. Hori, and M. Miyakawa, “Development of EOG-based communication system controlled by eight-directional eye movements,” in Engineering in Medicine and Biology Society, 2006. EMBS’06. 28th Annual International Conference of the IEEE, 2006, pp. 2574–2577.
[17] N. Cherabit, F. Zohra Chelali, and A. Djeradi, “Circular Hough Transform for Iris localization,” Science and Technology, vol. 2, no. 5, pp. 114–121, Dec. 2012.
[18] J. Kovac, P. Peer, and F. Solina, “Human skin color clustering for face detection,” in EUROCON 2003: Computer as a Tool, The IEEE Region 8, 2003, vol. 2, pp. 144–148.
[19] V. Vezhnevets, V. Sazonov, and A. Andreeva, “A survey on pixel-based skin color detection techniques,” in Proc. Graphicon, 2003, vol. 3, pp. 85–92.
[20] C. Garcia and G. Tziritas, “Face detection using quantized skin color regions merging and wavelet packet analysis,” IEEE Transactions on Multimedia, vol. 1, no. 3, pp. 264–277, 1999.
[21] D. Chai and K. N. Ngan, “Face segmentation using skin-color map in videophone applications,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, no. 4, pp. 551–564, 1999.
[22] G. Kukharev and A. Nowosielski, “Visitor identification–elaborating real time face recognition system,” Proc. 12th Winter School on Computer Graphics (WSCG), Plzen, Czech Republic, pp. 157–164, 2004.
[23] P. S. Hiremath and A. Danti, “Detection of Multiple Faces in an Image Using Skin Color Information and Lines-of-Separability Face Model,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 20, no. 01, pp. 39–61, Feb. 2006.
[24] Y. Wang and B. Yuan, “A novel approach for human face detection from color images under complex background,” Pattern Recognition, vol. 34, no. 10, pp. 1983–1992, 2001.
[25] R. O. Duda and P. E. Hart, “Use of the Hough transformation to detect lines and curves in pictures,” Commun. ACM, vol. 15, no. 1, pp. 11–15, 1972.
[26] X. Zhou, X. D. Kong, and G. H. Zeng, “Method of Ellipse Detection Based on Hough Transform,” Jisuanji Gongcheng/ Computer Engineering, vol. 33, no. 16, pp. 166–167, 2007.
[27] S. Inverso, Ellipse detection using randomized Hough transform. Technical Report, Department of Computer Science, Rochester Institute of Technology, NY, 2002.
[28] M.-H. Yang, D. J. Kriegman, and N. Ahuja, “Detecting faces in images: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34–58, 2002.
[29] 何光輝, 唐遠炎, 房斌, and 張太平, “圖像分割方法在人臉識別中的應用 [Application of image segmentation methods to face recognition],” 計算機工程與應用 (Computer Engineering and Applications), vol. 46, no. 28, p. 196, 2010.
[30] N. A. Dodgson, “Variation and extrema of human interpupillary distance,” presented at the Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 2004, vol. 5291, pp. 36–46.