
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 林祐聖
Author (English): Yu-Sheng Lin
Title: 自動眼睛偵測及眼鏡反光消除
Title (English): Automatic Eye Detection and Reflection Separation within Glasses
Advisor: 張志永
Advisor (English): Jyh-Yeong Chang
Degree: Master's
Institution: National Chiao Tung University
Department: Electrical and Control Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2006
Academic year of graduation: 94 (ROC calendar)
Language: English
Pages: 79
Keywords (Chinese): 眼睛偵測; 反光消除
Keywords (English): eye detection; reflection separation
Record statistics:
  • Cited by: 1
  • Views: 888
  • Rating: (none)
  • Downloads: 122
  • Saved to personal bibliographies: 0
Abstract (translated from the Chinese):

Eye detection is used in many lines of research, such as face recognition, drowsiness detection, and eye-gaze tracking. When the subject wears glasses, however, detection is often misled by the color of the frames and by reflections on the lenses, so this interference must be dealt with. In this thesis we propose an algorithm that locates the eyes in a face image and, when the subject wears glasses, removes the interference from the frames and from reflections within the lenses. The system consists of three modules: face localization, eye detection, and, for subjects wearing glasses, a method for removing lens reflections. First, a universal skin-color map detects the face region, which keeps the system adaptable to changes in ambient lighting. Next, edge and corner detection combined with an anisotropic-diffusion transform detects the eye region and recovers eye information from the reflection area on the lenses. Reflection separation rests on the principle that when a reflection image is correctly decomposed into a foreground layer and a reflection layer, the total number of edges and corners in the two layers is the minimum over all possible decompositions. The results show that this principle separates lens reflections effectively and yields good separation.
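The skin-color stage described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the CbCr thresholds are the commonly quoted ranges from Chai and Ngan's skin-color map, the conversion follows ITU-R BT.601, and the later regularization stages (density classification, luminance regularization, geometric correction) are omitted.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) uint8 RGB image to YCbCr (ITU-R BT.601)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary skin map: a pixel counts as skin-colored when its
    chrominance falls in the fixed CbCr ranges. Luminance is ignored,
    which is what gives the map its tolerance to lighting changes."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# Tiny demo: a skin-toned patch vs. a blue background patch.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, :] = [200, 140, 120]   # plausible skin tone
img[1, :] = [30, 60, 200]     # non-skin background
print(skin_mask(img))
```

Because the decision uses chrominance only, the same thresholds work over a wide range of brightness levels, which is the adaptability the abstract refers to.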
Abstract (English):

Eye detection has been applied in many contexts, for instance face recognition, eye-gaze detection, and drowsiness detection. However, eye detection often fails because of the interference caused by spectacles when the subject wears them. This thesis presents an algorithm that automatically detects the eye locations in a given face image and, when the subject wears glasses, separates the reflections within them. Our system consists of three modules: face segmentation, eye-region detection, and separation of reflections within the glasses. First, we use the universal skin-color map to detect the face region, which ensures sufficient adaptability to ambient lighting conditions. Then we propose a novel method, based on edge detection, corner detection, and the anisotropic-diffusion transform, to detect the eye region and separate the reflection within the glasses. The principle of reflection separation is that the correct decomposition of a reflection image is the one whose total count of corners and edges is the smallest among all possible decompositions. The simulation results demonstrate that this principle applies effectively to reflections within glasses and yields good reflection separation.
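The minimum-edges-and-corners principle can be illustrated with a toy one-dimensional example, in the spirit of the one-dimensional family of solutions tested in Chapter 4. The sketch below is illustrative only: `edge_count` is a crude stand-in for the thesis's oriented-filter edge-and-corner cost, and the "image" is two synthetic step edges rather than a real lens photograph.

```python
import numpy as np

def step(n, pos, height=1.0):
    """1-D step signal of length n, rising at index pos."""
    s = np.zeros(n)
    s[pos:] = height
    return s

def edge_count(signal, thresh=0.01):
    """Number of positions whose gradient magnitude exceeds thresh --
    a crude 1-D stand-in for the edge-and-corner cost."""
    return int(np.sum(np.abs(np.diff(signal)) > thresh))

n = 100
mixture = step(n, 30) + step(n, 70)   # observed row: foreground edge + reflection edge

# One-parameter family of decompositions of the mixture: t is the
# fraction of the first edge assigned to layer 1; the sum constraint
# mixture = l1 + l2 sends the complementary fractions to layer 2.
costs = {}
for t in np.linspace(0.0, 1.0, 11):
    l1 = t * step(n, 30) + (1.0 - t) * step(n, 70)
    l2 = mixture - l1
    costs[round(float(t), 1)] = edge_count(l1) + edge_count(l2)

best_t = min(costs, key=costs.get)
print(costs)
print("best t:", best_t)
```

The cost is minimized exactly at the decompositions that assign each edge wholly to one layer (t = 0 or t = 1): splitting an edge between the two layers makes it appear in both, doubling its contribution to the total edge count.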
Contents

ABSTRACT (CHINESE)
ABSTRACT (ENGLISH)
ACKNOWLEDGEMENT
CONTENTS
LIST OF FIGURES
LIST OF TABLES

CHAPTER 1 INTRODUCTION
1.1 Motivation of This Research
1.2 Face Detection
1.3 Eye Location and Glasses Existence Detection
1.4 Glasses Reflection Separation System
1.5 Flowchart of the Eye Detection and Glasses Reflection Separation System
1.6 Thesis Outline

CHAPTER 2 FACE SEGMENTATION
2.1 Introduction
2.2 Face Segmentation Algorithm

CHAPTER 3 EYE DETECTION, GLASSES EXISTENCE DETECTION, AND REFLECTION SEPARATION WITHIN GLASSES
3.1 Introduction
3.1.1 Edge Detection
3.1.2 Corner Detection
3.1.3 Anisotropic Diffusion
3.2 Eye Detection and Glasses Existence Detection
3.2.1 Eyeball Extraction with Glasses
3.3 Reflection Separation within Glasses
3.3.1 Introduction to Reflection Separation
3.3.2 Edges, Corners, and the Cost Function
3.3.3 Oriented Filters
3.3.4 Implementation

CHAPTER 4 SIMULATION AND RESULTS
4.1 Eye Detection
4.1.1 Eye Detection on Images of a Bare Face, a Face Wearing Glasses, and a Face Wearing Light Sunglasses
4.1.2 Eye Detection on Images of a Face Wearing Dark Sunglasses
4.2 Reflection Separation within Glasses
4.2.1 One-Dimensional Reflection Separation
4.2.2 Reflection Separation by Discretization

CHAPTER 5 CONCLUSION

REFERENCES







List of Figures

Fig. 1.1. Flowchart of the eye detection and reflection separation system
Fig. 2.1. Outline of the face-segmentation algorithm
Fig. 2.2. Original image
Fig. 2.3. Image after filtering by the skin-color map in stage A
Fig. 2.4. Density map after classification into three classes
Fig. 2.5. Image produced by stage B
Fig. 2.6. Image produced by stage C
Fig. 2.7. Image produced by stage D
Fig. 3.1. Sobel edge-detector masks for the horizontal and vertical gradients, respectively
Fig. 3.2. Modified Sobel edge-detector masks
Fig. 3.3. Example of applying the edge detector to an image
Fig. 3.4. Example 1 of corner detection
Fig. 3.5. Example 2 of corner detection
Fig. 3.6. The stopping function g(·)
Fig. 3.7. Local neighborhood of pixels at a boundary (intensity discontinuity)
Fig. 3.8. Example of anisotropic-diffusion processing
Fig. 3.9. Example of a face without glasses
Fig. 3.10. Example of a face with glasses
Fig. 3.11. Example of a face with sunglasses
Fig. 3.12. Example of eye detection on sunglasses
Fig. 3.13. Example of eye-location determination on an image of a face without glasses
Fig. 3.14. Example of eye-location determination on an image of a face wearing glasses
Fig. 3.15. An example of edge detection on the eye region
Fig. 3.16. Eye extraction from the edge map of Fig. 3.15
Fig. 3.17. Example of a reflection image and its decomposition
Fig. 3.18. An example of an input image and its possible decompositions
Fig. 3.19. Example of a filter bank
Fig. 3.20. Example of separation results for images with reflections using discretization
Fig. 4.1. Example 1 of eye location on an image of a bare face
Fig. 4.2. Example 2 of eye location on an image of a bare face
Fig. 4.3. Example 3 of eye location on an image of a bare face
Fig. 4.4. Example 4 of eye location on an image of a bare face
Fig. 4.5. Example 5 of eye location on an image of a bare face
Fig. 4.6. Example 1 of eye location on an image of a face wearing glasses
Fig. 4.7. Example 2 of eye location on an image of a face wearing glasses
Fig. 4.8. Example 3 of eye location on an image of a face wearing glasses
Fig. 4.9. Example of eye location on an image of a face wearing light sunglasses
Fig. 4.10. Example 1 of eye location on a face wearing dark sunglasses
Fig. 4.11. Example 2 of eye location on a face wearing dark sunglasses
Fig. 4.12. Example 1 of testing a one-dimensional family of solutions
Fig. 4.13. Example 2 of a one-dimensional family of solutions
Fig. 4.14. Example 1 of separation results for images with reflections using discretization
Fig. 4.15. Example 2 of separation results for images with reflections using discretization
Fig. 4.16. Example 3 of separation results for images with reflections using discretization
Fig. 4.17. Example of separation results for images with different reflections using discretization