Author: 王維鈞
Author (English): Wei-Jun Wang
Title: 應用改良式霍氏轉換與眼眉區塊搜尋於人臉偵測
Title (English): Face Detection with Modified Ellipse Hough Transform and Eye-Brow Block Searching
Advisor: 周瑞仁
Advisor (English): Jui-Jen Chou
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Bio-Industrial Mechatronics Engineering
Discipline: Engineering
Field of study: Mechanical Engineering
Document type: Academic thesis
Year of publication: 2004
Academic year of graduation: 92 (ROC calendar, 2003-2004)
Language: Chinese
Number of pages: 98
Keywords (Chinese): 橢圓偵測, 人臉偵測, 霍氏轉換, 人眼偵測
Keywords (English): ellipse detection, Hough transform, face detection, eye detection
Abstract (Chinese):
Based on ellipticity and eye-brow features, this study proposes a modified ellipse Hough transform (MEHT) together with robust noise filtering and eye-brow block searching (RNFEBS) for face detection. For detecting faces against complex backgrounds in static gray-level images, the most useful cue is the elliptical shape of the face, and within the face the most salient and reliable features are the eyes. The MEHT is first used to estimate the approximate position of the face; RNFEBS is then applied within a bounded region around that position, using a step-by-step search procedure, to locate the eyes, after which the position and size of the face can be determined.
Unlike the accumulator design of the traditional Hough transform, the two improved accumulators proposed in MEHT consider both the more important template ellipticity and the secondary contrast of the detected object. Adjusting the accumulator parameters KM and KA suppresses the influence of contrast on the accumulated value, so that the contribution of contrast remains smaller than that of template ellipticity. Ranges for KM and KA are also derived from a noise-margin condition, which makes the method robust for ellipse detection against complex backgrounds. In RNFEBS, the eye-brow block, the eyes, and the face position are located in sequence according to the geometric features and distribution of the eyes; the developed method is both efficient and accurate.
Abstract (English):
This study proposes a modified ellipse Hough transform (MEHT) together with robust noise filtering and eye-brow block searching (RNFEBS) for face detection, based on the ellipticity, contrast, and geometrical features of objects in an image. For face detection in a static gray-level image with a complex background, the most valuable information is the elliptical shape of the face and the geometrical features of the eyes. With MEHT, the position of the face center is first estimated; RNFEBS is then applied to search for the eye-brow block and locate the eye pair. The location and size of the face can be decided accordingly.
Unlike the accumulator design of the traditional Hough transform, MEHT develops two types of accumulators that depend on both the ellipticity and the contrast of objects in an image. The factors KM and KA in the accumulators can be tuned to suppress the influence of contrast on the accumulated value, so that contrast contributes less than ellipticity. When KM and KA are chosen carefully based on the noise margin, MEHT is very robust for detecting non-standard ellipses against complex backgrounds. The positions of the eye-brow block, the eye pair, and the face are then obtained by RNFEBS from the geometrical features of the eyes. The approach developed in this study is shown to be highly efficient and accurate.
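To make the accumulator idea in the abstract concrete, below is a minimal NumPy sketch of a center-voting ellipse Hough accumulator in which each edge pixel's vote is the sum of a shape term weighted by km and a contrast term weighted by ka, so that the contribution of contrast can be kept below that of the shape term. This is only an illustration under assumed simplifications: the function name eht_center_votes, the fixed axis-aligned template with semi-axes a and b, the crude center offset along the edge normal, and the linear weighting km + ka * contrast are all assumptions of this sketch, not the thesis's actual MEHT accumulator or its definitions of KM and KA.

import numpy as np

def eht_center_votes(gray, a, b, km=1.0, ka=0.2, grad_thresh=20.0):
    """Accumulate votes for candidate centers of an axis-aligned ellipse
    template with semi-axes (a, b) in a gray-level image.

    Each strong edge pixel casts a vote whose weight mixes a fixed shape
    term (scaled by km) with a normalized contrast term (scaled by ka);
    choosing ka < km keeps the contrast contribution below the shape
    contribution, in the spirit of the KM / KA tuning described above.
    Illustrative sketch only, not the thesis's MEHT accumulator."""
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))      # image gradients
    mag = np.hypot(gx, gy)                        # local contrast (edge strength)
    acc = np.zeros((h, w), dtype=float)           # accumulator over candidate centers

    ys, xs = np.nonzero(mag > grad_thresh)        # keep only strong edge pixels
    mag_max = mag.max() if mag.max() > 0 else 1.0
    for y, x in zip(ys, xs):
        nx, ny = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]   # unit edge normal
        for sign in (+1, -1):                     # the center may lie on either side
            # Crude approximation: step along the normal, scaled by the semi-axes.
            cx = int(round(x + sign * a * nx))
            cy = int(round(y + sign * b * ny))
            if 0 <= cx < w and 0 <= cy < h:
                contrast = mag[y, x] / mag_max    # normalized to [0, 1]
                acc[cy, cx] += km + ka * contrast # bounded contrast contribution
    return acc

# Rough face-center estimate: the accumulator peak.
# gray = ...  # 2-D NumPy array holding a gray-level image
# acc = eht_center_votes(gray, a=40, b=55)
# cy, cx = np.unravel_index(acc.argmax(), acc.shape)

A peak of acc gives a rough face-center estimate; a bounded search around that point for the eye-brow block and the eye pair would then follow, in the spirit of RNFEBS.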
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
Chapter 1 Introduction
Chapter 2 Literature Review
2.1 Image Analysis
2.2 Hough Transform
2.2.1 Standard Hough Transform
2.2.2 Circle Hough Transform
2.2.3 Ellipse Hough Transform
2.2.4 Higher-Dimensional Hough Transforms
2.2.5 Generalized Hough Transform
2.3 Face Detection
2.3.1 Face Detection Using Individual Facial Features
2.3.2 Face Detection Using Holistic Facial Features
Chapter 3 Theory and Methods
3.1 Modified Ellipse Hough Transform
3.1.1 Preprocessing for MEHT: Tangent Vectors
3.1.1.1 Approximate Normal Vectors of a Surface
3.1.1.2 Angle Image of the Tangent Vectors
3.1.1.3 Magnitude Image of the Tangent Vectors
3.1.2 Accumulator Design of MEHT
3.1.3 Determining KM and KA from the Noise Margin
3.2 Morphological Operations
3.3 Robust Noise Filtering and Eye-Brow Block Searching
3.3.1 Binarization Threshold from the 6% Significance Level of the Histogram
3.3.2 Noise Filtering Based on the Geometric Features of Objects
3.3.2.1 Correction of the Face Center Position
3.3.2.2 Removal of Objects That Do Not Match Eye Features
3.3.3 Noise Filtering by Histogram Analysis
3.3.3.1 Horizontal Histogram Analysis
3.3.3.2 Vertical Histogram Analysis
3.3.4 Locating the Eyes within the Eye-Brow Block
3.3.5 Verification and Correction of the Eye Positions
3.3.6 Face Region
Chapter 4 Experimental Results and Discussion
4.1 Experimental Results of the Modified Ellipse Hough Transform
4.1.1 Effect of KM and KA on the Internal and External Accumulated Values
4.1.2 Effect of Contrast and Ellipticity on the Internal Accumulated Values of MEHT and EHT
4.1.3 Detection of Face Positions in Complex Backgrounds
4.2 Experimental Results of the Morphological Operations
4.3 Experimental Results of Robust Noise Filtering and Eye-Brow Block Searching
4.3.1 Results of Binarization Thresholding at the 6% Significance Level of the Histogram
4.3.2 Results of Noise Filtering Using the Geometric Features of the Eyes
4.3.3 Results of Obtaining the Eye-Brow Region by Histogram Analysis
4.3.4 Locating, Verifying, and Correcting the Eyes within the Eye-Brow Region
4.4 Additional Overall Results and Determination of the Face Region
Chapter 5 Conclusions and Recommendations
5.1 Conclusions
5.2 Recommendations
References
1. Ballard, D. H., 1981, Generalizing the Hough transform to detect arbitrary shapes, Pattern Recognition 13(2): 111-122.

2. Chow, G. and X. Li, 1993, Towards a system for automatic facial feature detection, Pattern Recognition 26(12): 1739-1755.

3. Davies, E. R., 1990, Machine Vision, Academic Press Limited, London.

4. DeCarlo, D. and D. Metaxas, July 2000, Optical Flow Constraints on Deformable Models with Applications to Face Tracking, International Journal of Computer Vision 38(2): 99-127.

5. Donahue, M. J. and S. I. Rokhlin, 1993, On the use of Level Curves in Image Analysis, Image Understanding, 57: 185-203.

6. Duda, R. O. and P. E. Hart, 1972, Use of the Hough transformation to detect lines and curves in pictures, Commun. ACM, 15(1): 11-15.

7. Gonzalez, R. C. and R. E. Woods, 2002, Digital Image Processing, Prentice Hall.

8. Han, C.-C., M.-C. Chen, H.-R. Tyan, and H.-Y. Mark Liao, 1997, Fast Face Detection via Morphology-based Pre-processing, Images and Recognition 4(1): 15-22.

9. Hough, P. V. C., Dec. 18, 1962, Method and means for recognizing complex patterns, U.S. Patent 3,069,654.

10. Hsu, R.-L., M. Abdel-Mottaleb, and A. K. Jain, May 2002, Face detection in color images, IEEE Trans. Pattern Analysis and Machine Intelligence, 24 (5): 696-706

11. Huang, J., S. Gutta, and H. Wechsler, 1996, Detection of human faces using decision trees, in “IEEE Proc. of 2nd Int. Conf. on Automatic Face and Gesture Recognition”, Vermont.

12. Hunke, M. and A. Waibel, 1994, Face locating and tracking for human-computer interaction, in “28th Asilomar Conference on Signals, Systems and Computers”, Monterey, CA.

13. Kass, M., A. Witkin, and D. Terzopoulos, 1987, Snakes: active contour models, in “Proc. of 1st Int. Conf. on Computer Vision”, London.

14. Hanahara, K., T. Maruyama, and T. Uchiyama, 1988, A Real-Time Processor for the Hough Transform, IEEE Transactions on Pattern Analysis and Machine Intelligence 10(1): 121-125.

15. Lam, K. M. and H. Yan, Nov. 1994, Facial feature location and extraction for computerised human face recognition, in Int. Symposium on Information Theory and Its Applications, Sydney, Australia.

16. Leavers, V. F., 1992, Shape Detection in Computer Vision Using the Hough Transform, Springer-Verlag London Limited.

17. Lin, S.-H., S.-Y. Kung, and L.-J. Lin, 1997, Face recognition/detection by probabilistic decision-based neural network, IEEE Trans. Neural Networks 8: 114–132.

18. Li, X. and N. Roeder, 1995, Face contour extraction from front-view images, Pattern Recognition, 28: 1167-1179.

19. Low, B. K. and M. K. Ibrahim, 1997, A fast and accurate algorithm for facial feature segmentation, in “Proceedings International Conference on Image Processing”.

20. Luthon, F. and M. Lievin, 1997, Lip motion automatic detection, in “Scandinavian Conference on Image Analysis”, Lappeenranta, Finland.

21. Maio, D. and D. Maltoni, 2000, Real-time face location on gray-scale static images, Pattern Recog. 33: 1525–1539.

22. Marr, D. and E. Hildreth, 1980, Theory of edge detection, in Proc. of the Royal Society of London.

23. McKenna, S., S. Gong, and J. J. Collins, Sept. 1996, Face tracking and pose representation, in “British Machine Vision Conference”, Edinburgh, Scotland.

24. Miao, J., H. Liu, W. Gao, H. Zhang, G. Deng, and X. Chen, 2003, A System for Human Face and Facial Feature Location, Int. J. Image Graphics 3(3): 461-480.

25. Moghaddam, B. and A. Pentland, 1995, Maximum likelihood detection of faces and hands, in Proc. Int. Workshop on Automatic Face and Gesture Recognition: 122-128.

26. Moghaddam, B. and A. Pentland, 1997, Probabilistic visual learning for object representation, IEEE Trans. Pattern Anal. Mach. Intell. 19.

27. Nesi, P. and R. Magnolfi, 1996, Tracking and synthesizing facial motions with dynamic contours, Real-Time Imaging, 2: 67-79.

28. Nikolaidis, A. and I. Pitas, 2000, Facial feature extraction and pose determination, Pattern Recog. 33: 1783–1791.

29. Pao, D., H.F. Li, R. Jayakumar, 1993, A decomposable parameter space for the detection of ellipses, Pattern Recognition Letters, 14: 951-958.

30. Reinders, M. J. T., P. J. L. van Beek, B. Sankur, and J. C. A. van der Lubbe, 1995, Facial feature localization and adaptation of a generic face model for model-based coding, in Signal Processing: Image Communication: 57–74.

31. Rowley, H. A., S. Baluja, and T. Kanade, January 1998, Neural network-based face detection, IEEE Trans. Pattern Anal. Mach. Intell. 20: 23–38.

32. Sakai, T., M. Nagao, and T. Kanade, 1972, Computer analysis and classification of photographs of human faces, in Proc. First USA—Japan Computer Conference: 2-7.

33. Schubert, A., 2000, Detection and tracking of facial features in real time using a synergistic approach of spatiotemporal models and generalized Hough-transform techniques, in “Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition”.

34. Sirovich, L. and M. Kirby, 1987, Low-dimensional procedure for the characterization of human faces, J. Opt. Soc. Amer. 4: 519–524.

35. Tsuji, S. and F. Matsumoto, 1978, Detection of ellipses by modified Hough transformation, IEEE Transactions on Computers, 27: 777-781.

36. Wachs, J., H. Stern, and M. Last, 2002, Color Face Segmentation Using a Fuzzy Min-Max Neural Network, Int. J. Image Graphics 2(4): 587-602.

37. Wechsler, H. and J. Sklansky, 1977, Automatic detection of ribs in chest radiographs, Pattern Recognition, 9: 21-30.

38. Wu, W., M.J. Wang, 1993, Elliptical object detection by using its geometrical properties, Pattern Recognition, 26: 1499-1509.

39. Yang, G. and T. S. Huang, 1994, Human face detection in a complex background, Pattern Recog. 27: 53–63.

40. Yang, J. and A. Waibel, 1996, A real-time face tracker, in IEEE Proc. of the 3rd Workshop on Applications of Computer Vision, Florida.

41. Yip, R. K. K., P. K. S. Tam, and D. N. K. Leung, 1992, Modification of the Hough transform for circles and ellipses detection using a 2-dimensional array, Pattern Recognition, 25: 1007-1022.

42. Yokoyama, T., Y. Yagi, and M. Yachida, 1998, Facial contour extraction model, in “IEEE Proc. of 3rd Int. Conf. on Automatic Face and Gesture Recognition”.

43. Yuille, A. L., P. W. Hallinan, and D. S. Cohen, 1992, Feature extraction from faces using deformable templates, Int. J. Comput. Vision 8: 99–111.