Author: 謝怡竹
Author (English): Yi-Jwu Hsieh
Title: 以光流為基礎之自動化表情辨識系統
Title (English): An Optical-Flow-Based Automatic Expression Recognition System
Advisor: 蘇木春
Advisor (English): Mu-Chun Su
Degree: Master's
Institution: National Central University
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2005
Graduation academic year: 93 (2004-2005)
Language: Chinese
Number of pages: 59
Keywords (Chinese): 自動化表情辨識、臉部動作編碼系統、光流追蹤、特徵區域、人臉偵測
Keywords (English): automatic expression recognition, FACS, optical flow tracking, feature region, face detection
Statistics:
  • Cited by: 18
  • Views: 334
  • Downloads: 81
  • Bookmarked: 1
Abstract: Facial expression recognition has attracted much research attention in recent years. This thesis develops an automatic expression recognition system that, given an input image sequence, automatically performs face detection, feature extraction, and expression recognition. By combining automatic face detection, the concept of feature regions, and optical flow tracking, a fully automatic expression recognition system can be built simply and efficiently.
Most traditional expression recognition systems seek to automatically track specific facial feature points (such as eye corners, eyebrow tips, and mouth corners) and use these extracted features as the basis for recognition. Experimental results show, however, that image quality, illumination, and other disturbances can prevent reliable extraction of such features; certain image properties alone introduce considerable error, and overcoming them costs computation time even when it is possible. Although clearly located feature points are highly informative, subtle muscle movements in non-feature areas of the face also convey changes of expression. We therefore adopt specific feature regions with uniformly distributed feature points, and recognize the facial expression from the motion of these points.
Based on this idea, after acquiring an image sequence we perform face detection on the first frame and locate three feature regions (the two eyes and the mouth) from geometric ratio relationships. To select the regions more accurately, Sobel edge detection and horizontal projection are applied to refine their boundaries. Once the regions are defined, feature points are distributed uniformly within them, 84 points in total over the three regions. An optical flow algorithm then tracks these 84 points through the subsequent frames, yielding 84 facial motion vectors on which the expression recognition is based. The recognition proceeds in two stages: in the first stage, three multilayer perceptrons are trained to recognize the basic action units of the three regions (eyebrows, eyes, and mouth); in the second, five single-layer perceptrons take the outputs of those perceptrons and recognize the basic emotional expressions. Experiments were conducted to evaluate the system, with good results.
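The region-refinement step described above (Sobel edge detection plus horizontal projection) can be sketched roughly as follows. This is a minimal NumPy illustration, not the thesis implementation: the toy image, the band width, and the choice of keeping the rows around the strongest projection response are all assumptions.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the two 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def refine_rows(region, band=3):
    """Horizontal projection: sum edge strength per row, then keep the
    `band` rows on either side of the strongest response."""
    mag = sobel_magnitude(region)
    proj = mag.sum(axis=1)            # one value per row
    r = int(np.argmax(proj))
    return max(0, r - band), min(mag.shape[0], r + band + 1)

# Toy "eye region": uniform background with one dark horizontal stripe,
# whose edges dominate the horizontal projection.
region = np.full((20, 30), 200.0)
region[9:11, :] = 40.0
top, bottom = refine_rows(region)
print(top, bottom)
```

The same idea applies per region: the projection profile localizes the eye or mouth band vertically inside the coarse geometric box.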
Abstract (English): Recently, researchers have put a lot of effort into the recognition of facial expressions. The goal of this thesis is to develop an automatic facial expression recognition system that performs human face detection, feature extraction, and facial expression recognition automatically once the images are fed in. Via automatic human face detection, the concept of facial feature regions, and an optical flow tracking algorithm, we can construct an automatic facial expression recognition system that achieves this goal.
Most traditional facial expression systems first look for a way to automatically track certain facial feature points (e.g., the canthi, eyebrows, and mouth corners) and then recognize expressions from these extracted features. Experimental results show, however, that facial features cannot always be obtained reliably because of image quality, illumination, and other disturbing factors; some image properties alone contribute substantial error, and overcoming them, when possible, costs considerable processing time. Although clearly located features contribute greatly to performance, changes of facial expression can also be perceived from slight muscle variations across the face. We therefore use specified feature regions with uniformly distributed feature points, and infer the facial expression from the motion of these points.
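The uniformly distributed feature points can be illustrated with a short sketch. The abstract fixes only the total of 84 points over three regions; the even 28-points-per-region split, the 4x7 grid layout, and the region boxes below are assumptions made for illustration.

```python
import numpy as np

def grid_points(x0, y0, x1, y1, rows=4, cols=7):
    """Place rows*cols points uniformly inside a rectangular feature region."""
    xs = np.linspace(x0, x1, cols)
    ys = np.linspace(y0, y1, rows)
    return [(float(x), float(y)) for y in ys for x in xs]

# Hypothetical region boxes (x0, y0, x1, y1): left eye, right eye, mouth.
regions = [(30, 40, 90, 70), (110, 40, 170, 70), (60, 110, 140, 150)]
points = [p for box in regions for p in grid_points(*box)]
print(len(points))   # 3 regions x 28 points = 84
```

Because the points are tied to regions rather than to anatomical landmarks, no individual point has to sit exactly on an eye corner; the grid as a whole samples the motion of the region.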
After a sequence of images is acquired, the first frame, following the proposed idea, is used to perform human face detection, and the three feature regions (the two eyes and the mouth) are obtained from their geometric ratio relationships. To increase the accuracy of locating the feature regions, Sobel edge detection incorporated with horizontal projection is used. After the three regions have been located, 84 feature points are uniformly distributed within them. The optical flow algorithm then tracks these 84 points through the following frames, so 84 facial motion vectors are derived from the tracking procedure, and the facial expression recognition is based on these 84 vectors. The recognition procedure involves two stages. In the first stage, three multi-layer perceptrons are trained to recognize the action units in the eyebrow, eye, and mouth regions. Then five single-layer perceptrons recognize the facial expressions from the outputs of the aforementioned three MLPs. Experiments were conducted to test the performance of the proposed facial expression recognition system.
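The tracking step is based on optical flow (the reference list includes a pyramidal Lucas-Kanade implementation [7]). As a rough illustration only, here is a minimal single-level least-squares Lucas-Kanade estimate in NumPy; the window size, the synthetic test frames, and the tracked point are assumptions, and a real tracker would add pyramids and iteration.

```python
import numpy as np

def lucas_kanade(prev, curr, x, y, win=5):
    """Single-level Lucas-Kanade: least-squares flow (u, v) for the
    win x win window centred on column x, row y."""
    # Central-difference spatial gradients and the temporal difference.
    Ix = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)) / 2.0
    Iy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)) / 2.0
    It = curr - prev
    r = win // 2
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    # Solve Ix*u + Iy*v = -It in the least-squares sense over the window.
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic frames: a smooth blob shifted right by exactly one pixel.
yy, xx = np.mgrid[0:40, 0:40].astype(float)
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 30.0)
prev, curr = blob(20, 20), blob(21, 20)
u, v = lucas_kanade(prev, curr, 15, 20)   # point on the blob's flank
print(round(float(u), 2), round(float(v), 2))
```

Running this per feature point over consecutive frames yields one motion vector per point; collected over the 84 points, those vectors form the input to the recognition stage.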
Abstract I
Acknowledgements V
Table of Contents VI
List of Tables VIII
List of Figures IX
Chapter 1: Introduction 1
1.1 Motivation 1
1.2 Objectives 2
1.3 Thesis Organization 3
Chapter 2: Overview of Facial Expression Recognition 4
2.1 Face Detection 4
2.2 Feature Extraction 6
2.3 Expression Classification and Recognition 7
2.4 The Facial Action Coding System 8
Chapter 3: System Architecture and Methods 11
3.1 System Architecture and Workflow 11
3.2 Face Detection 13
3.2.1 Method 13
3.2.2 Sample Results 17
3.3 Selection of Feature Regions and Feature Points 19
3.3.1 Feature Region Selection 19
3.3.2 Feature Region Refinement 20
3.3.3 Feature Point Placement 24
3.4 Optical Flow Feature Point Tracking 25
3.4.1 Method 26
3.4.2 Sample Results 29
3.5 Generation of Basic Action Units 31
3.6 Emotion Recognition 35
Chapter 4: Experimental Results and Analysis 37
4.1 Facial Expression Database 37
4.2 Action Unit Recognition Results and Comparison 38
4.2.1 Results of Related Work 38
4.2.2 Results of the Proposed Method and Comparison 39
4.3 Emotion Recognition Results and Comparison 42
4.3.1 Results of Related Work 42
4.3.2 Results of the Proposed Method and Comparison 43
4.4 Self-Recorded Videos and Automatic Recognition Results 47
4.4.1 Expression Simulation 47
4.4.2 Recognition and Simulation Results 49
4.5 Performance Evaluation of the Proposed Method 51
Chapter 5: Conclusions and Future Work 53
5.1 Conclusions 53
5.2 Future Work 54
References 55
References
[1] 何明哲, "Analysis and Recognition of Facial Action Units by Fuzzy Inference," Master's thesis, Graduate Institute of Computer Science and Information Engineering, National Dong Hwa University, 2004 (in Chinese).
[2] 何坤鑫, "Optical-Flow-Based Image Tracking," Master's thesis, Department of Mechanical Engineering, National Sun Yat-sen University, 2001 (in Chinese).
[3] 吳明衛, "An Automatic Facial Expression Analysis System," Master's thesis, Graduate Institute of Computer Science and Information Engineering, National Cheng Kung University, 2003 (in Chinese).
[4] 潘奕安, "An Automatic Expression Recognition System for Low-Resolution Image Sequences," Master's thesis, Graduate Institute of Computer Science and Information Engineering, National Cheng Kung University, 2004 (in Chinese).
[5] 蘇木春 and 張孝德, Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms, Chuan Hwa Book Co., 1999 (in Chinese).
[6]M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, “Measuring Facial Expressions by Computer Image Analysis,” Psychophysiology, vol. 36, pp. 253-263, 1999.
[7]J. Y. Bouguet, “Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm,” Intel Corporation, Microprocessor Research Labs.
[8]J. F. Cohn, A. J. Zlochower, J. J. Lien, and T. Kanade, “Feature-point tracking by optical flow discriminates subtle differences in facial expression,” in Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 396–401, April 1998.
[9]A. J. Colmenarez and T. S. Huang, “Face detection with information based maximum discrimination,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 782-787, 1997.
[10]G. Donato, M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, “Classifying Facial Actions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, OCT. 1999.
[11]P. Ekman and W.V. Friesen, The Facial Action Coding System: A Technique for The Measurement of Facial Movement. San Francisco: Consulting Psychologists Press, 1978.
[12]P. Ekman and W. V. Friesen, Unmasking The Face. New Jersey: Prentice Hall, 1975.
[13]P. Ekman, Emotions revealed : understanding faces and feeling. Weidenfeld and Nicholson, 2003.
[14]P. Ekman, J. Hager, C. H. Methvin, and W. Irwin, “Ekman-Hager Facial Action Exemplars,” unpublished data, Human Interaction Laboratory, Univ. of California, San Francisco.
[15]R. Féraud, O. J. Bernier, J. E. Viallet, and M. Collobert, “A Fast and Accurate Face Detection Based on Neural Network,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 1, pp. 42-53, Jan. 2001.
[16]Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol.55, pp. 119-139, 1997.
[17]C. Garcia and G. Tziritas, “Face detection using quantized skin color regions merging and wavelet packet analysis,” IEEE Transactions on Multimedia, vol. MM-1, no. 3, pp. 264-277, Sept. 1999.
[18]R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Addison-Wesley, 1992.
[19]S. Hadi, A. Ali, and K. Sohrab, “Recognition of six basic facial expressions by feature-points tracking using RBF neural network and fuzzy inference system,” In Proceedings of IEEE International Multimedia Conference and Expo, pp. 1219-1222, 2004.
[20]R. L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, "Face Detection in Color Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 696-706, 2002.
[21]T. Kanade and J. F. Cohn, Automatic Face Analysis http://www-2.cs.cmu.edu/~face/
[22]T. Kanade, J. Cohn, and Y. Tian, “Comprehensive Database for Facial Expression Analysis,” in Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France, 2000.
[23]L. L. Kontsevich and C. W. Tyler, “What makes Mona Lisa smile?” Vision Research, vol. 44, Issue: 13, pp. 1493-1498, June, 2004.
[24]J. Lien, T. Kanade, J. Cohn, and C. Li, “Detection, tracking, and classification of action units in facial expression,” Robotics and Autonomous Systems, vol. 31, Issue: 3, pp. 131-146, May 2000.
[25]R. Lienhart and J. Maydt, “An Extended Set of Haar-like Features for Rapid Object Detection,” in Proceedings of the IEEE International Conference on Image Processing, vol. 1, pp. 900-903, Sep. 2002.
[26]B. D. Lucas and T. Kanade. “An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, pp. 565-593, 1986.
[27]M. J. Lyons, J. Budynek, and S. Akamatsu, “Automatic Classification of Single Facial Images.” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 12, Dec. 1999.
[28]L. Ma and K. Khorasani “Facial Expression Recognition Using Constructive Feedforward Neural Networks,” IEEE Transaction on Systems, Man, and Cybernetics, vol. 34, no. 3, pp. 1588-1595, 2004.
[29]D. Maio and D. Maltoni, “Real-time Face Location on Gray-scale Static Images,” Pattern Recognition, vol. 33, no. 9, pp. 1525-1539, Sept. 2000.
[30]M. Pantic and L. J. M. Rothkrantz, “Automatic analysis of facial expressions: the state of the art,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1424-1445, 2000.
[31]R. W. Picard, Affective Computing, London: The MIT Press, 1997.
[32]H. A. Rowley, S. Baluja, and T. Kanade, “Neural Network-Based Face Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, Jan. 1998.
[33]H. A. Rowley, S. Baluja, and T. Kanade, “Rotation Invariant Neural Network-Based Face Detection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 38-44, 1998.
[34]K. K. Sung and T. Poggio, “Example-Based Learning for View-Based Human Face Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 39-51, Jan. 1998.
[35]Y. L. Tian, T. Kanade, and J. F. Cohn, “Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity,” in Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 229-234, May 2002.
[36]Y. L. Tian, T. Kanade, and J. F. Cohn, “Recognizing Action Units for Facial Expression Analysis,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, Feb. 2001.
[37]P. Viola and M. J. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," in Proceedings of the IEEE Computer Society International Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511-518, Dec. 2001.
[38]H. Wu, Q. Chen, and M. Yachida, “Face Detection From Color Images Using a Fuzzy Pattern Matching Method,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 6, pp. 557-563, June 1999.
[39]Y. Yacoob and L. S. Davis, “Recognizing human facial expressions from long image sequences using optical flow,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 18, no. 6, pp. 636-642, 1996.
[40]M. H. Yang, D. Kriegman, and N. Ahuja, “Detecting Faces in Images: A Survey,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34-58, Jan. 2002.
[41]M. H. Yang and N. Ahuja, “Detecting Human Faces in Color Images,” in Proceedings of the IEEE International Conference on Image Processing, pp. 127-139, Oct. 1998.
[42]K. C. Yow and R. Cipolla, “Feature-based Human Face Detection,” Image and Vision Computing, vol. 15, no. 9, pp. 713-735, Sept. 1997.