
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 湯家瑋
Author (English): Chia-Wei Tang
Title: 植基於粒子群最佳化演算法以訓練神經網路於表情辨識之研究
Title (English): A Research of Training Neural Network Based on Particle Swarm Optimization Algorithm for Facial Expression Recognition
Advisors: 潘正祥, 周定
Advisors (English): Jeng-Shyang Pan, Ding Chou
Degree: Master's
Institution: 國立高雄應用科技大學 (National Kaohsiung University of Applied Sciences)
Department: Department of Electronic Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2010
Graduation academic year: 98 (2009-2010)
Language: Chinese
Number of pages: 70
Keywords (Chinese): 臉部表情辨識, 類神經網路, 粒子群最佳化演算法
Keywords (English): Face Expression Recognition, Neural Network, Particle Swarm Optimization Algorithm
Usage statistics:
  • Cited by: 0
  • Views: 702
  • Rating: (none)
  • Downloads: 14
  • Bookmarked: 1
Facial expressions play an important role in daily life, and in recent years facial expression recognition has become a key topic in human-machine interaction, whether between humans and computers or between humans and robots. Because facial expressions often carry important information in interpersonal communication, expression recognition has become a problem that many researchers are eager to solve. The proposed system consists of three parts: face detection, feature extraction, and expression recognition. In face detection, skin-color detection is used to locate the face region, and the geometric proportions of the face are then used to locate the eyebrow, mouth, and eye regions. In feature extraction, each region is processed in two ways: first, Sobel edge detection extracts the salient contours and the result is converted into a binary image; second, the gray values of the region are sorted and the result is also converted into a binary image. The two binary images are then intersected, feature points are marked on the eyes, eyebrows, and mouth, and the feature information is computed. In expression recognition, a neural network is used, and its weights are trained with back-propagation learning and the particle swarm optimization algorithm in order to approach an optimal solution. The expressions to be recognized are happiness, sadness, surprise, anger, fear, disgust, and neutral. The JAFFE expression database is used, and the experimental results show that the proposed method achieves a recognition rate of 89%.
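To make the two-branch binarization and intersection step concrete, the following is a minimal sketch of one way it could be implemented, assuming OpenCV and NumPy; the Otsu threshold on the edge magnitude and the 20% darkest-pixel cutoff are illustrative assumptions, not parameters taken from the thesis.

```python
# Sketch of the two-branch binarization described in the abstract:
# (1) Sobel edge map -> binary image, (2) gray-value sorting -> binary image,
# then the intersection (AND) of the two.  Thresholds are illustrative only.
import cv2
import numpy as np

def extract_feature_mask(region_gray, dark_fraction=0.2):
    """region_gray: 8-bit grayscale patch around an eye, eyebrow, or mouth."""
    # Branch 1: Sobel edge magnitude, normalized and binarized (Otsu assumed).
    gx = cv2.Sobel(region_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(region_gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    mag8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, edge_bin = cv2.threshold(mag8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Branch 2: sort gray values and keep the darkest fraction as foreground
    # (eyebrows, pupils, and lips are darker than the surrounding skin).
    cutoff = np.percentile(region_gray, dark_fraction * 100)
    dark_bin = np.where(region_gray <= cutoff, 255, 0).astype(np.uint8)

    # Intersection (logical AND) of the two binary images.
    return cv2.bitwise_and(edge_bin, dark_bin)
```

Applying such a mask to each cropped eye, eyebrow, and mouth region keeps only pixels that are both dark and lie on a strong edge, from which the feature points described above can then be marked.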
Facial expression plays a critical role in our daily life. In the field of human-machine interaction, facial expression recognition has become an important issue in recent years; both human-computer and human-robot interfaces are important applications. Facial expressions usually carry important messages in communication between people, so facial expression recognition has become a topic of interest to a large number of researchers. The design of our system consists of three major parts: face detection, feature extraction, and expression recognition. In the face detection stage, after locating the facial area using skin-color detection, we further locate the eyebrow, mouth, and eye regions using facial geometric proportions. In the feature extraction stage, we process each region in two ways: first, we identify the distinct contours via Sobel edge detection and convert the result into a binary image; second, we sort the gray values of the region and also convert it into a binary image. We then perform an AND operation on these two binary images, mark the feature points on the eyebrows, mouth, and eyes, and finally compute the feature information from the distances between specific points. For expression recognition, we use a neural network and train its weights via back-propagation learning and PSO to reach an optimal solution. The expressions to be distinguished are happiness, sadness, surprise, anger, fear, disgust, and neutral. We applied the Japanese Female Facial Expression (JAFFE) database in our study, and the experimental results show that the proposed approach reaches a recognition rate of 89%.
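As a rough illustration of the PSO-based weight training mentioned above, the sketch below encodes all weights and biases of a small one-hidden-layer network as one particle and uses classification error on the training set as the fitness. The layer sizes (15 input features, 10 hidden units, 7 expression classes), swarm size, inertia weight, and acceleration constants are illustrative assumptions; the back-propagation stage that the thesis also uses is omitted here.

```python
# Sketch of training feedforward-network weights with particle swarm
# optimization (Kennedy & Eberhart style updates).  All sizes and PSO
# constants are illustrative assumptions, not values from the thesis.
import numpy as np

def forward(x, w, n_in=15, n_hid=10, n_out=7):
    """Unpack a flat particle vector into one hidden layer + output layer."""
    k = n_in * n_hid
    w1 = w[:k].reshape(n_in, n_hid)
    b1 = w[k:k + n_hid]
    k2 = k + n_hid
    w2 = w[k2:k2 + n_hid * n_out].reshape(n_hid, n_out)
    b2 = w[k2 + n_hid * n_out:]
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def pso_train(X, y, n_particles=30, iters=200, w_inertia=0.7, c1=1.5, c2=1.5):
    """X: (N, 15) feature vectors, y: (N,) integer expression labels 0..6."""
    dim = 15 * 10 + 10 + 10 * 7 + 7            # total number of weights and biases
    pos = np.random.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)

    def error(p):
        pred = forward(X, p).argmax(axis=1)
        return np.mean(pred != y)               # fitness = classification error

    pbest, pbest_err = pos.copy(), np.array([error(p) for p in pos])
    gbest = pbest[pbest_err.argmin()].copy()

    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        err = np.array([error(p) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        gbest = pbest[pbest_err.argmin()].copy()
    return gbest
```

One common arrangement, assumed here rather than taken from the thesis, is to let PSO search for a good set of initial weights and then refine them with back-propagation.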
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures

Chapter 1: Introduction
1.1 Research Purpose and Motivation
1.2 Overview of the Expression Recognition System
1.3 Thesis Organization

Chapter 2: Related Research on Facial Expression Recognition
2.1 Face Detection
2.2 Facial Feature Extraction
2.2.1 Feature Extraction from Static Images
2.2.2 Feature Extraction from Image Sequences
2.3 Facial Expression Classification and Recognition
2.3.1 Based on the Facial Action Coding System
2.3.2 Based on Neural Networks
2.3.3 Based on Rules
2.3.4 Based on Support Vector Machines

Chapter 3: System Architecture and Methods
3.1 Face Detection
3.1.1 Skin-Color Pixel Information
3.1.2 Morphological Image Processing
3.1.3 Connected-Component Labeling and Largest-Region Selection
3.1.4 Image Projection and Geometric Proportions
3.2 Facial Feature Extraction
3.2.1 Edge Detection
3.2.2 Eye Feature Point Marking
3.2.3 Eyebrow Feature Point Marking
3.2.4 Mouth Feature Point Marking
3.3 Facial Expression Recognition
3.3.1 Feature Distance and Feature Value Settings
3.3.2 Feature Vector Settings
3.3.3 Neural Network Architecture
3.3.4 Training the Neural Network with PSO

Chapter 4: Experimental Results and Analysis
4.1 Database
4.2 Experimental Results and Analysis

Chapter 5: Conclusions and Future Work
5.1 Conclusions
5.2 Future Work

References