Graduate Student: 唐秋月
Graduate Student (English): TANG, CHIU-YUEH
Thesis Title: 利用卷積神經網路實現臉部情緒辨識
Thesis Title (English): Using a Convolutional Neural Network for Facial Emotion Recognition
Advisors: 林學儀, 林正堅
Advisors (English): LIN, HSUEH-YI; LIN, CHENG-JIAN
Committee Members: 潘欣泰, 郭世祟
Committee Members (English): PAN, HSIN-TAI; KUO, SHIH-SUI
Oral Defense Date: 2020-07-10
Degree: Master's
University: 國立勤益科技大學 (National Chin-Yi University of Technology)
Department: 資訊工程系
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2020
Graduation Academic Year: 108 (2019–2020)
Language: Chinese
Number of Pages: 35
Keywords (Chinese): 卷積神經網路, 表情, 臉部辨識, 深度學習
Keywords (English): Convolutional neural networks, facial recognition, deep learning
With the rapid development of the Internet, face recognition technology is widely used in practical, commercial, and entertainment systems such as access control systems, surveillance systems, and smart devices, and facial emotion plays an indispensable role in human cognitive behavior. Face recognition is affected by many factors, including differences in lighting, differences in facial expression, and face rotation. Moreover, traditional classifiers such as neural networks, the k-nearest-neighbors algorithm, and support vector machines require feature values to be defined by hand before they are fed to the classifier, so different feature definitions can produce different results. We therefore propose using a convolutional neural network (CNN) to recognize facial emotions.
In this study, two databases, Multi-PIE and CK+, are used to classify facial emotions, and three optimization methods (Adagrad, Adam, and SGD) are compared. On the Multi-PIE database, the LeNet architecture achieves its highest average accuracy, 99.79%, with SGD, making SGD the best optimizer for that dataset. On the CK+ database, Adagrad is the best optimizer for the LeNet architecture, with an average accuracy of 97.69%.
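The record contains no source code, but the approach described in the abstract maps to a small amount of standard deep-learning code. The sketch below is a minimal, hypothetical PyTorch version of a LeNet-style CNN with a selectable optimizer (Adagrad, Adam, or SGD); the 32x32 grayscale input size, the seven emotion classes, the learning rate, and the momentum value are illustrative assumptions, not the configuration actually used in the thesis.

# Minimal LeNet-style CNN for facial emotion classification (illustrative sketch).
# Input size (32x32 grayscale), class count, and hyperparameters are assumptions,
# not the settings reported in the thesis.
import torch
import torch.nn as nn
import torch.optim as optim


class LeNetEmotion(nn.Module):
    def __init__(self, num_classes: int = 7):  # seven basic emotions assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


def make_optimizer(name: str, model: nn.Module, lr: float = 1e-3):
    """Return one of the three optimizers compared in the study."""
    if name == "adagrad":
        return optim.Adagrad(model.parameters(), lr=lr)
    if name == "adam":
        return optim.Adam(model.parameters(), lr=lr)
    if name == "sgd":
        return optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    raise ValueError(f"unknown optimizer: {name}")


if __name__ == "__main__":
    model = LeNetEmotion(num_classes=7)
    criterion = nn.CrossEntropyLoss()
    optimizer = make_optimizer("sgd", model)  # best on Multi-PIE per the abstract

    # Dummy batch standing in for preprocessed Multi-PIE / CK+ face crops.
    images = torch.randn(8, 1, 32, 32)
    labels = torch.randint(0, 7, (8,))

    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"one training step done, loss = {loss.item():.4f}")

Reproducing the reported comparison would amount to training this model once per optimizer on each database and averaging the resulting test accuracies for each optimizer.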
Abstract (Chinese) I
Abstract (English) II
Acknowledgments IV
Table of Contents V
List of Figures VI
List of Tables VII
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Thesis Organization 4
Chapter 2 Literature Review 5
2.1 Traditional Recognition Methods 5
2.2 Convolutional Neural Networks 7
2.3 Facial Emotion Recognition 10
Chapter 3 Research Methods 12
3.1 Input Layer 12
3.2 Convolution Layer 12
3.3 Activation Function 13
3.4 Pooling Layer 17
3.5 Fully Connected Layer 19
Chapter 4 Experimental Results 24
4.1 Multi-PIE Face Database 24
4.2 CK+ Database 28
Chapter 5 Conclusions and Suggestions 31
References 32
Electronic full text (available online from 2025-08-19)