National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author (Chinese): 林昭宇
Author (English): Lin, Chao-Yu
Thesis Title (Chinese): 基於時間強化設計之情緒辨識方法
Thesis Title (English): Robust Emotion Recognition by Using a Temporal-Reinforced Approach
Advisor: 宋開泰
Degree: Master's
Institution: National Chiao Tung University (國立交通大學)
Department: Institute of Electrical and Control Engineering (電控工程研究所)
Discipline: Engineering
Field of Study: Electrical and Information Engineering
Thesis Type: Academic thesis
Year of Publication: 2013
Academic Year of Graduation: 101 (ROC calendar)
Language: Chinese
Number of Pages: 102
Keywords (Chinese): 基本情緒辨識; 混合情緒辨識; 情緒Likelihood辨識; 連續影像情緒辨識; 智慧音樂選曲系統
Keywords (English): Basic emotion recognition; Mixture emotion recognition; Likelihood emotion recognition; Continuous emotion recognition; Intelligent music selection system
Usage statistics:
  • Cited by: 1
  • Views: 343
  • Rating:
  • Downloads: 23
  • Bookmarked: 0
Abstract (translated from the Chinese): This thesis studies emotion recognition from continuous image sequences and proposes a method for recognizing and describing emotions based on temporal correlation information. The method first builds shape and texture models of facial image samples with an Active Appearance Model (AAM) to extract facial feature points and geometric feature values, and then recognizes emotional states with a Relevance Vector Machine (RVM). For the recognition design, temporal analysis is used to estimate the likelihood of each emotion category, and the recognition result is mapped onto the two-dimensional Arousal-Valence plane (A-V plane) to facilitate the design of system responses. The developed method can resolve finer-grained information such as emotion intensity and category proportions, and can analyze how an emotion changes over time. Experiments verify that the method effectively improves emotion recognition performance: the recognition rate for basic expressions exceeds 95%, and complex emotions are also recognized effectively. To validate on-line recognition, this thesis further designs an intelligent music selection system based on facial emotion recognition, which uses real-time facial emotion recognition to choose appropriate music and gradually guide the user's emotion toward a target emotion.
Abstract (English): In this thesis, a temporal-reinforced approach to enhancing emotion recognition from facial images is developed. Shape and texture models of facial images are computed using an active appearance model (AAM), from which facial feature points and geometric feature values are extracted. The extracted features are fed to a relevance vector machine (RVM) to recognize emotional states. A temporal-analysis approach is proposed to recognize the likelihood of each emotion category, so that more subtle information about an emotion, such as its degree and mixture ratio, can be obtained. Furthermore, a method is developed to map the recognition result onto the Arousal-Valence plane (A-V plane). Experimental results verify that the proposed method enhances emotion recognition performance. The A-V values are further applied to an intelligent music selection system: based on the currently recognized A-V values, the system selects and plays appropriate songs to guide a person's emotion toward a target emotion.
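To make the last two steps described in the abstracts concrete (turning per-category emotion likelihoods into a single point on the A-V plane, and then choosing a song that moves the listener gradually toward a target emotion), the following Python sketch shows one way the idea could look. The prototype A-V coordinates, the song list, and the step parameter are illustrative assumptions, not values or code from the thesis.

import numpy as np

# Hypothetical prototype positions of five emotion categories on the
# Arousal-Valence plane, given as (valence, arousal). These coordinates are
# illustrative assumptions loosely following a circumplex-style layout,
# not values taken from the thesis.
PROTOTYPES = {
    "happiness": (0.8, 0.5),
    "surprise": (0.2, 0.8),
    "anger": (-0.7, 0.7),
    "sadness": (-0.7, -0.4),
    "neutral": (0.0, 0.0),
}


def likelihoods_to_av(likelihoods):
    """Map per-category likelihoods to one A-V point as a likelihood-weighted mean."""
    weights = np.array([likelihoods.get(name, 0.0) for name in PROTOTYPES])
    coords = np.array(list(PROTOTYPES.values()))      # shape (K, 2)
    weights = weights / max(weights.sum(), 1e-9)      # normalize so weights sum to 1
    return tuple(weights @ coords)                    # weighted (valence, arousal)


def pick_song(current_av, target_av, songs, step=0.3):
    """Pick the song whose A-V tag is closest to a point one small step toward the target."""
    cur, tgt = np.asarray(current_av), np.asarray(target_av)
    desired = cur + step * (tgt - cur)                # move gradually rather than jumping
    return min(songs, key=lambda s: np.linalg.norm(np.asarray(s["av"]) - desired))


if __name__ == "__main__":
    # Made-up likelihoods and a made-up three-song library, purely for illustration.
    likelihoods = {"happiness": 0.1, "surprise": 0.1, "anger": 0.2,
                   "sadness": 0.5, "neutral": 0.1}
    songs = [
        {"title": "calm piano", "av": (0.3, -0.3)},
        {"title": "upbeat pop", "av": (0.7, 0.6)},
        {"title": "slow blues", "av": (-0.4, -0.5)},
    ]
    current = likelihoods_to_av(likelihoods)
    print("current A-V point:", current)
    print("next song:", pick_song(current, target_av=(0.8, 0.5), songs=songs)["title"])

The small step toward the target reflects the goal stated in the abstract of guiding the user's emotion gradually, rather than jumping directly to music matching the target emotion.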
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iii
Table of Contents iv
List of Figures vii
List of Tables ix
Chapter 1: Introduction 1
1.1. Research Motivation 1
1.2. Review of Related Work 4
1.2.1. Facial Emotion Recognition Methods 4
1.2.2. Intensity, Ratio, and Continuous-Time Facial Emotion Recognition Methods 6
1.2.3. Principles of the Relevance Vector Machine 8
1.2.4. Theories Related to Likelihood Analysis 11
1.3. Problem Description 13
1.4. System Architecture and Thesis Organization 14
Chapter 2: Face Detection and Feature Point Extraction 16
2.1. Face Detection 16
2.1.1. Face Region Determination 16
2.1.2. Face Image Normalization 20
2.2. Active Appearance Model 20
2.3. Facial Shape Model 21
2.3.1. Feature Point Annotation 21
2.3.2. Mean Face Shape 22
2.3.3. Modeling Facial Shape Variation 24
2.4. Facial Texture Model 25
2.4.1. Facial Texture 25
2.4.2. Piecewise Affine Warping 26
2.4.3. Modeling Facial Texture Variation 27
2.5. Image Alignment Algorithm 28
2.5.1. Inverse Compositional Algorithm 29
2.5.2. Global Shape Normalizing Transform 31
2.5.3. Gradient Image Correction 33
2.5.4. Histogram Equalization 34
2.5.5. Overall Image Alignment Procedure 34
Chapter 3: Probability-Based Emotion Recognition from Image Sequences 36
3.1. Facial Feature Extraction 36
3.2. Recognition of Specific Emotions 40
3.3. Category Likelihood 41
3.3.1. Category Likelihood Recognition 42
3.3.2. Category Likelihood Coupling 47
3.3.3. Temporal Analysis of Category Likelihood 50
3.4. Emotion State Recognition Based on the A-V Plane 53
Chapter 4: Experimental Results 61
4.1. Feature Point Detection Results 61
4.2. Basic Emotion Recognition Results 64
4.3. Mixed Emotion Recognition Results 68
4.3.1. Correlation Between Questionnaire Results and Recognition Results 72
4.3.2. Overall RMSE Evaluation Across Emotion Categories 72
4.3.3. Evaluation of Mixed Emotion Category Combinations 74
4.4. Emotion Recognition Results Based on the A-V Plane 74
4.5. Music Selection Experiments Based on Emotion Recognition 78
4.5.1. Music Database 78
4.5.2. Music Selection Design 78
4.5.3. Music Selection Experiment Results 82
Chapter 5: Conclusions and Future Work 92
5.1. Conclusions 92
5.2. Future Work 93
References 94
Appendix 1: Sample Questionnaire on Mixture Ratios of Basic Facial Emotions 100

