National Digital Library of Theses and Dissertations in Taiwan

Author: 林敏勤
Author (English): Min-Chin Lin
Thesis Title: 臉部微表情辨識系統之開發與應用
Thesis Title (English): The Development and Application of Facial Micro-Expression Recognition System
Advisor: 林冠成
Advisor (English): Kuan-Cheng Lin
Committee Members: 洪啟舜, 黃一泓
Committee Members (English): Chi-Shun Hung, Yi-Hung Huang
Date of Oral Defense: 2016-06-29
Degree: Master's
Institution: National Chung Hsing University
Department: Department of Management Information Systems
Discipline: Computer Science
Academic Field: General Computer Science
Thesis Type: Academic thesis
Year of Publication: 2016
Graduation Academic Year: 104 (2015-2016)
Language: Chinese
Number of Pages: 51
Keywords (Chinese): 學習情緒 (academic emotions), 表情 (facial expression), 臉部動作編碼系統 (Facial Action Coding System), 影像放大 (video magnification)
Keywords (English): Academic Emotions, Facial Micro-Expression, FACS, Video Magnification
Usage statistics:
  • Cited by: 7
  • Views: 1394
  • Ratings: (none)
  • Downloads: 179
  • Bookmarked: 2
Effective teaching feedback helps teachers learn about students' learning status, serves as a basis for planning the pace of instruction, and points to directions for improving teaching practice. Besides learning outcomes, students' academic emotions are another form of teaching feedback: if they can be captured, teachers gain a more complete picture of how students are learning.
However, students often suppress their academic emotions so that they do not show on the face, which makes it hard for teachers to perceive how students actually feel. This study therefore captures micro-expressions in learning situations. A micro-expression is an instinctive reaction that can be neither faked nor hidden; by observing how a short sequence of consecutive frames changes over time, the subtle changes in the images can be extracted.
Because micro-expression changes are extremely subtle and last only about 1/20 of a second, they are difficult to observe with the naked eye. This study applies the video magnification technique developed by the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) to micro-expressions in learning situations; the technique amplifies subtle changes that normally go unnoticed, making the brief and faint micro-expression features clearly visible.
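To give a concrete picture of how amplifying a temporal band of pixel changes can make faint facial motion visible, the sketch below is a minimal, simplified illustration in Python with OpenCV and NumPy. It is not the thesis's implementation and not the full Eulerian Video Magnification pipeline of Wu et al. [15]; the input file name, amplification factor, and frequency band are assumed values.

# Minimal sketch (not the thesis's code): amplify slow temporal changes in a
# face video so that subtle motion becomes easier to see. Assumed values:
# the input file name, ALPHA, and the 0.4-3.0 Hz band.
import cv2
import numpy as np

ALPHA = 20.0                  # amplification factor (assumed)
LOW_HZ, HIGH_HZ = 0.4, 3.0    # temporal band assumed to contain facial motion

cap = cv2.VideoCapture("student_face.avi")    # hypothetical input clip
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

originals, coarse = [], []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    f = frame.astype(np.float32)
    originals.append(f)
    # A twice-downsampled copy stands in for a coarse pyramid level.
    coarse.append(cv2.pyrDown(cv2.pyrDown(f)))
cap.release()

stack = np.stack(coarse)                                  # shape (T, h, w, 3)

# Ideal temporal band-pass filter, applied per pixel with an FFT along time.
freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
spectrum = np.fft.rfft(stack, axis=0)
spectrum[(freqs < LOW_HZ) | (freqs > HIGH_HZ)] = 0
filtered = np.fft.irfft(spectrum, n=stack.shape[0], axis=0)

# Amplify the filtered band, upsample it, and add it back to each frame.
h, w = originals[0].shape[:2]
out = cv2.VideoWriter("magnified.avi", cv2.VideoWriter_fourcc(*"XVID"), fps, (w, h))
for t, frame in enumerate(originals):
    boost = cv2.pyrUp(cv2.pyrUp((ALPHA * filtered[t]).astype(np.float32)))
    boost = cv2.resize(boost, (w, h))         # undo rounding from downsampling
    out.write(np.clip(frame + boost, 0, 255).astype(np.uint8))
out.release()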
Based on the Facial Action Coding System (FACS), this study develops a facial recognition system that incorporates video magnification, the Facial Micro-Expression Recognition System (FMERS), to analyze micro-expressions in learning situations. The system's results were compared with an academic emotions questionnaire, and the two showed high consistency, indicating that academic emotions can be obtained directly from facial expression recognition on video content. This helps teachers gauge students' learning status from their facial expressions and adjust teaching strategies or test difficulty accordingly. After the videos were magnified, micro-expression recognition improved, particularly for positive expressions, and recognition of negative expressions was also strengthened, showing that the magnification technique has a significant effect on improving expression recognition.
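To make the FACS-based step concrete, the following sketch shows one possible way detected action-unit intensities could be mapped to a coarse positive/negative/neutral label. The AU combinations, threshold, and function name are illustrative assumptions, not the decision rules actually used by FMERS.

# Illustrative sketch only: map detected FACS Action Unit (AU) intensities to a
# coarse emotion label. The AU combinations and the threshold are assumptions,
# not the rules used by the FMERS system described in the thesis.
from typing import Dict

def classify_expression(au_intensity: Dict[str, float], threshold: float = 0.5) -> str:
    """Return 'positive', 'negative', or 'neutral' from AU intensities in [0, 1]."""
    active = {au for au, value in au_intensity.items() if value >= threshold}
    if "AU12" in active:                # AU12: lip-corner puller (smile)
        return "positive"
    if active & {"AU4", "AU15"}:        # AU4: brow lowerer, AU15: lip-corner depressor
        return "negative"
    return "neutral"

# Example frame: lip corners clearly pulled up, brows relaxed.
print(classify_expression({"AU1": 0.1, "AU4": 0.2, "AU12": 0.8, "AU15": 0.0}))  # -> positive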

Effective teaching feedback not only helps teachers understand students' learning conditions but also lets them revise their teaching schedules and improve their teaching practices. Apart from students' exam scores, students' facial expressions are an important clue for teaching feedback. If students' academic emotions can be inferred from their facial expressions, teachers can obtain more information about students' learning conditions.
However, students' facial expressions are easily controlled or concealed, so teachers often fail to recognize them. This study therefore captured students' facial micro-expressions in class. Micro-expressions are instinctive reactions that can be neither disguised nor hidden; by playing a short video at low speed, the subtle changes in the time sequence can be observed.
Because micro-expressions are very subtle and last only about 1/20 of a second, they are difficult to detect with the naked eye. This study uses Eulerian Video Magnification, a technique developed by the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL), to magnify students' facial micro-expressions. The technique amplifies tiny facial features so that micro-expressions become obvious and clear.
In this study, we develop a system that combines Eulerian Video Magnification with the Facial Action Coding System (FACS) to recognize students' facial micro-expressions more efficiently. We compared the results of a questionnaire on students' emotions with their recognized facial micro-expressions, and the consistency between the two was high. This suggests that students' academic emotions can be acquired directly by using our system to analyze video, which helps teachers adjust their teaching strategies and test difficulty. Moreover, the magnified videos improved the recognition of facial micro-expressions, especially positive emotions, and also strengthened the recognition of negative emotions. Our results indicate that video magnification is useful.

Abstract (Chinese) iv
Abstract (English) v
Table of Contents vi
List of Tables viii
List of Figures ix
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Motivation and Objectives 2
Chapter 2 Literature Review 3
2.1 Micro-Expressions 3
2.2 Capturing Micro-Expressions 4
2.2.1 Video Magnification 5
2.3 Micro-Expression Recognition 7
2.3.1 Facial Action Coding System 8
2.4 Introduction to EmguCV 10
Chapter 3 Research Methods 11
3.1 Face Detection and Feature-Region Image Processing 12
3.1.1 Image Processing Steps for the Eyebrow Region 13
3.1.2 Image Processing Steps for the Eye Region 15
3.1.3 Image Processing Steps for the Mouth Region 15
3.2 Feature Distance Vectors and Action Unit Classification 18
3.2.1 Eyebrow Feature Distance Vectors and Action Units (AU1, AU4) 18
3.2.2 Eye Feature Distance Vectors and Action Units (AU5, AU7, eye closure) 19
3.2.3 Mouth-Corner Feature Distance Vectors and Action Units (AU12, AU15, AU20) 20
3.2.4 Lip Feature Distance Vectors and Action Units (AU24, AU25, AU26, AU27) 21
3.3 Expression Recognition Method 22
3.4 Recognition Accuracy and Consistency of Results 23
Chapter 4 Results 24
4.1 Experimental Data 24
4.2 Facial Micro-Expression Recognition System 25
4.2.1 System Features 25
4.2.2 Development Environment 29
4.3 Validating Recognition Accuracy 30
4.3.1 Recognition Results 30
4.3.2 Summary 31
4.4 Consistency with the Academic Emotions Questionnaire 32
4.4.1 Data Filtering 32
4.4.2 Comparison Results 32
4.4.3 Summary 34
4.5 Evaluation of the Magnification Effect 36
4.5.1 Consistency Before Magnification 37
4.5.2 Consistency After Magnification 40
4.5.3 Summary 43
Chapter 5 Conclusions and Recommendations 46
5.1 Findings and Discussion 46
5.2 Future Research Directions 47
References 48
Appendix 1 Academic Emotion Data from the C Programming Course 51

[1]張盈堃、宋秋美、周啟葶、李雅婷、李懿芳、江芳盛、蔡佳燕、林明地、陳威良、葉連祺、楊家榆, Educational Review, Vol. 35 (教育學刊第35期), Higher Education Publishing, 2010.
[2]P. Ekman and W. V. Friesen, “The Facial Action Coding System: A Technique for The Measurement of Facial Movement”, Consulting Psychologists Press, San Francisco, 1978.
[3]R. W. Picard, “Affective Computing,” TR-321, MIT, Media Laboratory, 1995.
[4]吳俊霖, "Facial Expression Analysis Applied to E-Learning", Master's thesis, Department of Computer Science and Information Engineering, National Dong Hwa University, Hualien, 2014.
[5]陳緯, "The Effects of Increasing Social Presence in a Digital Learning Environment on Self-Regulation, Learning Motivation, and Learning Achievement: A Case Study of Upper-Grade Elementary School Students", Master's thesis, Department of Information Management, National Yunlin University of Science and Technology, Yunlin, 2009.
[6]蘇信宏, "Image Processing for Detecting Concentration in Affective E-Learning", Master's thesis, Graduate Institute of Mechatronics, Northern Taiwan Institute of Science and Technology, Taipei, 2007.
[7]P. Ekman and W. V. Friesen, "Nonverbal Behavior and Psychopathology", The Psychology of Depression: Contemporary Theory and Research, Washington, D.C.: Winston & Sons, pp. 203-232, 1974.
[8]姜振宇, Micro-Expressions: How to Tell Whether Others' Facial Expressions Are Genuine (微表情─如何識別它人臉面真假?), Phoenix Publishing House, Beijing, 2011.
[9]S. Polikovsky, Y. Kameda, and Y. Ohta, "Facial micro-expressions recognition using high speed camera and 3D-gradient descriptor", ICDP, 2009.
[10]T. Pfister, X. Li, G. Zhao, and M. Pietikainen, "Recognising spontaneous facial micro-expressions," in Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011, pp. 1449-1456.
[11]X. Li, T. Pfister, X. Huang, G. Zhao, and M. Pietikäinen, "A Spontaneous Micro-expression Database: Inducement, Collection and Baseline", Proc. 10th IEEE Int. Conf. on Automatic Face and Gesture Recognition (FG 2013), Shanghai, China, 2013.
[12]Y. Yacoob and L. Davis, "Recognizing Human Facial Expressions from Long Image Sequences using Optical Flow," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, 1996.
[13]M. Yeasin, B. Bullot, and R. Sharma, "From facial expression to level of interest: a spatio-temporal approach", Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR 2004), vol. 2, pp. II-922-II-927, 2004.
[14]P. Ekman, D. Matsumoto, & M. G. Frank, “The Micro-Expression Training Tool, v. 1. (METT1)”, 2001.
[15]Hao-Yu Wu, Michael Rubinstein, Eugene Shih, John Guttag, Frédo Durand, William T. Freeman,“Eulerian Video Magnification for Revealing Subtle Changes in the World”, ACM Transactions on Graphics, Volume 31, Number 4 (Proc. SIGGRAPH), 2012.
[16]Haggard, E. A. and K. S. Isaacs, “Micromomentary facial expression as indicators of ego mechanisms in psychotherapy”, In C. A., Gottschalk,A. Averback, (Eds.), Methods of research in psychotherapy. New York: Appleton-Century-Crofts., 1966.
[17]P. Ekman, Emotions Revealed: Understanding Faces and Feelings, Weidenfeld & Nicolson, 2003.
[18] Freitas-Magalhães, A., “Microexpression and macroexpression”, V. S. Ramachandran (Ed.), Encyclopedia of Human Behavior, Vol. 2, pp.173-183, 2012. Oxford: Elsevier/Academic Press.
[19]S. Godavarthy, “Microexpression spotting in video using optical strain”, Masters Thesis. University of South Florida, 2010.
[20]M. Shreve, S. Godavarthy, V. Manohar, D. Goldgof, and S. Sarkar. Towards macro-and micro-expression spotting in video using strain patterns. In Workshop on Applications of Computer Vision, pages 1-6, 2010.
[21]P. Ekman, “Facial Expressions of Emotion: An Old Controversy and New Findings”, Philosophical Transactions of the Royal Society (London) B335: 63–69, 1992.
[22]P. Ekman, “Basic Emotions”, T. Dalgleish and M. Power. Handbook of Cognition and Emotion. Sussex, UK: John Wiley & Sons, Ltd., 1999.
[23]A. Mehrabian and M. Wiener, "Decoding of Inconsistent Communications", Journal of Personality and Social Psychology, Vol. 6, No. 1, pp. 109-114, 1967.
[24]A. Mehrabian and S. R. Ferris, "Inference of Attitudes from Nonverbal Communication in Two Channels", Journal of Consulting Psychology, Vol. 31, No. 3, pp. 248-252, 1967.
[25]Condon, W. S., “Synchrony units and the communicational hierarchy”, Western Psychiatric Institute & Clinics, Pittsburgh, PA, 1963.
[26]J. Gottman and R. W. Levenson, "A Two-Factor Model for Predicting When a Couple Will Divorce: Exploratory Analyses Using 14-Year Longitudinal Data", Family Process, 41(1), pp. 83-96, 2002.
[27]N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, "Phase-based video motion processing", ACM Transactions on Graphics (TOG), Vol. 32, No. 4, July 2013.
[28]A. Pease, “Body language how to read others’ thoughts by their actions”, Sheldon Press, London, 1981.
[29]江坤祥, "Facial Expression Analysis for E-Learning", Master's thesis, Department of Management Information Systems, National Chung Hsing University, Taichung, 2012.
[30]G. Bradski and A. Kaehler, “Learning OpenCV,” CA: O’Reilly Media, 2008.
[31]M. J. Lyons, The Japanese Female Facial Expression (JAFFE) Database, http://www.mis.atr.co.jp/~mlyons/jaffe.html, 1998.
[32]曾華薇, "A Study of Combining Affective Computing with Animated Picture-Book Instruction for Emotion Education in Early Childhood", Master's thesis, Department of Management Information Systems, National Chung Hsing University, Taichung, 2013.
[33]張家瑜, "The Effects of Graphic-and-Text Instructional Strategies on Cognitive Load and Academic Emotions in a Programming Course", Master's thesis, Department of Management Information Systems, National Chung Hsing University, Taichung, 2016.
[34]陳筠筑, "Exploring the Effects of Different Game-Based Learning Approaches on Cognitive Load and Academic Emotions", Master's thesis, Department of Management Information Systems, National Chung Hsing University, 2016.
