National Digital Library of Theses and Dissertations in Taiwan


Detail Display

Researcher: 周任中
Researcher (English): Jen-Chung Chou
Title (Chinese): 人臉特徵點追蹤及面部表情分析
Title (English): Feature Point Tracking of Human Face and Facial Expression Analysis
Advisor: 陳永昌
Advisor (English): Prof. Yung-Chang Chen
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Publication Year: 2000
Graduation Academic Year: 88 (ROC calendar)
Language: English
Number of Pages: 62
Keywords (Chinese): 特徵點、表情分析、追蹤、即時、網狀架構、面部動態參數、異教徒
Keywords (English): feature point, expression analysis, tracking, real time, mesh, facial animation parameter, pagan
Statistics:
  • Cited: 0
  • Views: 404
  • Rating:
  • Downloads: 0
  • Bookmarked: 0
In a virtual conferencing system, changes of the facial objects are the focus of every user's attention. To represent the subtle changes of these facial features, a feature point tracking algorithm is needed to track their motion, and expression analysis translates the tracked results into facial animation parameters so that users can understand them. In this thesis we therefore propose methods for these two problems: feature point tracking and expression analysis.
First, we propose an algorithm that tracks feature point positions with very low computational complexity while achieving medium-to-high accuracy. The algorithm consists of two phases: registration and tracking. The registration phase obtains the initial positions of all feature points along with some information needed later; the tracking phase then tracks the major feature points using spatial and temporal information. Spatial information such as contrast and mask values characterizes each feature point, and block matching along the time axis captures the temporal correlation.
Although the real-time feature point tracking algorithm can track feature points on the outer contour of the mouth, feature points inside the mouth cannot be tracked by conventional methods such as contrast or block matching. To handle the interior of the mouth, a mesh-based method is proposed; unlike previous work, we adopt a hierarchical mesh structure, which reduces the search range of the mesh model and gives better performance. Like the real-time feature point tracking algorithm, the mesh-based tracking method is divided into two phases. The registration phase performs model adaptation, fitting the original mouth model to the user's mouth region; the tracking phase then adjusts all vertices of the three-level mesh in turn using texture and color information. After the adjustment, the new mouth shape and the motion of all vertices can be observed.
The tracking results alone cannot be understood by users, so they must be translated into facial animation parameters. These parameters are regionally correlated, so all related parameters should be adjusted at the same time. The mapping therefore becomes a many-to-many problem, and a least-squares algorithm is used to solve it with minimal error. In this way the tracking results are translated into facial animation parameters, completing low-level facial expression analysis.
Our experimental results show that the real-time feature point tracking algorithm achieves the required performance in tracking feature points on the outer contour of the mouth, and that the mesh-based method can handle the interior of the mouth given the correct positions of the four outer control points. From the results tracked by these two methods, the low-level expression analysis successfully translates the motion of salient feature points into facial animation parameters with very small mapping error.

In virtual conferencing systems, changes of the facial objects on human faces are the major focus of all users. In order to represent the detailed changes of facial features, feature point tracking algorithms are developed to track the motion of these features, and expression analysis is applied to translate the tracking results into facial animation parameters for the user's understanding. Hence, in this thesis, we propose methods and algorithms for these two major problems.
First, a real-time feature point tracking algorithm is presented that achieves medium-to-high accuracy with minimal computational complexity. The algorithm is separated into two phases: the registration phase and the tracking phase. Initial locations of all feature points are obtained in the registration phase, and some information is also registered for the next phase. In the tracking phase, the major feature points are tracked using spatial and temporal information. Spatial information such as contrast and mask values is used to describe the characteristics of each feature point, and temporal block template matching is added to capture temporal relations. In this algorithm, six salient feature points for the eyes, four for the eyebrows, and six for the mouth region are tracked.
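As an illustration of the temporal block template matching step, the hedged sketch below tracks a single feature point between consecutive frames by scanning a small search window for the best-matching block. It is a minimal sketch only: the function name block_match, the block size, and the search radius are illustrative assumptions, not the algorithm actually used in the thesis.

```
import numpy as np

def block_match(prev_frame, cur_frame, point, block=8, search=6):
    """Track one feature point by temporal block template matching.

    prev_frame, cur_frame: 2-D grayscale arrays of the same size.
    point: integer (row, col) of the feature point in the previous frame.
    Returns the best-matching (row, col) in the current frame.
    """
    r, c = point
    h = block // 2
    template = prev_frame[r - h:r + h, c - h:c + h].astype(np.float32)

    best_err, best_pos = np.inf, point
    for dr in range(-search, search + 1):           # scan a small search window
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = cur_frame[rr - h:rr + h, cc - h:cc + h].astype(np.float32)
            if cand.shape != template.shape:         # skip candidates cut off by the border
                continue
            err = np.mean(np.abs(cand - template))   # mean absolute difference as the matching cost
            if err < best_err:
                best_err, best_pos = err, (rr, cc)
    return best_pos
```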
Although the real-time feature point tracking algorithm can robustly track feature points on the outer contour, those inside the mouth cannot be tracked by ordinary methods such as contrast or block matching. A mesh-based method is presented here to handle the region inside the mouth. Compared to a traditional mesh structure, a hierarchical mesh model is applied in our system; it reduces the search space of the mesh model and yields better performance. Similar to the real-time feature point tracking algorithm, the mesh-based tracking method is divided into two phases. The registration phase performs model adaptation, fitting the original mouth model to the current user's mouth region. Then, in the tracking phase, all vertices of our three-level mesh model are adjusted in turn from the coarse level to the fine level using texture and color information. After the adjustment, the new shape of the mouth and the motion of all vertices can be observed.
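A rough picture of the coarse-to-fine vertex adjustment is sketched below: mesh levels are visited from coarse to fine, and each vertex is moved to the position whose image patch best matches a reference patch registered during model adaptation. The data layout, patch size, per-level search radii, and the use of a single grayscale matching cost (instead of combined texture and color terms) are simplifying assumptions for illustration, and the propagation of coarse-level motion down to finer levels is omitted.

```
import numpy as np

def refine_mesh(frame, ref_patches, mesh_levels, radii=(6, 4, 2), half=4):
    """Coarse-to-fine adjustment of a hierarchical mouth mesh (illustrative only).

    frame:       current grayscale frame (2-D array).
    ref_patches: dict vertex id -> reference patch of size (2*half, 2*half),
                 registered in the model adaptation step.
    mesh_levels: list of levels ordered coarse to fine; each level is a dict
                 vertex id -> integer (row, col).
    radii:       search radius per level; finer levels search a smaller area.
    Returns mesh_levels with updated vertex positions.
    """
    for level, radius in zip(mesh_levels, radii):    # coarse level first
        for vid, (r, c) in level.items():
            template = ref_patches[vid].astype(np.float32)
            best_err, best_pos = np.inf, (r, c)
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    patch = frame[rr - half:rr + half, cc - half:cc + half]
                    if patch.shape != template.shape:
                        continue
                    err = np.mean((patch.astype(np.float32) - template) ** 2)
                    if err < best_err:
                        best_err, best_pos = err, (rr, cc)
            level[vid] = best_pos                    # vertex motion = new position - old position
    return mesh_levels
```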
The tracking result alone cannot be understood by users and needs to be translated into the facial animation parameters. Since the facial animation parameters are regionally correlated, all related parameters are supposed to be adjusted at the same time. Hence, the mapping becomes a many-to-many problem, and a modified least-squares algorithm is adopted to solve it with minimal error. After translating the tracking result into the facial animation parameters, low-level expression analysis is accomplished.
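Conceptually, the many-to-many mapping can be pictured as an overdetermined linear system solved by least squares, as in the sketch below. The matrix A, assumed to relate a unit change of each facial animation parameter to the resulting feature-point displacements, and the numbers in the toy example are made up for illustration; the thesis uses a modified least-squares algorithm, whereas the sketch uses plain numpy least squares.

```
import numpy as np

def displacements_to_faps(A, d):
    """Map tracked feature-point displacements to FAP values by least squares.

    A: (m, n) matrix; column j holds the feature-point displacements produced by
       one unit of FAP j, so that A @ faps approximates d.
    d: (m,) vector of measured feature-point displacements for one frame.
    Returns the FAP vector minimizing ||A @ faps - d||^2.
    """
    faps, _residuals, _rank, _sv = np.linalg.lstsq(A, d, rcond=None)
    return faps

# Toy example: two FAPs jointly influence three measured displacement components.
A = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [0.5, 0.5]])
d = np.array([1.2, 0.9, 1.0])
print(displacements_to_faps(A, d))
```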
Our experimental results show that the real-time feature point tracking algorithm achieves the required performance in tracking the four dominant feature points on the outer contours. The mesh-based method can handle the complex situations inside the mouth given the correct positions of the four major control vertices on the outer contour. From the tracking results of these two methods, the low-level expression analysis successfully translates the motion of salient feature points into facial animation parameters with very low mapping errors.

Chapter 1. Introduction
1.1 Virtual Conferencing
1.2 Feature Point Tracking and Expression Analysis
1.3 Related Work
1.4 Thesis Organization
Chapter 2. Real-time Feature Point Tracking
2.1 Feature Extraction
2.1.1 Face Extraction
2.1.2 Feature Block Extraction
2.1.3 Feature Point Extraction
2.1.4 Feature Information Registration
2.2 Feature Tracking
2.2.1 Face Tracking
2.2.2 Feature Block Tracking
2.2.3 Feature Point Tracking
2.2.4 Validation of Tracking Result
2.3 Experimental Results and Discussion
Chapter 3. Mesh-based Feature Point Tracking
3.1 Feature Extraction
3.1.1 Mesh-based Method
3.1.2 Mesh Model Adaptation
3.2 Feature Tracking
3.2.1 Coarse-to-fine Mesh-based Tracking
3.3 Experimental Results and Discussion
Chapter 4. FAP Mapping
4.1 MPEG-4 Facial Animation
4.1.1 Facial Animation Parameters
4.1.2 Facial Definition Parameters
4.1.3 FAP Interpolation Table
4.1.4 I.S.T. Model
4.2 FAP Mapping
4.2.1 FAP Mapping Method
4.2.2 FAP Mapping of Mouth Region
4.2.3 FAP Mapping of Eye Region
4.3 Experimental Results and Discussion
Chapter 5. Conclusion and Future Work
Reference

