National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)
Detailed Record
Graduate Student: Zhen-Hong Yi (易振弘)
Thesis Title: A Simplified MPEG-4 FAP Method for Expression Control (一精簡MPEG-4臉部動態參數表情控制方法)
Advisor: Chin-Hsing Chen (陳進興)
Degree: Master's
Institution: National Cheng Kung University
Department: Department of Electrical Engineering (Master's and Doctoral Program)
Discipline: Engineering
Field: Electrical and Information Engineering
Document Type: Academic thesis
Year of Publication: 2002
Academic Year of Graduation: 90 (ROC calendar; 2001-2002)
Language: English
Pages: 75
Keywords (Chinese): 統計, 特徵點, 臉部動態參數, 人臉表情
Keywords (English): Facial Animation Parameters, Facial Expression, Feature Points, Statistics
To synthesize and simulate facial motion and expression, MPEG-4 defines the Face Object, which comprises three types of data: Facial Animation Parameters (FAPs), Facial Definition Parameters (FDPs), and the FAP Interpolation Table (FIT).
This thesis proposes a method for simplifying the FAP set based on statistics obtained by analyzing facial feature points. The statistics are gathered in two stages: global and local. The global statistics identify the feature points that move most frequently as a result of whole-head motion. For each frame, block matching is used to compute the motion vector of every global feature point; a point is judged active in that frame if the magnitude of its motion vector exceeds a threshold, and inactive otherwise. On this basis, the three most frequently active global feature points, corresponding to head pitch, yaw, and roll, are selected to form a T-shaped coordinate frame for the face, and the three MPEG-4 FAPs associated with them are coded. The local statistics focus on the mouth and eyebrow regions, which are chosen according to the global results and the FAPs defined in MPEG-4. Their purpose is to measure the correlation among local feature points, which then serves as the basis for the interpolation weights. Local feature points in these two regions are tracked using the warping concept, and relative weights are computed from the tracking data. According to these weights, five MPEG-4 FAPs are selected and coded, and the remaining FAPs are interpolated from them using the local correlations. In total, the proposed system codes and transmits eight FAPs: three global and five local.
To evaluate the quality of the synthesized facial expressions, we compare the PSNR obtained when a face model is controlled by the full set of 68 FAPs with that obtained using only the proposed 8 FAPs. Experimental results show that the PSNR of the proposed system is above 30 dB while the bit rate stays below 1.38 kbit/s.
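The global stage of the method reduces to two small operations per frame: a block-matching motion search for each global feature point, and a threshold test on the resulting motion vector. The Python sketch below is a minimal illustration, not the thesis code; it uses the three-step search named in the table of contents, and the block size, initial step, and threshold are assumed values.

import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equal-sized image blocks.
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def three_step_search(prev, curr, x, y, block=16, step=4):
    # Displacement (dx, dy) of the block anchored at (x, y) in `curr`,
    # found by refining the best of nine candidates at halving step sizes.
    ref = curr[y:y + block, x:x + block]
    best = (0, 0)
    while step >= 1:
        def cost(d):
            px, py = x + d[0], y + d[1]
            if px < 0 or py < 0 or py + block > prev.shape[0] \
                    or px + block > prev.shape[1]:
                return float("inf")
            return sad(ref, prev[py:py + block, px:px + block])
        best = min(((best[0] + i * step, best[1] + j * step)
                    for i in (-1, 0, 1) for j in (-1, 0, 1)), key=cost)
        step //= 2
    return best

def is_active(dx, dy, threshold=2.0):
    # A feature point counts as active in a frame when its motion vector
    # magnitude exceeds the threshold (the value here is illustrative).
    return float(np.hypot(dx, dy)) > threshold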
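For the local stage, the thesis tracks mouth and eyebrow feature points with a warping model based on an affine transformation (Section 4.2.2 of the table of contents). Below is a minimal least-squares affine fit, assuming point correspondences between consecutive frames are already available; it illustrates the general technique rather than the thesis implementation.

import numpy as np

def fit_affine(src, dst):
    # Least-squares 2-D affine map taking `src` points onto `dst` points.
    # src, dst: (N, 2) arrays of corresponding points, N >= 3.
    X = np.hstack([src, np.ones((src.shape[0], 1))])  # (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)       # (3, 2) solution
    return A.T                                        # (2, 3): [linear | t]

def warp_points(A, pts):
    # Apply the 2x3 affine matrix A to an (N, 2) array of points.
    return pts @ A[:, :2].T + A[:, 2]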
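Once the five local FAPs are chosen, each remaining FAP is reconstructed at the decoder as a weighted combination of the coded ones, with the weights taken from the local correlation statistics. A sketch of that reconstruction step, with hypothetical weights and FAP values (the thesis derives the actual weights from its tracking data):

import numpy as np

# Five coded local FAP values for one frame (illustrative numbers).
coded = np.array([120.0, -15.0, 30.0, 28.0, 8.0])

# Hypothetical weight matrix: row k holds the correlation-derived weights
# of the k-th non-coded FAP with respect to the five coded FAPs.
W = np.array([
    [0.6, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5, 0.0],
    [0.2, 0.2, 0.2, 0.2, 0.2],
])

interpolated = W @ coded  # one reconstructed value per non-coded FAP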
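The evaluation comes down to a PSNR comparison between frames produced by the full 68-FAP set and by the reduced 8-FAP set. A minimal helper, assuming 8-bit grayscale or per-channel frames; which rendering serves as the reference is a detail of the thesis:

import numpy as np

def psnr(ref, test, peak=255.0):
    # PSNR in dB between a reference frame and a synthesized frame.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak * peak / mse)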
CONTENTS
ABSTRACT I
CONTENTS V
FIGURE CAPTIONS VII
TABLE CAPTIONS IX
CHAPTER 1 Introduction 1
1.1 Motivation 2
1.2 System Overview 3
1.3 Thesis Organization 4
CHAPTER 2 Background and Review 6
2.1 Analysis of Facial Animation from an Image Sequence 8
2.2 Facial Muscular Deformation Method 9
2.3 Direct Parameterized Facial Animation 12
2.4 Facial Animation Tools in MPEG-4 14
CHAPTER 3 Facial Animation in MPEG-4 15
3.1 Facial Animation Parameters 16
3.1.1 Neutral Face and Facial Animation Parameter Units 19
3.2 Facial Definition Parameters 21
3.3 FAP Interpolation Table 24
3.4 Face Object Syntax 25
3.4.1 The Face Object 26
3.4.2 The Face Object Plane Header 27
3.4.3 The Face Object Plane Data 28
3.4.4 Types of Face Object Decoding 31
CHAPTER 4 Statistical Analysis of Facial Feature Points 34
4.1 Global Statistics of Facial Animation 35
4.1.1 Selection of Global Features 35
4.1.2 Detection of Global Features 36
4.1.3 Tracking of Global Features Using Block Matching 38
4.1.3.1 Three-Step Search Algorithm 40
4.1.4 Activeness Calculation of Global Features 41
4.2 Local Statistics of Facial Animation 42
4.2.1 Selection of Control Nodes 44
4.2.2 Tracking of Local Features Using Affine Transformation 46
CHAPTER 5 Experimental Results 51
5.1 Global Statistics Result 51
5.2 Local Statistics Result and FAP Selection 53
5.2.1 Weight Calculation of Feature Points and FAP Selection in Eyebrow 54
5.2.2 Weight Calculation of Feature Points and FAP Selection in Mouth 58
5.3 PSNR Comparison 67
CHAPTER 6 Conclusions 71
6.1 Conclusions 71
References 72