
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: Chun-Chih Wu (巫俊志)
Thesis title: Automatic Text to Character Animation Conversion Mechanism (文字敘述與角色動畫之自動轉換機制)
Advisor: Shi-Nine Yang (楊熙年)
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2002
Academic year of graduation: 90 (ROC calendar)
Language: Chinese
Pages: 41
Chinese keywords: 動作合成 (motion synthesis), 動作擷取 (motion capture), 人體動畫 (human animation), 互動式控制 (interactive control)
English keywords: Motion synthesis, Motion capture, Human motion, Interactive control
Usage statistics:
  • Cited by: 2
  • Views: 371
  • Rating:
  • Downloads: 60
  • Bookmarked: 4
For virtual human simulation systems, effectively representing and controlling a 3D virtual human model is of great importance. Common commercial animation packages, such as Alias/Wavefront's Maya and Autodesk's 3D Studio Max, do provide interfaces for interactively manipulating virtual characters, but they require users to have specialized professional training and animation skills. For general users, we would like interaction with virtual humans to be natural and intuitive. In this thesis, we propose a mechanism that automatically converts textual descriptions into character animation. The conversion mechanism systematically links the word space to the motion space. In the word space, motion-related phrases serve as elements and the connections between phrases serve as operators, combining into sentences that describe movement. The motion space is likewise built from basic elements and the connections between them; however, its basic elements are not easy to determine. We propose techniques for extracting basic elements from the motion space, along with an algorithm for stitching these elements together. Through descriptions of movement in the word space, the underlying motion space can then generate the corresponding motions in real time.
First, we analyze the motion-related phrases in the current vocabulary and, drawing on how existing animation systems describe motion, define a high-level animation script language. We also implement a translator for this script language that links its commands to the underlying motions. In addition, a virtual character must possess a rich repertoire of motion behaviors. The most common approach today is to build a database of movement data recorded with motion capture. However, motion capture data is a low-level, frame-based representation that lacks the high-level structure suitable for real-time motion synthesis. We propose a method for extracting primitive motions from the motion data. The relations among primitive motions are represented by a probabilistic finite state automaton (PFSA). Each extracted primitive motion is then annotated using the animation script language defined earlier. Through these steps, we transform the originally low-level, frame-based motion representation into a structured representation with high-level semantics that is suitable for real-time motion synthesis. The underlying motion synthesizer handles the stitching of primitive motions to produce smooth, continuous movement. Users can then quickly synthesize the character animation they have in mind simply through linguistic descriptions. Finally, we synthesize new Tai-Chi Chuan sequences as an example to validate the feasibility of the proposed framework.
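As a rough illustration of the primitive-motion extraction step described above, the toy sketch below labels each frame by its nearest pose prototype and treats each maximal run of identical labels as one primitive motion. The one-dimensional "poses" and prototypes are invented stand-ins for real joint-angle vectors; the thesis's actual clustering (Section 4.1.1) is more sophisticated.

```python
# Toy sketch of segmenting frame-based motion data into primitive motions.
# Each frame is a scalar stand-in for a joint-angle vector; prototypes are
# assumed cluster centres, not values taken from the thesis.
frames = [0.1, 0.2, 0.15, 2.0, 2.1, 1.9, 0.05, 0.1]
prototypes = [0.0, 2.0]

def label(frame):
    """Index of the nearest prototype (1-D nearest-neighbour assignment)."""
    return min(range(len(prototypes)), key=lambda i: abs(frame - prototypes[i]))

def segment(frames):
    """Split the frame stream into (label, start, end) primitive segments,
    where each segment is a maximal run of identically-labelled frames."""
    labels = [label(f) for f in frames]
    prims, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            prims.append((labels[start], start, i))
            start = i
    return prims

print(segment(frames))  # [(0, 0, 3), (1, 3, 6), (0, 6, 8)]
```

Each tuple names a candidate primitive and its frame range; a real system would then compare and merge similar segments across the whole database.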
In this thesis, we describe a new framework for synthesizing believable motion from high-level linguistic descriptions of human movement. The framework bridges the gap between the "word space" and the "motion space". The word space consists of elements, which are phrases describing movement, and operators such as concatenation. The motion space likewise consists of elements and operators; however, its elements are difficult to determine. We present techniques for extracting the elements of the motion space and for piecing them together.
First, we analyze existing phrases that describe movement and construct the syntax of an animation script language. This language, which defines the syntactic and semantic attributes for describing human movement, is based on XML Schema. Furthermore, we develop an animation script translator that connects the linguistic descriptions with the underlying motions.
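The abstract states that the script language is XML-based but does not reproduce its schema here. The sketch below is a hypothetical illustration of what such a script and a minimal translator might look like; the element and attribute names (`animation`, `move`, `name`, `speed`) are assumptions, not the thesis's actual syntax.

```python
import xml.etree.ElementTree as ET

# Hypothetical animation script: the vocabulary is illustrative only,
# not the schema defined in the thesis.
script = """
<animation actor="taichi_master">
  <move name="single_whip" speed="slow"/>
  <move name="white_crane" speed="normal"/>
</animation>
"""

def translate(xml_text):
    """Translate a script into a flat list of (motion, speed) commands
    that a lower-level motion synthesizer could consume."""
    root = ET.fromstring(xml_text)
    return [(m.get("name"), m.get("speed", "normal"))
            for m in root.findall("move")]

print(translate(script))  # [('single_whip', 'slow'), ('white_crane', 'normal')]
```

A real translator would also validate the script against the XML Schema and map each named motion to an annotated primitive in the database.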
Next, a large repertoire of character motions must be made available. One common solution to this problem is motion capture. However, on-line motion generation from motion capture data is difficult because the data is relatively unstructured. We present techniques for analyzing motion capture data to extract primitive motions. The inferred primitive motions constitute a probabilistic finite state automaton (PFSA). We annotate each primitive motion using our animation script language. After these steps, we transform the low-level, frame-based motion capture data into a semantic structure appropriate for real-time motion synthesis. The motion synthesizer then seamlessly stitches primitive motions into smooth, continuous streams of motion. Motions can thus be generated simply from linguistic descriptions of human movement. Because the motions are generated in real time, novel, complex motions can be authored interactively. Our approach is demonstrated with many synthesized sequences of visually compelling Tai-Chi Chuan motion.
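The PFSA over primitive motions can be sketched as a weighted transition table plus a random walk, as below. The state names and transition probabilities are invented for illustration; in the thesis they are inferred from motion capture data rather than written by hand.

```python
import random

# Minimal sketch of a probabilistic finite state automaton (PFSA) over
# primitive motions. States and probabilities are illustrative inventions.
transitions = {
    "idle":         [("step_forward", 0.6), ("idle", 0.4)],
    "step_forward": [("push_hands", 0.5), ("idle", 0.5)],
    "push_hands":   [("idle", 1.0)],
}

def walk(start, steps, rng=random.Random(0)):
    """Generate a motion sequence by a weighted random walk through the PFSA."""
    state, seq = start, [start]
    for _ in range(steps):
        nexts, weights = zip(*transitions[state])
        state = rng.choices(nexts, weights=weights)[0]
        seq.append(state)
    return seq

print(walk("idle", 5))
```

In the full system each state would carry the annotated motion clip, and the synthesizer would blend the clips at every transition instead of merely emitting state names.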
Chinese Abstract i
Abstract ii
Acknowledgement iii
Table of Contents iv
List of Figures vi
Chapter 1. Introduction 1
1.1 Algorithm outline 2
1.2 Thesis layout 3
Chapter 2. Related work 4
2.1 Virtual human standards 4
2.1.1 SNHC (Synthetic/Natural Hybrid Coding) 4
2.1.2 H-ANIM (Humanoid Animation Working Group) 6
2.2 Motion control 6
2.3 Motion analysis 8
Chapter 3. Text to character animation conversion framework 9
3.1 High-level script language 10
3.2 Script translator 13
3.2.1 XML 13
3.2.2 Syntax checking 13
3.3 Animation generator 13
3.4 Virtual human information database 14
3.4.1 Skeleton information 14
3.5 Animation player 15
Chapter 4. Motion data analysis and motion synthesizer 16
4.1 Motion data analysis 16
4.1.1 Motion data clustering 17
4.1.2 Identifying semantic structure 20
4.2 Motion synthesizer 22
Chapter 5. Results 30
Chapter 6. Conclusion and future work 39
References 40
Amaya, K., Bruderlin, A., and Calvert, T. 1996. Emotion from Motion. In Proceedings of Graphics Interface 1996.
Arikan, O., and Forsyth, D. A. 2002. Interactive Motion Generation from Examples. In Proceedings of ACM SIGGRAPH 2002.
Badler, N., Bindiganavale, R., Schuler, W., Allbeck, J., Joshi, A., and Palmer, M. 2000. Dynamically Altering Agent Behaviors Using Natural Language Instructions. In Autonomous Agents 2000.
Bruderlin, A. 1995. Procedural Motion Control Techniques for Interactive Animation of Human Figures. PhD thesis, Simon Fraser University.
Bruderlin, A., and Williams, L. 1995. Motion Signal Processing. In Proceedings of ACM SIGGRAPH 1995.
Cassell, J., Vilhjalmsson, H., and Bickmore, T. 2001. BEAT: the Behavior Expression Animation Toolkit. In Proceedings of ACM SIGGRAPH 2001.
Chi, D., Costa, M., Zhao, L., and Badler, N. 2000. The EMOTE Model for Effort and Shape. In Proceedings of ACM SIGGRAPH 2000.
Galata, A., Johnson, N., and Hogg, D. 2001. Learning Variable Length Markov Models of Behaviour. In Journal of Computer Vision and Image Understanding 2001.
Jang, J.-S. R., Sun, C.-T. and Mizutani, E. 1997. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence.
Kovar, L., Gleicher, M., and Pighin, F. 2002. Motion Graphs. In Proceedings of ACM SIGGRAPH 2002.
Lee, J., Chai, J.-X., Reitsma, P. S. A., Hodgins, J. K., and Pollard, N. S. 2002. Interactive Control of Avatars Animated with Human Motion Data. In Proceedings of ACM SIGGRAPH 2002.
Li, Y., Wang, T., and Shum, H.-Y. 2002. Motion Texture: A Two-Level Statistical Model for Character Motion Synthesis. In Proceedings of ACM SIGGRAPH 2002.
Nevill-Manning, C. G., and Witten, I. H. 1997. Identifying Hierarchical Structure in Sequences: A Linear-Time Algorithm. In Journal of Artificial Intelligence Research.
Park, S. I., Shin, H. J., and Shin, S. Y. 2002. On-line Locomotion Generation Based on Motion Blending. In ACM SIGGRAPH Symposium on Computer Animation 2002.
Perlin, K., and Goldberg, A. 1996. Improv: A System for Scripting Interactive Actors in Virtual Worlds. In Proceedings of ACM SIGGRAPH 1996.
Rose, C., Cohen, M. F., and Bodenheimer, B. 1998. Verbs and Adverbs: Multidimensional Motion Interpolation. In IEEE Computer Graphics and Applications 1998.
Unuma, M., Anjyo, K., and Takeuchi, R. 1995. Fourier Principles for Emotion-based Human Figure Animation. In Proceedings of ACM SIGGRAPH 1995.
Wang, T., Shum, H.-Y., Xu, Y.-Q., and Zheng, N.-N. 2001. Unsupervised Analysis of Human Gestures. In Proceedings of PCM 2001.
Wei, L. Y., and Levoy, M. 2000. Fast texture synthesis using tree-structured vector quantization. In Proceedings of ACM SIGGRAPH 2000.
Witkin, A. P., and Popovic, Z. 1995. Motion Warping. In Proceedings of ACM SIGGRAPH 1995.
Zhao, T., Wang, T., and Shum, H.-Y. 2002. Learning A Highly Structured Motion Model for 3D Human Tracking. In Proceedings of ACCV 2002.
王子和, and 歐業超. 1998. 太極拳縮影. 養正堂文化.