Student: 林昱君
Student (English): Yu-chun Lin
Title: 以最短路徑為搜尋基礎之人體動作即時合成方法
Title (English): Real-Time Human Motion Synthesis by Shortest Path Searching
Advisor: 林信志
Advisor (English): Hsin-Chih Lin
Degree: Master's
Institution: 國立臺南大學 (National University of Tainan)
Department: 數位學習科技學系碩士班 (Master's Program, Department of Digital Learning Technology)
Discipline: Education
Field: Educational Technology
Document type: Academic thesis
Year of publication: 2008
Graduation academic year: 96 (ROC calendar)
Language: Chinese
Number of pages: 37
Keywords (Chinese): 最短路徑問題, Floyd-Warshall 演算法, 人體動作合成, 動作擷取, 動作資料, 關鍵姿勢
Keywords (English): human motion synthesis, motion data, key postures, Floyd-Warshall algorithm, motion capture, shortest path searching
Record statistics:
  • Cited by: 0
  • Views: 295
  • Downloads: 40
  • Bookmarked: 0
Abstract (Chinese, translated):
In recent years, virtual characters (avatars) in films, commercials, and video games have relied heavily on motion capture equipment to record motion data from real performers, so that the avatars can act out a wide range of lifelike movements. Using Tai-Chi Chuan motion data as an example, this study treats human motion synthesis as the problem of searching for shortest paths in a directed, weighted graph. With the Floyd-Warshall algorithm, the study first searches the directed, weighted graph constructed from the motion database for the shortest paths between all pairs of transition points and stores them as synthesis information. After the user selects, from the motion database, several key postures that must appear, the system uses this synthesis information to find, in real time, a shortest playback path that plays all the key postures in sequence and synthesizes the Tai-Chi Chuan motion the user requires. During the transition-point search, the study checks whether each node of the human motion data preserves both position and velocity continuity (known geometrically as C1 continuity), so as to produce natural and smooth continuous motion.
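The core precomputation described above is an all-pairs shortest-path search over the transition-point graph. Below is a minimal Python sketch of that step, assuming the graph is given as a dense matrix of transition costs; the matrix representation, the INF sentinel, and the next-hop table used to rebuild paths are illustrative choices, not details taken from the thesis.

# Floyd-Warshall all-pairs shortest paths over a directed, weighted
# graph of transition points, as described in the abstract.
# The dense cost matrix and the next-hop table are assumptions for
# illustration only.

INF = float("inf")

def floyd_warshall(cost):
    """cost[i][j]: transition cost from point i to point j (INF if no edge).
    Returns (dist, nxt), where nxt lets us reconstruct shortest paths."""
    n = len(cost)
    dist = [row[:] for row in cost]
    nxt = [[j if cost[i][j] < INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def reconstruct_path(nxt, i, j):
    """Rebuild the stored shortest path of transition points from i to j."""
    if nxt[i][j] is None:
        return None
    path = [i]
    while i != j:
        i = nxt[i][j]
        path.append(i)
    return path

Under these assumptions, dist[i][j] is the total cost of the best transition sequence from point i to point j, and reconstruct_path(nxt, i, j) recovers that sequence, which corresponds to the stored "synthesis information" mentioned in the abstract.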
Abstract (English):
In this study, we deal with the problem of synthesizing new human motion data in real time from an existing Tai-Chi Chuan (太極拳) motion database. The proposed system consists of three major steps: (1) data preprocessing, (2) posture matching, and (3) motion synthesis. After finding transition points in the Tai-Chi Chuan motion data, the problem of motion synthesis can be regarded as that of searching for the shortest path in a directed and weighted graph that represents the relationships among transition points. To achieve real-time synthesis, the shortest path for each pair of transition points is found and stored in advance by the Floyd-Warshall algorithm.
Users can specify a number of key postures in an intuitive way; the proposed system then synthesizes new motion data in which the key postures are played in the user-specified order. To make the synthesized motion smooth and natural-looking, we check position and velocity continuity for each pair of transition points, based on the concept of C1 continuity. Experimental results demonstrate the effectiveness and efficiency of our approach.
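As a rough illustration of the remaining two ideas in the abstract, the sketch below (which reuses reconstruct_path from the previous sketch) chains the precomputed shortest paths so that the selected key postures are visited in the user-specified order, and shows one simple way to test position and velocity (C1) continuity at a candidate transition; the frame representation, the finite-difference velocity estimate, and the tolerance values are assumptions, not values from the thesis.

import numpy as np

def synthesize_order(key_postures, nxt):
    """Chain precomputed shortest paths so that the key postures
    (given as transition-point indices) are visited in the
    user-specified order."""
    full_path = [key_postures[0]]
    for a, b in zip(key_postures, key_postures[1:]):
        segment = reconstruct_path(nxt, a, b)
        if segment is None:
            raise ValueError(f"no transition path from {a} to {b}")
        full_path.extend(segment[1:])   # drop the duplicated start point
    return full_path

def is_c1_continuous(frames_a, frames_b, pos_tol=0.05, vel_tol=0.05):
    """Check position and velocity continuity (C1) at a candidate
    transition from the end of clip A to the start of clip B.
    frames_a, frames_b: (n_frames, n_dof) arrays of joint parameters;
    the tolerances are illustrative."""
    pos_gap = np.linalg.norm(frames_a[-1] - frames_b[0])
    vel_a = frames_a[-1] - frames_a[-2]      # finite-difference velocity
    vel_b = frames_b[1] - frames_b[0]
    vel_gap = np.linalg.norm(vel_a - vel_b)
    return pos_gap < pos_tol and vel_gap < vel_tol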
Table of Contents:
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iii
Table of Contents iv
List of Tables v
List of Figures vi
Chapter 1. Introduction 1
Chapter 2. Literature Review 2
Chapter 3. System Architecture 6
Chapter 4. Data Preprocessing 8
  Section 1. Human Motion Data Structure 10
  Section 2. Noise Processing 13
Chapter 5. Posture Matching 15
  Section 1. Transition Point Searching 15
  Section 2. Shortest Path Searching 21
Chapter 6. Motion Synthesizer 25
Chapter 7. Experimental Results 28
Chapter 8. Conclusions and Future Work 34
References 35