Student: 崔瀅和 (Ying-Ho Tsuei)
Thesis title (Chinese): 運用視訊中人體特徵點追蹤與定位之代理人同步系統
Thesis title (English): An Agent Synchronization System Using Human Feature Point Tracking and Localization in Video Sequences
Advisor: 張元翔 (Yuan-Hsiang Chang)
Degree: Master
Institution: Chung Yuan Christian University (中原大學)
Department: Institute of Information and Computer Engineering (資訊工程研究所)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2009
Academic year of graduation: 97 (2008–2009)
Language: Chinese
Pages: 61
Keywords (Chinese): 運動分析, 人機互動介面, 視訊處理, 特徵追蹤
Keywords (English): motion analysis, human-computer interaction interface, video processing, feature tracking
Abstract
Building a model is often the first step in many technical applications. In computer animation and film, a model's motion is typically driven by human motion estimated with a motion capture system. However, such systems are often too costly for the general public to use in everyday applications. To address this problem, we propose an agent synchronization system that uses human feature point tracking and localization in video sequences. The method comprises four stages: human feature point definition, level-based feature point tracking, feature point localization, and agent synchronization. The system is demonstrated on two video sequences, "symmetric arm motion" and "asymmetric arm motion", to build an agent whose motion matches and synchronizes with the human motion. The results show that the system can track and localize human feature points within a reasonable range. In conclusion, the system offers a cost-effective solution for building synchronized models; it could be incorporated into interactive virtual-environment software to enhance human interaction, or applied in computer animation and film to generate video sequences of model motion corresponding to human motion.
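The feature point localization stage described above compares color image histogram distributions between candidate regions across frames (cf. the Bhattacharyya similarity measure of refs. [18][19]). Below is a minimal sketch of such a comparison in Python; the function names, the 8-bin joint RGB histogram, and the search strategy are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def color_histogram(region, bins=8):
    """Normalized joint R/G/B histogram of an image region.

    `region` is an (H, W, 3) uint8 array; the 8 bins per channel
    are an assumption chosen to keep the distribution compact.
    """
    hist, _ = np.histogramdd(
        region.reshape(-1, 3).astype(np.float64),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalized histograms.

    Returns a value in [0, 1]; 1.0 means the two color
    distributions are identical.
    """
    return float(np.sum(np.sqrt(p * q)))
```

A tracker would slide a window over a neighborhood of the previous frame's feature point and keep the region whose histogram maximizes the coefficient against the reference region's histogram; a region compared with itself scores exactly 1.0.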
Table of Contents
Abstract (Chinese) I
Abstract (English) II
Acknowledgments III
Table of Contents IV
List of Figures VI
List of Tables VII
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation and Objectives 2
1.3 Related Work 3
1.4 Thesis Organization 6
Chapter 2 Theoretical Background 7
2.1 RGB Color Model 7
2.2 Color Image Histogram 9
2.3 Polar Coordinate Representation 11
2.4 Image Rotation by Nearest-Neighbor Interpolation 12
Chapter 3 Methods 15
3.1 Human Feature Point Definition 18
3.2 Level-Based Feature Point Tracking 21
3.3 Feature Point Localization 26
3.3.1 Color Image Histogram Distribution within a Region 27
3.3.2 Matching Color Image Histogram Distributions of Feature Point Regions 30
3.3.3 Head Localization 32
3.3.4 Skeleton Localization 34
3.3.5 Motion Path Smoothing 35
3.4 Agent Synchronization 37
Chapter 4 Results 40
4.1 Equipment and Environment 40
4.2 Agent Image Results 42
Chapter 5 Conclusion 50
References 52





List of Figures
Figure 1-1 Three human body models 4
Figure 2-1 The RGB color model 8
Figure 2-2 Example color image and its R, G, and B component histograms 10
Figure 2-3 Polar coordinates at multiple angles 11
Figure 2-4 Grayscale interpolation based on the nearest-neighbor method 13
Figure 2-5 Example grayscale image before rotation and after rotation by nearest-neighbor interpolation 14
Figure 3-1 Block diagram of the agent synchronization system using human feature point tracking and localization in video sequences 17
Figure 3-2 Human feature point definition for the agent 19
Figure 3-3 Example human body object in video 20
Figure 3-4 Level-based tracking path from the head to the left wrist 22
Figure 3-5 Level-based tracking path from the head to the right wrist 23
Figure 3-6 Level-based tracking path from the head to the left ankle 24
Figure 3-7 Level-based tracking path from the head to the right ankle 25
Figure 3-8 Block diagram of feature point localization 26
Figure 3-9 Curve of the weighting formula 29
Figure 3-10 Example matching of color image histogram distributions between two frames 31
Figure 3-11 Head feature point localization 33
Figure 3-12 Single-level torso and limb feature point localization 34
Figure 3-13 Motion path smoothing 37
Figure 3-14 Example input images of the agent's body parts 38
Figure 3-15 Example input human motion image and the resulting synchronized agent image 39
Figure 4-1 Human motion and the corresponding agent video during "symmetric arm motion" 44
Figure 4-2 Human motion and the corresponding agent video during "symmetric arm motion" 45
Figure 4-3 Human motion and the corresponding agent video during "asymmetric arm motion" 47
Figure 4-4 Human motion and the corresponding agent video during "asymmetric arm motion" 48
Figure 4-5 Human motion and the corresponding agent video during "asymmetric arm motion" 49

List of Tables
Table 4-1 System equipment and environment 41
References
[1] S. X. Ju, M. J. Blacky, and Y. Yacoob, “Cardboard people: A parameterized model of articulated image motion,” Proc. of IEEE International Conference on Automatic Face and Gesture Recognition, Killington, pp. 38–44, 1996.
[2] Z. Chen and H. J. Lee, “Knowledge-guided visual perception of 3-D human gait from a single image sequence,” IEEE Trans. on Systems, Man and Cybernetics, vol. 22, no. 2, pp. 336-342, 1992.
[3] M. K. Leung and Y. H. Yang, “First sight: A human body outline labeling system,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 17, no. 4, pp. 359-377, 1995.
[4] D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-577, May 2003.
[5] M. Isard and A. Blake, “Contour tracking by stochastic propagation of conditional density,” Proc. European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, Springer, 1996.
[6] R. T. Collins, “Mean-shift blob tracking through scale space,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2003.
[7] D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, May 2002.
[8] D. Comaniciu and V. Ramesh, “Real-time tracking of non-rigid objects using mean shift,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 142-149, June 2000.
[9] G. Welch and G. Bishop, “An Introduction to the Kalman Filter,” University of North Carolina at Chapel Hill, Chapel Hill, NC, 1995.
[10] S. J. Julier and J. K. Uhlmann, “A new extension of the Kalman filter to nonlinear systems,” Proc. SPIE, vol. 3068, pp. 182-193, 1997.
[11] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Trans. on Signal Processing, vol. 50, no. 2, pp. 174-189, February 2002.
[12] Z. Khan, T. Balch, and F. Dellaert, “An MCMC-based particle filter for tracking multiple interacting targets,” Proc. ECCV, Prague, May 2004.
[13] B. Stenger, P. Mendonca, and R. Cipolla, “Model based 3D tracking of an articulated hand,” CVPR, vol. 2, pp. 310-315, 2001.
[14] L. Vacchetti, V. Lepetit, and P. Fua, “Stable real-time 3D tracking using online and offline information,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1385-1391, 2004.
[15] E. A. Wan and R. Van der Merwe, “The unscented Kalman filter for nonlinear estimation,” Proc. Symp. Adaptive Syst. Signal Process., Commun. Contr., Lake Louise, AB, Canada, Oct 2000.
[16] K. Nummiaro, E. Koller-Meier, and L. Van Gool, “An adaptive color-based particle filter,” Image and Vision Computing, vol. 21, Issue 1, pp. 99-110, January 2003.
[17] W. Qu and D. Schonfeld, “Real-time decentralized articulated motion analysis and object tracking from videos,” IEEE Trans. on Image Processing, vol. 16, no. 8, August 2007.
[18] F. Aherne, N. Thacker, and P. Rockett, “The Bhattacharyya metric as an absolute similarity measure for frequency coded data,” Kybernetika, vol. 34, no. 4, pp. 363-368, 1998.
[19] T. Kailath, “The divergence and Bhattacharyya distance measures in signal selection,” IEEE Trans. Comm. Technology, vol. 15, pp. 52-60, 1967.