National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 李靜瑋
Author (English): Ching-Wei Lee
Title: 運用運動特徵之統計特性進行視訊內容分類
Title (English): Statistical Motion Characterization for Video Content Classification
Advisor: 許秋婷
Advisor (English): Chiou-Ting Hsu
Degree: Master's
Institution: National Tsing Hua University
Department: Computer Science
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2004
Graduation academic year: 92 (ROC calendar)
Language: English
Pages: 59
Keywords (Chinese): 最大相似法、統計模組、視訊影片分類
Keywords (English): maximum likelihood estimation, statistical modeling, video classification
Usage statistics:
  • Cited: 0
  • Views: 125
  • Downloads: 0
  • Bookmarked: 0
Abstract:
In this thesis, we propose a characterization of the dynamic content of video clips that requires neither prior motion segmentation nor complete motion estimation. To this end, we estimate motion magnitudes and motion directions from the pixelwise normal flow and use three single Gibbs models to represent, respectively, the distribution of motion magnitude along the temporal axis, the spatial structure of motion magnitude, and the spatial structure of motion direction. The potential values defining each single Gibbs model are estimated by the maximum likelihood criterion. We then combine the three single Gibbs models into four composite Gibbs models that characterize the dynamic content more completely. To demonstrate the effectiveness of the proposed models, we apply them to video content classification; experimental results show that the composite models achieve better classification performance than the single models.
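The abstract's starting point, the pixelwise normal flow, can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: under the brightness-constancy assumption, the normal flow is the flow component along the spatial intensity gradient, with magnitude |I_t| / |∇I| and direction given by the gradient orientation. The function name and the `eps` regularizer are my own choices for the sketch.

```python
import numpy as np

def normal_flow(frame_prev, frame_next, eps=1e-6):
    """Sketch: pixelwise normal-flow magnitude and direction.

    The normal flow is the component of optical flow along the spatial
    intensity gradient: v_n = -I_t / |grad I|, oriented along grad I.
    `eps` guards against division by zero in flat regions (an assumption
    of this sketch, not a detail taken from the thesis).
    """
    I1 = frame_prev.astype(np.float64)
    I2 = frame_next.astype(np.float64)
    It = I2 - I1                       # temporal derivative I_t
    Iy, Ix = np.gradient(I1)           # spatial gradients (row, column)
    grad_mag = np.sqrt(Ix**2 + Iy**2)  # |grad I|
    magnitude = np.abs(It) / (grad_mag + eps)  # |v_n| per pixel
    direction = np.arctan2(Iy, Ix)             # gradient orientation
    return magnitude, direction
```

The per-pixel `magnitude` and `direction` maps returned here are the kind of raw measurements whose temporal and spatial distributions the three Gibbs models described above would then summarize.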