
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 翁琦峰
Author (English): Chi-Feng Wang
Title: 偵測視訊影片中的動作場景
Title (English): Action scene detection in video
Advisor: 陳良華
Advisor (English): Liang-Hua Chen
Degree: Master's
Institution: 輔仁大學 (Fu Jen Catholic University)
Department: 資訊工程學系 (Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2007
Graduation academic year: 95 (ROC calendar)
Language: Chinese
Chinese keywords: 動作場景偵測 (action scene detection); 動作活動量 (motion activity)
English keywords: action scene; motion activity; scene detection
Usage statistics:
  • Cited by: 0
  • Views: 142
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 1
As computing power has steadily increased, the use of digital multimedia data has grown substantially, and research on and applications of multimedia have become correspondingly widespread. Within video content classification in particular, how to classify content accurately is an important research issue.
The main goal of this thesis is to automatically detect fight scenes and other intense segments in films (action sequence detection). We first segment the video into shots (shot change detection), and then use the resulting shots for clustering and motion activity analysis. Clustering compares the similarity of the segmented shots and merges highly similar shots into a single scene. Motion activity analysis computes motion vectors for each shot in the video. Combining the merged shots with the motion activity data, a Support Vector Machine classifies each scene as an action scene or a non-action scene.
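This record does not spell out implementation details, but the first stage of the pipeline described above can be illustrated with a short sketch. The sketch below assumes OpenCV, an HSV color-histogram comparison for shot change detection, and a simple middle-frame rule for key-frame selection; the bin counts and the chi-square threshold are illustrative assumptions, not values taken from the thesis.

# Minimal sketch of histogram-based shot change detection and key-frame
# selection. Bin counts, the chi-square threshold, and the mid-shot
# key-frame rule are illustrative assumptions, not the thesis's exact values.
import cv2
import numpy as np

def color_hist(frame, bins=8):
    """Normalized 3-D HSV color histogram of one frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def detect_shots(video_path, threshold=0.6):
    """Return (start, end) frame indices of each shot.

    A cut is declared whenever the chi-square distance between
    consecutive frame histograms exceeds `threshold`.
    """
    cap = cv2.VideoCapture(video_path)
    shots, start, prev_hist, idx = [], 0, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = color_hist(frame)
        if prev_hist is not None:
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CHISQR)
            if dist > threshold:          # abrupt shot boundary
                shots.append((start, idx - 1))
                start = idx
        prev_hist = hist
        idx += 1
    cap.release()
    if idx > 0:
        shots.append((start, idx - 1))
    return shots

def key_frame_index(shot):
    """Pick the temporally middle frame of a shot as its key-frame."""
    start, end = shot
    return (start + end) // 2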
With the increasing speed of computers and the growing use of multimedia data, research on and applications of multimedia have become more popular. How to classify multimedia content is therefore an important issue.
The goal of this research is to detect action scenes in movies automatically. We detect shot boundaries, extract key-frames in order to cluster visually similar shots, and compute the global motion intensity of each shot. By combining the clustering and motion activity analysis results, we use a Support Vector Machine to classify each scene as an action scene or a non-action scene. Experimental results show that the proposed approach is promising.
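As a companion to the abstract, the following sketch illustrates the shot-level motion activity feature and the SVM classification step. The use of dense Farneback optical flow as the motion-vector source, the two-element per-scene feature vector, and the scikit-learn RBF-kernel SVM are assumptions for illustration; the thesis's exact features and classifier settings are not given in this record.

# Illustrative sketch of shot-level motion activity and SVM-based
# action / non-action scene classification. The feature layout (mean
# optical-flow magnitude plus shot count) and the RBF-kernel SVM
# hyperparameters are assumptions, not taken from the thesis.
import cv2
import numpy as np
from sklearn.svm import SVC

def shot_motion_activity(video_path, shot):
    """Average dense optical-flow magnitude over one shot."""
    start, end = shot
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start)
    prev_gray, magnitudes = None, []
    for _ in range(start, end + 1):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            magnitudes.append(np.linalg.norm(flow, axis=2).mean())
        prev_gray = gray
    cap.release()
    return float(np.mean(magnitudes)) if magnitudes else 0.0

def scene_features(video_path, scene_shots):
    """Per-scene feature vector: [mean motion activity, shot count]."""
    activities = [shot_motion_activity(video_path, s) for s in scene_shots]
    return [float(np.mean(activities)), len(scene_shots)]

def train_classifier(X, y):
    """X: one feature vector per scene; y: 1 = action, 0 = non-action."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, y)
    return clf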
Chapter 1: Introduction
1.1 Motivation
1.2 Action Scenes in Film
1.3 Problems of Action Scenes
1.4 Thesis Organization
Chapter 2: Related Work
2.1 Terminology
2.2 Shot Boundary Detection
2.3 Hierarchical Structure of Video
2.4 Motion Activity Features
2.5 Existing Scene Detection and Classification Methods
Chapter 3: Proposed Method
3.1 Shot Change Detection
3.1.1 Types of Shot Boundaries
3.1.2 Detecting Shot Boundaries
3.1.3 Key-frame Selection
3.2 Shot Clustering
3.3 Motion Activity Analysis
3.3.1 Motion Vectors
3.3.2 Motion Vector Computation
3.4 Action Scene Detection
3.4.1 Factors Considered
3.4.2 Action Scenes
3.4.3 Support Vector Machine
Chapter 4: Experimental Results
4.1 Experimental Data
4.2 Discussion
Chapter 5: Conclusion
Chapter 6: References
[1] H. J. Zhang, A. Kankanhalli, and S. W. Smoliar, "Automatic Partitioning of Full-motion Video," Multimedia Systems, vol. 1, no. 1, pp. 10-28, 1993.

[2] Y. Li, T. Zhang, and D. Tretter, "An Overview of Video Abstraction Techniques," Imaging Systems Laboratory, HP Laboratories Palo Alto, Technical Report HPL-2001-191, July 2001.

[3] B. Lehane, N. E. O'Connor, and N. Murphy, "Action Sequence Detection in Motion Pictures," Dublin City University.

[4] M. Yeung and B.-L. Yeo, "Video Visualization for Compact Presentation and Fast Browsing of Pictorial Content," IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, no. 5, pp. 771-785, Oct. 1997.

[5] K. A. Peker, A. A. Alatan, and A. N. Akansu, "Low-level Motion Activity Features for Semantic Characterization of Video," New Jersey Center for Multimedia Research, New Jersey Institute of Technology.

[6] H.-W. Chen, J.-H. Kuo, W.-T. Chu, and J.-L. Wu, "Action Movies Segmentation and Summarization Based on Tempo Analysis," in Proceedings of the ACM SIGMM International Workshop on Multimedia Information Retrieval, pp. 251-258, 2004.

[7] R. Kasturi and R. Jain, "Dynamic Vision," in Computer Vision: Principles, R. Kasturi and R. Jain, Eds., IEEE Computer Society Press, Washington, 1991.

[8] F. Arman, A. Hsu, and M.-Y. Chiu, "Feature Management for Large Video Databases," in Proc. SPIE Storage and Retrieval for Image and Video Databases, 1993.

[9] R. Zabih, J. Miller, and K. Mai, "A Feature-Based Algorithm for Detecting and Classifying Scene Breaks," in Proc. ACM Multimedia 95, San Francisco, CA, pp. 189-200, Nov. 1995.

[10] F. Dufaux, "Key Frame Selection to Represent a Video," in Proceedings of the 2000 International Conference on Image Processing, vol. 2, pp. 275-278, 2000.

[11] A. Girgensohn and J. Boreczky, "Time-constrained Keyframe Selection Technique," in Proc. of the International Conference on Multimedia Computing and Systems, pp. 756-761.

[12] K. A. Peker, A. A. Alatan, and A. N. Akansu, "Low-level Motion Activity Features for Semantic Characterization of Video," in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2000), 2000.

[13] C.-J. Lin, "A Practical Guide to Support Vector Machines," August 28, 2004.

[14] C.-C. Chang and C.-J. Lin, LIBSVM (version 2.82): A Library for Support Vector Machines, 2006.

[15] L. Chen and M. T. Özsu, "Rule-Based Scene Extraction from Video," in Proceedings of the IEEE International Conference on Image Processing (ICIP 2002), pp. II-737, 2002.

[16] L. Chen, S. J. Rizvi, and M. T. Özsu, "Incorporating Audio Cues into Dialog and Action Scene Extraction," in Proc. of SPIE Storage and Retrieval for Media Databases, 2003.

[17] A. Yoshitaka, T. Ishii, and M. Hirakawa, "Content-Based Retrieval of Video Data by the Grammar of Film," in IEEE Symposium on Visual Languages, 1997.

[18] R. Lienhart, S. Pfeiffer, and W. Effelsberg, "Scene Determination Based on Video and Audio Features," ICVIS, 1999.

[19] Y. Li, S. Narayanan, and C.-C. Jay Kuo, "Movie Content Analysis, Indexing, and Skimming," in Video Mining, Chapter 5, Kluwer Academic Publishers, 2003.

[20] Y. Zhai, Z. Rasheed, and M. Shah, "Finite State Machines in Movie Scene Classification," in Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 2004.

[21] Y. Zhai, Z. Rasheed, and M. Shah, "A Framework for Semantic Classification of Scenes Using Finite State Machines," in Proceedings of the International Conference on Image and Video Retrieval, Springer.