National Digital Library of Theses and Dissertations in Taiwan

Detailed Record
Author: Duan-Yu Chen (陳敦裕)
Title: Towards High-Level Content-Based Video Retrieval and Video Structuring
Title (Chinese): 高階視訊處理、擷取、特徵粹取及視訊結構化計算之研究
Advisor: Suh-Yin Lee (李素瑛)
Degree: Ph.D.
Institution: National Chiao Tung University
Department: Department of Computer Science and Information Engineering
Discipline: Engineering / Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2004
Graduation academic year: 93 (AY 2004/2005)
Language: English
Number of pages: 151
Keywords (Chinese): 視訊處理、視訊擷取、視訊特徵粹取、視訊結構化計算
Keywords (English): video processing; content-based video retrieval; video feature extraction; video structuring
Record statistics: cited by 0; viewed 225 times; downloaded 1 time; bookmarked 0 times
摘要 (translated):
With the growth of digital video in education, entertainment, and other multimedia applications, the volume of digital video data has increased rapidly. Users therefore need effective tools to obtain the video data they want quickly and efficiently. Among approaches to video search, content-based methods carry the most high-level semantic meaning and are the most natural and user-friendly. Content-based video searching, browsing, and retrieval has thus attracted researchers from many fields to develop techniques for extracting high-level features from video data in support of efficient search and retrieval. Meanwhile, as video compression techniques have matured, more and more video data is stored in compressed form, especially in MPEG format, which has drawn increasing research attention to extracting high-level features directly from compressed video. This thesis aims to develop compact and effective video features and to achieve semantically meaningful high-level video structuring.
First, we detect moving objects in compressed video and propose a multi-object tracking algorithm that tracks objects and generates their trajectories. From these trajectories, the corresponding events are inferred and labeled, and an event-based structured video browsing system is built.
For high-level video structuring, textual data is an even more semantically meaningful feature than visual data alone. We therefore also propose detecting superimposed closed captions in compressed video, using the long-term persistence of captions as the basis for noise filtering, together with the relatively high gradient energy of caption text, so that meaningful captions are obtained in support of semantic video structuring.
To enable effective video similarity matching for retrieval, we also propose two high-level moving-object-based features: the T2D-Histogram descriptor and the Temporal MIMB Moments descriptor. Unlike traditional feature extraction methods that consider only spatial properties, the two proposed descriptors exploit both the spatial and the temporal characteristics of video. Using the energy concentration property of the Discrete Cosine Transform, we link the spatial features of successive frames along time and greatly reduce the dimensionality of the features, achieving compact high-level video features while keeping video similarity matching highly effective.
We conducted extensive experiments to evaluate the proposed methods. Within the scope of our experiments, the results show that, on a wide range of test videos, our video similarity matching methods outperform many well-known approaches.
With the rapid growth of digital video in education, entertainment, and other multimedia applications, there is an urgent demand for tools that allow users to acquire desired video data efficiently. Content-based searching, browsing, and retrieval is more natural, friendly, and semantically meaningful to users. As video compression techniques mature, a large portion of video is stored in compressed form, and accordingly more and more research focuses on feature extraction from compressed videos, especially in MPEG format. This thesis aims to investigate high-level semantic video features in the compressed domain for efficient video retrieval and browsing.
We propose an approach to video abstraction that generates semantically meaningful video clips and associated metadata. Based on the long-term consistency of the spatio-temporal relationships between objects in consecutive P-frames, a multi-object tracking algorithm is designed to locate objects and to generate the trajectory of each object without size constraints. Using the object trajectories coupled with domain knowledge, an event inference module detects and identifies events in tennis videos. The event information and the metadata of the associated video clips are then extracted, accomplishing the abstraction of video streams.
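The trajectory-to-event step can be pictured with a toy sketch. Everything here (the `(frame, x, y)` trajectory format, the net/court thresholds, and the event labels) is a hypothetical illustration; the thesis's actual inference module is driven by richer domain knowledge:

```python
def label_event(trajectory, net_x, court_left, court_right):
    """Very rough event inference from a ball trajectory.

    trajectory: list of (frame, x, y) points (illustrative format).
    net_x, court_left, court_right: hypothetical court geometry in pixels.
    """
    xs = [x for _, x, _ in trajectory]
    # Count sign changes of (x - net_x): each one is a net crossing.
    crossings = sum(
        1 for a, b in zip(xs, xs[1:])
        if (a - net_x) * (b - net_x) < 0
    )
    if crossings == 0:
        return "serve fault or no rally"
    # If the trajectory ends outside the court bounds, label it accordingly.
    out_of_court = xs[-1] < court_left or xs[-1] > court_right
    return "rally ending out" if out_of_court else "rally"
```

The point of the sketch is only the pipeline shape: a compact trajectory plus a handful of geometric rules already yields semantic event labels.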
A novel mechanism is proposed to automatically parse sports videos in the compressed domain and then construct a concise table of video content from the superimposed closed captions and the semantic classes of video shots. An efficient approach to closed caption localization first detects caption frames in meaningful shots. These caption frames, rather than every frame, are then used as targets for detecting closed captions based on long-term consistency, without size constraints. In addition, to discriminate captions of interest automatically, a novel tool, a font size detector, is proposed to recognize the font size of closed captions directly from the compressed data of MPEG videos.
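The long-term-consistency filter can be sketched as follows. Representing caption candidates as sets of macroblock positions per frame, and the `min_frames` threshold, are illustrative assumptions, not the thesis's exact data structures:

```python
def persistent_captions(candidate_maps, min_frames=30):
    """Keep only positions flagged as caption candidates in at least
    min_frames *consecutive* frames (long-term consistency filtering).

    candidate_maps: list of sets of (row, col) macroblock coordinates,
    one set per analyzed frame (hypothetical representation).
    """
    run_length = {}   # current consecutive-frame run per position
    confirmed = set()
    for frame in candidate_maps:
        next_runs = {}
        for pos in frame:
            next_runs[pos] = run_length.get(pos, 0) + 1
            if next_runs[pos] >= min_frames:
                confirmed.add(pos)
        run_length = next_runs  # positions absent this frame reset to 0
    return confirmed
```

Transient noise blocks appear in only a few frames and never accumulate a long run, so they are filtered out, while genuine captions, which persist on screen, survive.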
For effective video retrieval, we propose a high-level motion activity descriptor, the object-based transformed 2D-histogram (T2D-Histogram), which exploits both spatial and temporal features to characterize video sequences in a semantics-based manner. The Discrete Cosine Transform (DCT) is applied to convert the object-based 2D-histogram sequences from the time domain to the frequency domain. With this transform, the high-dimensional time-domain features used to represent successive frames are reduced to a small set of low-dimensional frequency-domain features. The energy concentration property of the DCT allows us to use only a few DCT coefficients to effectively capture the variations of moving objects. With this compact video representation, video retrieval can be performed both accurately and efficiently.
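A minimal sketch of the DCT-based temporal compaction, assuming the per-frame histograms are stacked into a frames-by-bins matrix; the function names and the orthonormal DCT-II convention are illustrative choices, not the thesis's exact formulation:

```python
import numpy as np
from math import cos, pi

def dct_1d(signal):
    """Orthonormal DCT-II of a 1-D signal (plain reference implementation)."""
    n = len(signal)
    out = np.empty(n)
    for k in range(n):
        s = sum(signal[t] * cos(pi * (2 * t + 1) * k / (2 * n)) for t in range(n))
        scale = (1 / n) ** 0.5 if k == 0 else (2 / n) ** 0.5
        out[k] = scale * s
    return out

def t2d_signature(histogram_sequence, num_coeffs=8):
    """Compress a (frames x bins) histogram sequence: apply the DCT to each
    bin's time series and keep only the first num_coeffs coefficients,
    relying on the DCT's energy concentration in low frequencies."""
    seq = np.asarray(histogram_sequence, dtype=float)
    return np.stack([dct_1d(seq[:, b])[:num_coeffs] for b in range(seq.shape[1])])
```

For a 16-frame sequence of 4-bin histograms, the signature shrinks from 64 values to `4 * num_coeffs`, yet the low-order coefficients retain the dominant temporal variation of each bin.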
Furthermore, we propose a high-level compact motion-pattern descriptor, temporal motion intensity of moving blobs (MIMB) moments, which exploits both spatial invariants and temporal features to characterize video sequences. The energy concentration property of the DCT again allows us to use only a few DCT coefficients to precisely capture the variations of moving blobs. Compared with the MPEG-7 motion activity descriptors RLD and SAH, the proposed descriptor yields average performance gains of 40% and 21%, respectively.
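The spatial-invariant ingredient of MIMB moments builds on moment invariants; as a simplified stand-in for the full descriptor, here is one translation-invariant moment of a 2-D motion intensity map (the first Hu invariant, eta20 + eta02):

```python
import numpy as np

def first_hu_moment(intensity):
    """First Hu moment invariant of a 2-D motion intensity map.

    Central moments remove dependence on position; the normalization
    mu_pq / mu00^(1 + (p+q)/2) removes dependence on overall scale of mass.
    """
    img = np.asarray(intensity, dtype=float)
    total = img.sum()  # mu00
    if total == 0:
        return 0.0
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    xbar = (xs * img).sum() / total   # centroid x
    ybar = (ys * img).sum() / total   # centroid y
    mu20 = (((xs - xbar) ** 2) * img).sum()
    mu02 = (((ys - ybar) ** 2) * img).sum()
    # p + q = 2, so the normalization exponent is 1 + 2/2 = 2.
    return mu20 / total ** 2 + mu02 / total ** 2
```

Because the moment is computed about the blob's own centroid, a blob produces the same value wherever it sits in the frame, which is exactly the property a motion-pattern descriptor needs before the temporal DCT stage is applied.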
Comprehensive experiments have been conducted to assess the performance of the proposed methods. The empirical results show that these methods outperform state-of-the-art methods on various datasets with different characteristics.
摘要………………………………………………………………………………….....i
Abstract………………………………………………………………………………iii
誌謝………………………………………………………………………………..….vi
Contents……………………………………………………………………………..vii
List of Figures………………………………………………………………………..xi
List of Tables…………………………………………………..…………………….xv
Chapter 1. Introduction……………………………………………………………...1
Chapter 2. Automatic Content Parsing and Semantic Event Identification for Sports Video Abstraction and Description
2.1 Introduction……………………………………………………………………..4
2.2 Overview of The System Architecture………………………………………….7
2.3 Video Segmentation and Shots Selection……………………………………….9
2.3.1 GOP-Based Video Segmentation…………………………………………9
2.3.2 Scene Identification……………………………………………………...10
2.4 Camera Motion Compensation………………………………………………...12
2.4.1 Adaptive Threshold Decision…………………………………………..12
2.4.2 Camera Motion Estimation………………………………………….....13
2.5 Events Detection and Description……………………………………………..16
2.5.1 Object Tracking Algorithm…………………………………………….17
2.5.1.1 Object Localization……………………………………………17
2.5.1.2 Object Tracking Forward and Backward………………………21
2.5.2 Events Inference Model………………………………………………...23
2.5.3 Event Description Scheme……………………………………………..28
2.6 Experimental Results and Discussion…………………………………………29
2.7 Summary………………………………………………………………………35
Chapter 3. Automatic Closed Caption Detection and Filtering in MPEG Videos for Video Structuring………………………………………………….38
3.1 Introduction……………………………………………………………………38
3.2 Shot Identification……………………………………………………………..41
3.2.1 Video Segmentation……………………………………………………..41
3.2.2 Shot Identification……………………………………………………….41
3.3 Closed Caption Localization…………………………………………………..44
3.3.1 Caption Frame Detection………………………………………………..45
3.3.2 Closed Caption Localization………………………………………….…48
3.3.3 Font Size Differentiation…………………………………………….…..52
3.4 Experimental Results and Visualization System………………………………57
3.4.1 Experimental Results………………………………………….…………57
3.4.2 The Prototype System of Video Content Visualization………….………63
3.5 Summary………………………………………………………………………66
Chapter 4. Motion Activity Based Shot Identification and Closed Caption Detection for Volleyball Video Structuring…………………………..67
4.1 Introduction……………………………………………………………………67
4.2 Video Segmentation…………………………………………………………...70
4.3 Shot Identification……………………………………………………………..71
4.3.1 Moving Object Detection………………………………………………..71
4.3.2 Motion Activity Descriptor – 2D Histogram…………………………….73
4.3.3 Shot Identification Algorithm……………………………………………75
4.4 Closed Caption Localization…………………………………………………..78
4.4.1 Localization of Superimposed Closed Captions………………………...78
4.4.2 Clustering-Based Noise Filtering………………………………………..81
4.5 Experimental Results and Analysis……………………………………………83
4.6 Summary………………………………………………………………………88
Chapter 5. Robust Video Sequence Retrieval Using A Novel Object-Based T2D-Histogram Descriptor………………………………………………90
5.1 Introduction……………………………………………………………………90
5.2 Overview of the Proposed Scheme……………………………………………92
5.3 Characterization of Video Segments…………………………………………..93
5.3.1 Moving Object Detection………………………………………………..94
5.3.2 Describing Motion Activity in a Video Segment………………………..95
5.4 Video Sequence Matching……………………………………………………..97
5.4.1 Discrete Cosine Transform………………………………………………97
5.4.2 Representation of Video Sequences……………………………………..97
5.4.3 Choice of Similarity Measure………………………………………….100
5.5 Experimental Results and Discussions……………………………………….102
5.5.1 Selecting Appropriate Number of DCT Coefficients…………………..103
5.5.2 Choosing an Appropriate Motion Activity Descriptor…………………106
5.5.3 Determining the Best Number of Histogram Bins……………………..108
5.5.4 Evaluation of Retrieval Performance…………………………………..109
5.6 Summary……………………………………………………………………..114
Chapter 6. Robust Video Similarity Retrieval Using Temporal MIMB Moments………………………………………………………………115
6.1 Introduction…………………………………………………………………..115
6.2 Characterization of Video Segments…………………………………………117
6.2.1 Detecting Moving Blobs in MPEG Videos…………………………….117
6.2.2 MIMB Moments………………………………………………………..118
6.2.3 Representing Temporal Variations of MIMB Moments………………..119
6.3 Experimental Results…………………………………………………………120
6.3.1 Choice of Similarity Measure………………………………………….120
6.3.2 Evaluation of Retrieval Performance…………………………………..121
6.4 Summary……………………………………………………………………..123
Chapter 7. Conclusions and Future Work……………………………………….124
7.1 Conclusions…………………………………………………………………..124
7.2 Future Work…………………………………………………………………..124
Reference…………………………………………………………………………...127
Electronic full text available (restricted to computers within the National Central Library).