Researcher: 劉成祥 (Cheng-Hsiang Liu)
Thesis title: 利用視訊資料作人體走勢分析 (Human Gait Classification Using Video Information)
Advisor: 范國清 (Kuo-Chin Fan)
Degree: Master's
Institution: National Central University
Department: Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Thesis type: Academic thesis
Year of publication: 2002
Graduation academic year: 90 (ROC calendar)
Language: Chinese
Pages: 74
Keywords (Chinese): 追蹤、偵測、視訊監控、走勢、生物特徵
Keywords (English): tracking, detection, video surveillance, gait, biometric

Over the past few decades, most surveillance environments have relied on closed-circuit television systems, whose main function is merely the passive recording of evidence. Such systems cannot actively provide detection information at the time of recording, so many of the best opportunities for solving cases are lost. With advances in digital audio-visual technology, falling prices for mass storage, and lower costs for optical imaging equipment, digital processing of video signals has become widespread. Together with the growing maturity of artificial intelligence techniques, intelligent video surveillance and monitoring (VSAM) systems have attracted increasing public attention; more importantly, intelligent surveillance systems meet public needs better than traditional ones. Intelligent surveillance systems will therefore gradually play an important role in building and campus security.
Research on traditional video surveillance systems has focused on moving-object detection, tracking, and behavior analysis, without going further into identifying the tracked objects. We therefore aim to incorporate biometric recognition techniques so that a traditional video surveillance system can also analyze and identify the people appearing in the video. Biometric recognition exploits unique physiological or behavioral characteristics of human beings to confirm each person's identity. In this thesis, we develop an intelligent surveillance and recognition system designed to work together with biometric features such as fingerprints, palm prints, faces, and gestures.
In the proposed video surveillance system, the positions and trajectories of moving targets are first detected and tracked in the image sequence, and the targets' behavior patterns and walking postures are then analyzed; human gait features are therefore taken as the subject of this study. The system is first designed for indoor, controlled environments, where it detects and tracks people in the video and extracts their biometric features for analysis and recognition. It is then extended to more complex outdoor environments for monitoring and identifying moving targets. Experimental results confirm the feasibility of the proposed integrated system.


Closed-circuit television (CCTV) has been used in place of human eyes for the past few decades, but its main function is only the recording of events. A caretaker has to watch ten or twenty monitors attentively and simultaneously, day and night, to prevent illegal entries, which is an intense burden for a human being to bear. Recently, computer-based cameras have come into wide use because of their low cost and vast storage. More importantly, technologies in artificial intelligence, video processing, and pattern recognition have been successfully developed for digital video signals. Thus, intelligent video surveillance and monitoring (VSAM) systems are gradually becoming a key component of the security systems of buildings, companies, and campuses.
Conventionally, motion detection, target tracking, and target classification are the main research topics in existing VSAM systems. However, the ultimate goal of a surveillance system is to identify the tracked objects, for example the identities of individuals. In this thesis, we develop an intelligent VSAM system whose identification power is increased by using biometric features such as fingerprint, palm print, face, and gesture.
In the proposed system, moving individuals are first detected in the video streams, and motion detection and target tracking are accomplished. Finally, the classification of persons is achieved using biometric gait features. The system is first implemented in an indoor, controlled environment and is then extended to more complex environments, such as outdoor scenes with cluttered backgrounds. Experimental results verify the validity of the proposed system.
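
The abstract describes a three-stage pipeline: motion detection, target tracking, and gait classification with hidden Markov models (Chapter 4 of the table of contents). The sketch below illustrates that pipeline in broad strokes only; it is not the thesis's implementation. It assumes OpenCV's MOG2 background subtractor in place of the thesis's own background model, a trivial largest-region "tracker", a toy bounding-box descriptor instead of the optical-flow gait features, and hmmlearn for the per-class HMMs; the file name and all parameter values are hypothetical.

```python
# Minimal sketch of the detect -> track -> gait-classify pipeline described above.
# Assumptions (not from the thesis): OpenCV's MOG2 subtractor, a largest-box
# single-target simplification, toy bounding-box features, and hmmlearn HMMs.
import cv2
import numpy as np
from hmmlearn.hmm import GaussianHMM

def detect_moving_targets(frame, subtractor, min_area=500):
    """Return bounding boxes of moving regions found by background subtraction."""
    fg = subtractor.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels (value 127)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def gait_features(box):
    """Toy per-frame descriptor: bounding-box aspect ratio and width.
    The thesis derives richer gait features from optical flow; this is a placeholder."""
    x, y, w, h = box
    return [w / float(h), float(w)]

def classify_gait(feature_seq, hmms):
    """Pick the gait class whose HMM assigns the sequence the highest log-likelihood."""
    X = np.asarray(feature_seq)
    return max(hmms, key=lambda label: hmms[label].score(X))

# --- usage sketch -----------------------------------------------------------
# hmms = {"walking": GaussianHMM(n_components=5).fit(walk_training_features), ...}
cap = cv2.VideoCapture("corridor.avi")          # hypothetical input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
sequence = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes = detect_moving_targets(frame, subtractor)
    if boxes:                                   # keep the largest region per frame
        sequence.append(gait_features(max(boxes, key=lambda b: b[2] * b[3])))
cap.release()
# label = classify_gait(sequence, hmms)         # requires HMMs trained per gait class
```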


Abstract ... i
Chinese Abstract (摘要) ... ii
Table of Contents ... 3
List of Figures ... 4
List of Tables ... 5
Chapter 1  Introduction ... 6
  1.1  Research Motivation ... 6
  1.2  Related Work ... 7
  1.3  System Flow ... 10
  1.4  Thesis Organization ... 12
Chapter 2  Moving Object Detection and Tracking ... 13
  2.1  Object Detection ... 14
  2.2  Object Tracking ... 17
  2.3  The Shadow Problem ... 19
Chapter 3  Optical Flow Detection Techniques ... 25
  3.1  Optical Flow Detection ... 26
  3.2  Sub-pixel Block Matching ... 27
Chapter 4  Human Gait Analysis and Classification ... 32
  4.1  Gait Feature Extraction ... 33
  4.2  Hidden Markov Models ... 37
Chapter 5  Experimental Results ... 46
  5.1  Video Surveillance ... 48
  5.2  Gait Analysis ... 59
Chapter 6  Conclusion and Future Work ... 66
  6.1  Conclusion ... 66
  6.2  Future Work ... 67
References ... 68


