Author: 劉杰宇
Author (English): Liu, Chieh-Yu
Title (Chinese): 在非重疊視角多相機下以動態程序及隱藏式馬可夫模型為核心之人員活動串接記錄技術
Title (English): Human Activity Linkage Recording using Dynamic Programming and Hidden Markov Models for Multiple Cameras with Disjoint Views
Advisors: 黃仲陵、張意政
Advisors (English): Huang, Chung-Lin; Chang, I-Cheng
Degree: Master's
Institution: 國立清華大學 (National Tsing Hua University)
Department: 電機工程學系 (Department of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2009
Graduation academic year: 98
Language: English
Pages: 72
Keywords (Chinese): 多相機、人員活動串接紀錄、隱藏式馬可夫模型
Keywords (English): multiple cameras; human activity linkage; Hidden Markov Model
Record statistics: Cited by: 0 · Views: 196 · Downloads: 20 · Bookmarked: 0
In recent years, with crime and other social incidents occurring one after another, security has drawn increasing public attention, driving the rapid development and spread of surveillance systems. In daily life, surveillance systems can be seen in homes, airports, train stations, department stores, and many other places; however, conventional cameras can no longer satisfy modern demands for convenience, and intelligent surveillance systems have emerged in response.
Object tracking is a popular topic in computer vision. Especially in crowded environments, tracking and recording the path of a specific person with a limited number of cameras becomes a major challenge. Most previous work is camera-based: each camera tracks people within a fixed region, so the available information is confined to a single camera. Our system instead integrates the information obtained from all cameras into a person-centered database, from which the movement path of any tracked person can be retrieved. This thesis trains a set of state-transition probability models from recorded video and uses them to model the temporal and spatial relationships between the exit/entry zones of the cameras; these spatiotemporal relationships, combined with the color similarity between persons, are used to establish correspondences. In addition, we use a dynamic programming algorithm to search backward for more related information and re-match, so that a person lost because of erroneous information can be re-linked and tracking can continue. The movement path of each tracked person is stored in the database; Hidden Markov Models are used to identify regular paths and to correct the linkage records of irregular paths, resolving some of the linkage errors caused by occlusion or color change.
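The spatiotemporal relationship between exit/entry zones described in the abstract can be pictured as a learned distribution of transit times per zone pair. The sketch below is only an illustration of that idea, not the author's implementation; the zone names, tuple layout, and fixed bin width are hypothetical choices.

```python
from collections import defaultdict

def learn_transition_times(observations, bin_width=1.0):
    """Estimate, for each (exit_zone, entry_zone) pair, a normalized
    histogram of transit times from observed crossings.

    observations: list of (exit_zone, entry_zone, transit_time) tuples.
    Returns {(exit_zone, entry_zone): {time_bin: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for exit_zone, entry_zone, t in observations:
        counts[(exit_zone, entry_zone)][int(t // bin_width)] += 1
    model = {}
    for pair, hist in counts.items():
        total = sum(hist.values())
        model[pair] = {b: c / total for b, c in hist.items()}
    return model

# Example: three people observed crossing from camera A's exit
# zone to camera B's entry zone, with different transit times.
obs = [("A_exit", "B_entry", 4.2),
       ("A_exit", "B_entry", 4.8),
       ("A_exit", "B_entry", 9.1)]
model = learn_transition_times(obs, bin_width=1.0)
```

At matching time, a candidate correspondence whose observed transit time falls in a low-probability bin for its zone pair would be penalized, while the color similarity cue supplies the complementary evidence.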
Recently, the study of human object tracking has shifted from camera-based representation to object-based representation, because object-based representation helps people trace human behavior and search for abnormal conditions effectively.
This thesis proposes an automatic object-based tracking system using a distributed multiple-camera system with non-overlapping viewing ranges. The goal of tracking across multiple cameras with disjoint views is to establish a set of correspondences between observations of objects across the cameras. Two visual cues, a spatiotemporal cue and an appearance cue, are used for tracking human objects across cameras. To learn the relationships among cameras, we use a batch-learning procedure and constantly update all probability matrices for long-term monitoring. We also improve the correspondence of the appearance cue by color calibration among different cameras.
Under certain conditions, the tracking of human objects may be lost due to lighting variation, unusual behavior, or slight color changes of clothes in different camera views. The proposed work uses a dynamic programming algorithm to track backward along the spatiotemporal relationships and gather more information, which is then used to repair missing links in the tracking path. Hidden Markov Models are further used to verify abnormal paths of human objects across multiple cameras according to the training data. Experimental results show the efficiency of the proposed method.
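The appearance cue above rests on comparing color histograms of the same person seen in different cameras (Section 3.1.1). The following sketch uses the Bhattacharyya coefficient as the similarity measure; that is a common choice for histogram comparison and an assumption here, not necessarily the measure used in the thesis. The pixel values and bin count are likewise illustrative.

```python
import math

def color_histogram(pixels, bins=8):
    """Quantize RGB pixels into a normalized joint color histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms in [0, 1]; 1 = identical."""
    return sum(math.sqrt(p * q) for p, q in zip(h1, h2))

# The same red shirt seen in two cameras should score higher
# than a comparison against a person in blue clothing.
person_cam1 = [(200, 30, 30)] * 50   # reddish pixels, camera 1
person_cam2 = [(198, 30, 28)] * 50   # near-identical red, camera 2
stranger    = [(20, 30, 200)] * 50   # blue clothing
h1, h2, h3 = (color_histogram(p) for p in (person_cam1, person_cam2, stranger))
```

In practice the histograms would be computed after the inter-camera color calibration step, so that a shirt rendered differently by two sensors still lands in nearby bins.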
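The HMM-based path verification above can be sketched with standard Viterbi scoring: a low best-path log-likelihood flags an irregular camera sequence. The thesis does not spell out its decoding procedure, so Viterbi is an assumption here, and the two-state model, camera labels, and probabilities below are entirely hypothetical.

```python
import math

def viterbi_log_likelihood(obs, states, log_start, log_trans, log_emit):
    """Log-probability of the best hidden-state sequence explaining
    `obs` under an HMM; low values flag irregular camera paths."""
    # Initialize with start probabilities and the first emission.
    v = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        # For each state, keep the best predecessor, then emit.
        v = {s: max(v[p] + log_trans[p][s] for p in states) + log_emit[s][o]
             for s in states}
    return max(v.values())

# Hypothetical two-zone HMM: people normally alternate between the
# regions covered by camera A and camera B.
states = ["A", "B"]
log_start = {"A": math.log(0.9), "B": math.log(0.1)}
log_trans = {"A": {"A": math.log(0.2), "B": math.log(0.8)},
             "B": {"A": math.log(0.8), "B": math.log(0.2)}}
log_emit = {"A": {"camA": math.log(0.9), "camB": math.log(0.1)},
            "B": {"camA": math.log(0.1), "camB": math.log(0.9)}}

regular = viterbi_log_likelihood(["camA", "camB", "camA"],
                                 states, log_start, log_trans, log_emit)
odd = viterbi_log_likelihood(["camA", "camA", "camA"],
                             states, log_start, log_trans, log_emit)
# The alternating path scores higher than the path that never moves,
# so the latter would be queued for linkage correction.
```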
CONTENTS
Chapter 1 Introduction
1.1 Motivation
1.2 Related Works
1.3 System Overview
1.4 Organization of the Thesis
Chapter 2 Pre-processing and Feature Extraction for Single Camera Capturing
2.1 Foreground Extraction
2.1.1 Background Subtraction
2.1.2 Morphological Filtering
2.1.3 Labeling
2.2 Entry/Exit Zone Identification
2.2.1 Gaussian Mixture Model
2.2.2 Expectation Maximization Algorithm
Chapter 3 Object-based People Tracking across Multiple Cameras
3.1 Observation Model
3.1.1 Color Histogram
3.1.2 Color Calibration
3.2 Learning Camera Network Topology
3.3 Correspondence Analysis
Chapter 4 Missing Object Tracking across Multiple Cameras
4.1 Missing Object Tracking using Dynamic Programming Algorithm
4.2 Occluded Object Recognition
4.3 Path Modification using Hidden Markov Model
4.3.1 Regular Path Establishment for Normal Condition
4.3.2 Regular Path Recognition
4.3.3 Correspondence Analysis
Chapter 5 Experimental Results
5.1 MFC Interface
5.2 Single Person Tracking
5.3 Multiple Persons Tracking
Chapter 6 Conclusions
References
[1] J. Black, T. Ellis, and D. Makris, “Wide Area Surveillance with a Multi-Camera Network,” In Proc. IDSS-04 Intelligent Distributed Surveillance Systems, pp. 21-25, 2003.
[2] T. Chang and S. Gong, “Tracking Multiple People with a Multi-Camera System,” In IEEE Workshop on Multi-Object Tracking, 2001.
[3] S. L. Dockstader and A. M. Tekalp, “Multiple Camera Fusion for Multi-object Tracking,” In IEEE Workshop on Multi-Object Tracking, 2001.
[4] L. Lee, R. Romano, and G. Stein, “Monitoring Activities from Multiple Video Streams: Establishing a Common Coordinate Frame,” IEEE Trans. Pattern Analysis and Machine Intelligence, 22(8), pp. 758-768, Aug. 2000.
[5] S. Khan, O. Javed, Z. Rasheed, and M. Shah, “Human Tracking in Multiple Cameras,” In Proceedings of ICCV, 2001.
[6] Q. Cai and J. K. Aggarwal, “Tracking Human Motion in Structured Environments Using a Distributed-Camera System,” IEEE Trans. Pattern Analysis and Machine Intelligence, 21(11), pp. 1241-1247, Nov. 1999.
[7] S. Khan and M. Shah, “Consistent Labeling of Tracked Objects in Multiple Cameras with Overlapping Fields of View,” IEEE Trans. Pattern Analysis and Machine Intelligence, 25(10), pp. 1355-1360, Oct. 2003.
[8] C. Stauffer and K. Tieu, “Automated Multi-camera Planar Tracking Correspondence Modeling,” In IEEE International Conference on Computer Vision, 2003.
[9] V. Kettnaker and R. Zabih, “Counting People from Multiple Cameras,” In IEEE International Conference on Multimedia Computing and Systems, Florence, Italy, pp. 267-271, 1999.
[10] V. Kettnaker and R. Zabih, “Bayesian Multi-camera Surveillance,” In IEEE Conference on Computer Vision and Pattern Recognition, 1999.
[11] F. Porikli and A. Divakaran, “Multi-Camera Calibration, Object Tracking and Query Generation,” In IEEE International Conference on Multimedia and Expo, 2003.
[12] O. Javed, Z. Rasheed, K. Shafique, and M. Shah, “Tracking across Multiple Cameras with Disjoint Views,” In IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[13] A. Dick and M. Brooks, “A Stochastic Approach to Tracking Objects across Multiple Cameras,” In Australian Conference on Artificial Intelligence, pp.160-170, 2004.
[14] T. J. Ellis, D. Makris, and J. Black, “Learning a Multi-Camera Topology,” In Joint IEEE Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS), pp. 165-171, 2003.
[15] C. Stauffer, “Learning to Track Objects through Unobserved Regions,” In IEEE Computer Society Workshop on Motion and Video Computing, pp. 96-102, 2005.
[16] K. Tieu, G. Dalley, and W. Grimson, “Inference of Non-overlapping Camera Network Topology by Measuring Statistical Dependence,” In Proceeding of IEEE International Conference on Computer Vision, pp. 1842-1849, 2005.
[17] A. Gilbert and R. Bowden, “Tracking Objects Across Cameras by Incrementally Learning Inter-camera Colour Calibration and Patterns of Activity,” In Proc. European Conference on Computer Vision, pp. 125-136, 2006.
[18] N. Yunyoung, R. Junghun, C. Yoo-Joo, and C. We-Duke, “Learning Spatio-Temporal Topology of Multi-Camera Network by Tracking Multiple People,” In Proceedings of World Academy of Science, Engineering, and Technology, Vol. 24, Oct 2007.
[19] D. L. Ruderman, T. W. Cronin, and C. C. Chiao, “Statistics of Cone Responses to Natural Images: Implications for Visual Coding,” J. Optical Soc. of America, vol. 15, no. 8, pp. 2036-2045, 1998.