National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 賴敬閔 (Ching-min Lai)
Title: The Study of Seamless Fusion GPS-VT from Outdoor to Indoor Cameras
Advisor: 廖珗洲 (Hsien-Chou Liao)
Degree: Master's
Institution: Chaoyang University of Technology
Department: Department of Information Engineering, Master's Program
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2011
Academic year of graduation: 99
Language: Chinese
Pages: 44
Keywords: moving object detection; multiple camera tracking; location-based service; intelligent surveillance system
Statistics:
  • Cited by: 0
  • Views: 465
  • Rating:
  • Downloads: 35
  • Bookmarked: 2
In recent years, cameras and surveillance systems have been widely installed in streets, communities, schools, high-rise buildings, and similar areas, especially in urban districts. Tracking moving objects is one of the most basic functions of a surveillance system. In our previous work, we developed a visual tracking system that combines GPS with image processing techniques (GPS-VT: Global Positioning System Visual Tracking). This system can locate and track a target object in outdoor environments according to its GPS coordinates. However, when the target enters an indoor environment, the system loses the GPS signal and becomes unusable. The goal of this study is therefore to provide a GPS-VT moving object tracking service that spans outdoor and indoor environments.
To achieve this goal, this study applies image processing techniques such as template matching, shadow removal, and occlusion detection to track the target continuously in an indoor multi-camera environment. Moreover, because the elapsed time from the target leaving the current surveillance area to entering the next camera's view is not a fixed value, the system proposes an automatic estimation mechanism that uses a Gaussian distribution function to estimate a reasonable elapsed-time interval for any pair of related cameras. A prototype implementation demonstrates that the proposed method extends the GPS-VT service from outdoor environments into indoor multi-camera environments, achieving stable and uninterrupted tracking.
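The elapsed-time estimation mechanism described above (Sections 3.3.1 and 3.3.2 of the thesis) can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual implementation: the function name, the 2-sigma acceptance bound, and the sample transit times are all hypothetical.

```python
import statistics

def elapsed_time_interval(samples, k=2.0):
    """Estimate an acceptable elapsed-time interval [mu - k*sigma, mu + k*sigma]
    from observed camera-to-camera transit times, iteratively rejecting samples
    that fall more than k standard deviations from the mean."""
    data = list(samples)
    while len(data) > 2:
        mu = statistics.mean(data)
        sigma = statistics.pstdev(data)
        kept = [t for t in data if abs(t - mu) <= k * sigma]
        if len(kept) == len(data):
            break  # no more outliers to reject
        data = kept
    mu = statistics.mean(data)
    sigma = statistics.pstdev(data)
    return mu - k * sigma, mu + k * sigma

# Hypothetical transit times (seconds) between two cameras; 12.7 simulates
# a person who stopped on the way and should be rejected as an outlier.
lo, hi = elapsed_time_interval([4.8, 5.1, 5.0, 4.9, 5.2, 12.7])
```

A target that reappears in the next camera within `[lo, hi]` would be treated as a plausible hand-off candidate; one outside the interval would not.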
Cameras and surveillance systems are widely installed in streets, communities, schools, buildings, and so on, especially in urban areas. Tracking a moving object is a common function of a surveillance system. In our previous work, the Global Positioning System (GPS) was incorporated with a visual tracking technique, called GPS-VT. A person can be located and tracked according to his GPS coordinates in an outdoor environment. However, the GPS-VT service becomes unavailable when the person walks into an indoor environment and loses the GPS signal. Therefore, the purpose of this study is to provide a seamless fusion of the GPS-VT service for persons across indoor and outdoor environments.
To achieve this purpose, template matching, shadow removal, and occlusion resolution techniques are used for tracking among multiple cameras in the indoor environment. In addition, the elapsed time between a person leaving the field of view (FOV) of one camera and entering the FOV of another is not a fixed value. An automatic estimation mechanism is therefore designed, in which a normal distribution function is used to estimate an acceptable interval for the elapsed time; such an interval is built for any two successive cameras. Finally, a prototype was implemented to demonstrate that the GPS-VT service can be extended from the outdoor environment to an indoor environment with multiple cameras.
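The histogram comparison used to locate the target among candidate blobs (Section 3.4.2 of the thesis) can be illustrated with a simple gray-level histogram intersection. This is a generic sketch, not the thesis's actual method: the function names, the bin count, and the toy pixel data are assumptions.

```python
def gray_histogram(pixels, bins=16):
    """Normalized gray-level histogram of 8-bit pixel values (0-255)."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy example: the target's histogram matches itself and is dissimilar
# to a mid-gray candidate blob.
target = gray_histogram([0, 10, 200, 255])
candidate = gray_histogram([128, 130, 140, 150])
```

In the locating phase, the target's stored histogram would be compared against each moving blob detected by the next camera, accepting the best match above a similarity threshold.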
Chinese Abstract I
Abstract II
Table of Contents VI
List of Tables VIII
List of Figures IX
Chapter 1 Introduction 1
Chapter 2 Literature Review 5
Chapter 3 System Design 12
3.1 Design Considerations of the Process Flow 12
3.2 System Operation Flow 12
3.3 Initialization Phase 13
3.3.1 Elapsed Time Estimation 14
3.3.2 Elapsed Time Outlier Rejection 15
3.4 Locating Phase 17
3.4.1 Moving Object Detection 17
3.4.2 Histogram Comparison Method 22
3.5 Tracking Phase 23
3.5.1 Occlusion Detection 23
3.5.2 Occlusion Resolution 24
3.6 Exception Handling 27
Chapter 4 Experimental Analysis 28
4.1 System Screenshots 28
4.2 Elapsed Time Estimation Experiments 29
4.3 Multi-Camera Tracking Demonstration 31
Chapter 5 Conclusions and Future Work 38
References 40
Appendix 1 Complete Target Tracking Data 43
List of Tables
Table 1: Example elapsed-time data 16
Table 2: Example of the elapsed-time outlier rejection process 16
Table 3: System development environment 28
Table 4: Correct elapsed times after manual verification 30
Table 5: Summary of seamless tracking results 35
Table 6: Statistics of system computation time 36
List of Figures
Figure 1: Operation of the GPS visual tracking service 2
Figure 2: Visual tracking scenario 3
Figure 3: Vehicle distance detection system combined with lidar 6
Figure 4: Pedestrian visual tracking method proposed by T. Miyaki et al. 7
Figure 5: Environment of the pedestrian detection method combined with a laser scanner 8
Figure 6: Example of the planar tracking correspondence model 9
Figure 7: Architecture of the vocabulary tree method 10
Figure 8: System operation flowchart 13
Figure 9: Example of the normal distribution of elapsed times 15
Figure 10: Histogram of elapsed-time statistics 15
Figure 11: Comparison of moving object detection 19
Figure 12: Background construction 20
Figure 13: Shadow removal 21
Figure 14: Brightness adjustment 23
Figure 15: Template matching for occlusion resolution, example 1 26
Figure 16: Template matching for occlusion resolution, example 2 26
Figure 17: System interface 29
Figure 18: Normal distribution of elapsed times 31
Figure 19: Target tracking demonstration 33
Figure 20: Camera installation environment 35
Figure 21: RFID visual tracking service 39
[1] A. Dick and M. Brooks, “A Stochastic Approach to Tracking Objects Across Multiple Cameras,” Australian Conference on Artificial Intelligence, pp. 160-170, 2004.
[2] A. Gilbert, R. Bowden, “Tracking Objects Across Cameras by Incrementally Learning Inter-camera Colour Calibration and Patterns of Activity,” Proc. European Conference Computer Vision, pp. 125-136, 2006.
[3] AForge.Net Framework:
http://code.google.com/p/aforge
[4] C. Stauffer and K. Tieu, “Automated Multi-camera Planar Tracking Correspondence Modeling,” Proc. of the IEEE Computer Vision and Pattern Recognition, pp. 259-266, July 2003.
[5] C. Zhang, J. Wu, and G. Tu, “Object Tracking and QOS Control using Infrared Sensor and Video Cameras,” Proc. of the IEEE International Conference on Networking, Sensing and Control, pp. 974-979, 2006.
[6] D. Makris, T. Ellis, and J. Black, "Bridging the Gaps between Cameras," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June, Washington DC, USA, pp. 205-210, 2004.
[7] F. Porikli and A. Divakaran, “Multi-camera Calibration, Object Tracking and Query Generation,” IEEE International Conference on Multimedia and Expo (ICME), Vol. 1, pp. 653-656, July 2003.
[8] H.C. Liao and P.T. Chu, “A Novel Visual Tracking Approach Incorporating Global Positioning System in a Ubiquitous Camera Environment,” Information Technology Journal, Vol. 8, No. 4, pp. 465-475, 2009.
[9] H.C. Liao and H.J. Wu, “Automatic Camera Calibration and Rectification Methods,” Measurement + Control Journal, Vol. 43, No. 8, pp. 251-254, Oct. 2010.
[10] H. Weigel, P. Lindner, and G. Wanielik, “Vehicle Tracking with Lane Assignment by Camera and Lidar Sensor Fusion,” Proc. of the 2009 Intelligent Vehicles Symposium, pp. 513-520, 2009.
[11] J. Black and T. Ellis, “Multi Camera Image Tracking,” Proc. of the IEEE International Workshop Performance Evaluation of Tracking and Surveillance, pp. 68–75, December 2001.
[12] J. Black, T. Ellis, and P. Rosin, “Multi View Image Surveillance and Tracking,” IEEE Workshop on Motion and Video Computing, pp. 169-174, Dec. 2002.
[13] J. Cui, H. Zha, H. Zhao, and R. Shibasaki, “Multi-modal Tracking of People using Laser Scanners and Video Camera,” Image and Vision Computing, Vol. 26, No. 2, pp. 240-252, 2008.
[14] J.M. Choi, Y.J. Yoo, and J.Y. Choi, “Adaptive Shadow Estimator for Removing Shadow of Moving Object,” Computer Vision and Image Understanding, Vol. 114, No. 9, pp. 1017-1029, September 2010.
[15] L.F. Teixeira and L. Corte-Real, “Video Object Matching Across Multiple Independent Views using Local Descriptors and Adaptive Learning,” Pattern Recognition Letters, Vol. 30, No. 2, pp. 157-167, January 2009.
[16] O. Javed, K. Shafique, and M. Shah, “Appearance Modeling for Tracking in Multiple Non-overlapping Cameras,” IEEE Conf. of Computer Vision and Pattern Recognition (CVPR), Vol. 2, pp. 26-33, 2005.
[17] S. Khan, O. Javed, Z. Rasheed, and M. Shah, “Human Tracking in Multiple Cameras,” Proc. of Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vol. 1, pp. 331-336, 2001.
[18] T. Miyaki, T. Yamasaki, and K. Aizawa, “Visual Tracking of Pedestrians Jointly using Wi-Fi Location System on Distributed Camera Network,” Proc. of the IEEE International Conference on Multimedia and Expo, pp. 1762-1765.