National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Author: 陳柏桐
Author (English): Po-Tung Chen
Title: 多監控攝影機在重疊與非重疊視野區域的人員追蹤技術
Title (English): People Tracking Technology in a Multi-Camera Environment with Overlap and Non-Overlap Region
Advisor: 林春宏
Advisor (English): Chuen-Horng Lin
Degree: Master's
Institution: National Taichung University of Science and Technology (國立臺中科技大學)
Department: Master's Program, Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2015
Academic year of graduation: 103
Language: Chinese
Number of pages: 54
Keywords (Chinese): 視訊安全監控系統、適應性CamShift演算法、重疊/非重疊視野區域、人員追蹤
Keywords (English): video security surveillance system, adaptive CamShift algorithm, overlap and non-overlap region, people tracking
Usage statistics:
  • Cited by: 0
  • Views: 282
  • Rating: (none)
  • Downloads: 26
  • Bookmarked: 0
This study focuses on people tracking across multiple surveillance cameras whose fields of view may or may not overlap. The scene observed by a video security surveillance system is not covered by isolated, independent cameras; the cameras monitor the scene in a complementary manner. For cost reasons, however, such systems are usually installed with only a few cameras whose fields of view overlap and many more whose fields of view do not. The automatic detection, tracking, identification, and analysis across multiple cameras studied in this thesis are therefore treated separately for overlapping and non-overlapping regions.
The tracking workflow first calibrates each camera with a calibration board, then builds a Gaussian mixture model (GMM) of the background to detect people, and tracks each detected person with an adaptive CamShift algorithm. Based on the topological relations among the cameras, the fields of view are divided into overlapping and non-overlapping types, and the topology for each type is constructed. To keep tracking a person across cameras, a template color histogram of the tracked person is built and compared, by a similarity measure, with the color histograms of people tracked by the other cameras; this comparison identifies the person. A complete intelligent multi-camera tracking system can thus be built for both overlapping and non-overlapping fields of view.
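As a rough illustration of the single-camera stage just described (a GMM background for detection, adaptive CamShift for tracking), a minimal OpenCV sketch might look as follows. This is not the thesis's implementation: the video file name, the shadow threshold, and the blob-area threshold are placeholder assumptions, and only one target is tracked to keep the sketch short.

```python
import cv2

# Minimal sketch: GMM background subtraction to detect a person, then
# adaptive CamShift tracking on a hue back-projection of that person.
cap = cv2.VideoCapture("camera1.avi")                         # hypothetical input clip
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)   # GMM background model
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

track_window = None   # (x, y, w, h) of the person currently being tracked
roi_hist = None       # hue histogram used as the CamShift target model

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    if track_window is None:
        # Detection: foreground mask from the GMM, keep the largest blob.
        fg = bg.apply(frame)
        fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels (value 127)
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > 1500]  # area threshold is a guess
        if blobs:
            x, y, w, h = cv2.boundingRect(max(blobs, key=cv2.contourArea))
            track_window = (x, y, w, h)
            roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [16], [0, 180])
            cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    else:
        # Tracking: back-project the person's hue histogram and run CamShift,
        # which adapts the search window size and orientation every frame.
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.CamShift(back_proj, track_window, term)
        x, y, w, h = track_window
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("single-camera tracking", frame)
    if cv2.waitKey(30) == 27:      # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()
```

The thesis extends this single-target loop to multiple people per camera and to multiple cameras linked by their topology; the sketch only shows the per-camera building blocks.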
To verify the practicality of the proposed method, two different sets of video sequences were used for the experimental analysis: the surveillance videos for the overlapping regions were recorded by the author, while those for the non-overlapping regions were taken from the MCT Database. The proposed technique was evaluated on multi-person tracking with a single camera, topology estimation for overlapping and for non-overlapping camera networks, and multi-camera people tracking in overlapping and in non-overlapping regions. The overall results show that tracking accuracy is not affected by differences in camera model and specification, inconsistent image sizes, lighting, viewing angle, pose, or non-uniform clothing, and that people are tracked correctly.


In this study, people tracking is performed in a multi-camera environment with overlapping and non-overlapping regions. The scene observed by a video security surveillance system is not a set of independent, isolated camera views; the cameras instead conduct surveillance in a complementary way. Due to cost considerations, however, such a system is usually installed with only a few cameras whose fields of view overlap, and with many more cameras whose fields of view do not. The automatic detection, tracking, identification, and analysis across multiple cameras studied in this thesis are therefore divided into the overlapping and the non-overlapping case.
In the actual process, a calibration board is first used to calibrate each camera; the background is then modeled with a Gaussian mixture model (GMM) to detect people, and each person is tracked with an adaptive CamShift algorithm. Next, according to the topological relations among the cameras, the regions are divided into overlapping and non-overlapping types, and the topology for each type is constructed. In order to track people continuously across cameras, a template color histogram of each tracked person is built and compared, by a similarity measure, against the color histograms of people tracked by the other cameras; this comparison serves as the identification step. Under both overlapping and non-overlapping regions, a complete intelligent multi-camera tracking system can therefore be set up.
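As a rough illustration of the identification step just described, the sketch below matches a tracked person's template color histogram against crops of people seen by another camera. It is only a sketch under assumed choices: the hue-saturation binning, the use of the Bhattacharyya distance, the distance threshold, and the image file names are illustrative assumptions, not details fixed by the abstract.

```python
import cv2

def color_template(person_bgr):
    """Normalized hue-saturation histogram of a cropped person image (BGR input)."""
    hsv = cv2.cvtColor(person_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    return hist

def identify_person(template_hist, candidate_crops, max_distance=0.4):
    """Match a template histogram against person crops from another camera.
    Returns the best-matching person id, or None when every Bhattacharyya
    distance exceeds max_distance (the threshold value is an assumption)."""
    best_id, best_dist = None, max_distance
    for person_id, crop in candidate_crops.items():
        dist = cv2.compareHist(template_hist, color_template(crop),
                               cv2.HISTCMP_BHATTACHARYYA)
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id

# Example: the person tracked by camera 1 reappears in camera 2's view.
cam1_crop = cv2.imread("cam1_person.png")                  # hypothetical crops
cam2_crops = {7: cv2.imread("cam2_person7.png"),
              9: cv2.imread("cam2_person9.png")}
print(identify_person(color_template(cam1_crop), cam2_crops))
```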
To verify the practicality of this research method, two different sets of video sequences were adopted for the experimental analysis: the surveillance videos of the overlapping regions were recorded by the author, and those of the non-overlapping regions come from the MCT Database. For the proposed people tracking technique, whether in multi-person tracking with a single camera, in the multi-camera topological relations of overlapping or non-overlapping regions, or in multi-camera people tracking in overlapping or non-overlapping regions, the overall results and tracking accuracy were not affected by differences in camera model and specification, inconsistent image sizes, lighting, viewing angle, pose, or non-uniform clothing, and people were tracked correctly.


Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1  Introduction
1.1 Research Background
1.2 Research Motivation
1.3 Literature Review
1.4 Research Objectives
1.5 Thesis Organization
Chapter 2  Related Work
2.1 Overview
2.2 Camera Calibration
2.3 Topology
2.4 Object Recognition
2.4.1 Color
2.4.2 Texture
2.4.3 Shape
2.5 Multi-Camera Tracking
2.5.1 Calibration-Based Multi-Camera Object Tracking in Overlapping Regions
2.5.2 Multi-Camera Object Tracking in Non-Overlapping Regions Based on Spatio-Temporal and Appearance Cues
2.6 Object Activity Analysis
Chapter 3  Proposed Method
3.1 System Architecture and Workflow
3.2 Camera Calibration
3.2.1 Camera Intrinsic Parameters
3.2.2 Camera Extrinsic Parameters
3.3 Geometric Transformation Between Cameras
3.3.1 The Planar Projective Transformation (Homography) Matrix
3.3.2 Computing the Homography Matrix
3.3.3 Evaluating the Homography Matrix
3.4 Multi-Person Tracking with a Single Camera
3.4.1 Background Modeling with a Gaussian Mixture Model
3.4.2 Single-Person Detection
3.4.3 Single-Person Tracking Based on Adaptive CamShift
3.4.4 Multi-Person Tracking
3.5 Topological Relations Among Multiple Cameras
3.5.1 Topology Estimation for Overlapping Regions
3.5.2 Topology Estimation for Non-Overlapping Regions
3.6 People Identification
3.6.1 People Feature Recognition
3.7 Multi-Camera People Tracking
3.7.1 Multi-Camera People Tracking in Overlapping Regions
3.7.2 Multi-Camera People Tracking in Non-Overlapping Regions
Chapter 4  Experimental Results
4.1 Experimental Surveillance Video Database
4.1.1 Acquisition and Assumptions of the Experimental Surveillance Videos
4.1.1.1 Surveillance Videos for Overlapping Regions
4.1.1.2 Surveillance Videos for Non-Overlapping Regions
4.1.2 Experimental Environment
4.2 People Tracking in Overlapping Regions
4.2.1 Experimental Images: Multi-Person Tracking with a Single Camera
4.2.2 Experimental Images: SURF Detection of Distinctive Feature Points with a Single Camera
4.2.3 Experimental Images: Multi-Camera Topological Relations in Overlapping Regions
4.2.4 Experimental Images: Multi-Camera People Tracking in Overlapping Regions
4.3 People Tracking in Non-Overlapping Regions
4.3.1 Experimental Images: Multi-Person Tracking with a Single Camera
4.3.2 Experimental Images: Detection of Entry and Exit Zones with a Single Camera
4.3.3 Experimental Images: Multi-Camera Topological Relations in Non-Overlapping Regions
4.3.4 Experimental Images: Multi-Camera People Tracking in Non-Overlapping Regions
Chapter 5  Conclusions and Future Work
References


[1]C. Stauffer and K. Tieu, “Automated multi-camera planar tracking correspondence modeling”, In CVPR, 2003.
[2]O. Faugeras, “Three Dimensional Computer Vision: A Geometric Viewpoint”, MIT Press, 1993.
[3]B. Triggs, “Camera pose and calibration from 4 or 5 known 3d points”, In: Proc. IEEE Internat Conf. Computer Vision, 1999.
[4]G. A. Jones, J. R. Renno, P. Remagnino, “Auto-calibration in multiple-camera surveillance environments”, In: Proc. Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, 2002.
[5]R.I. Hartley, A. Zisserman, “Multiple View Geometry in Computer Vision”, Cambridge University Press, 2004.
[6]M. Brown, D. Lowe, “Recognising panoramas”, In: Proc. IEEE Internat Conf. Computer Vision, 2003.
[7]P. Baker, Y. Aloimonos, “Calibration of a multicamera network”, In: Proc. Omnivis 2003: Omnidirectional Vision and Camera Networks, 2003.
[8]J. Jannotti, J. Mao, “Distributed calibration of smart cameras”, In: Proc. Workshop on Distributed Smart Cameras, 2006.
[9]C. Harris, M. Stephens, “A combined corner and edge detector”, In: Proc. Alvey Vision Conference, 1988.
[10]D. Lowe, “Distinctive image features from scale-invariant keypoints”, Internat. J. Comput. Vision, vol. 60 (2), pp. 91–110, 2004.
[11]H. Bay, T. Tuytelaars, L. V. Gool, “SURF: Speeded up robust features”, In: Proc. European Conf. Computer Vision, 2006.
[12]W. Hu, T. Tan, L. Wang and S. Maybank, “A survey on visual surveillance of object motion and behaviors”, IEEE Systems, Man, and Cybernetics Society, vol. 34 (3), pp. 334–352, 2004.
[13]B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision”, International Joint Conference on Artificial Intelligence, pp. 674-679, 1981.
[14]B. K. P. Horn and B. G. Schunck, “Determining optical flow”, Artificial Intelligence, vol. 17, no. 1-3, pp. 185-203, August 1981.
[15]P. Kaewtrakulpong and R. Bowden, “An improved adaptive background mixture model for real-time tracking with shadow detection”, In Proceedings of European Workshop Advanced Video Based Surveillance Systems, 2001.
[16]S. C. Jeng, “A GMM-based method for dynamic background image model construction with shadow removal”, Master's thesis, National Chiao-Tung University, Electrical and Computer Engineering, June, 2005.
[17]A. Dempster, N. Laird, and D. Rubin, “Maximum likelihood from incomplete data via the EM algorithm”, Journal of the Royal Statistical Society, vol. 39, no. 1, pp. 1-38, 1977.
[18]C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking”, IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 246-252, 1999.
[19]C. H. Lin, Y. K. Chan and C. C. Chen, “Detection and segmentation of cervical cell cytoplast and nucleus”, International Journal of Imaging Systems and Technology, vol. 19, pp. 260-270, 2009.
[20]C. H. Lin and Y. J. Syu, “Fast segmentation of porcelain images based on texture features”, Journal of Visual Communication and Image Representation, vol. 21, pp. 707-721, 2010.
[21]C. H. Lin and C. C. Chen, “Image segmentation based on edge detection and region growing for thinprep-cervical smear”, International Journal of Pattern Recognition and Artificial Intelligence, vol. 24(7), pp. 1061-1089, 2010.
[22]D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, May 2002.
[23]Z. Han, Q. Ye, J. Jiao, “Online feature evaluation for object tracking using Kalman filter”, International Conference on Pattern Recognition, 2008.
[24]L. Bazzani, M. Cristani and V. Murino, “Decentralized particle filter for joint individual-group tracking”, IEEE Computer Vision and Pattern Recognition, 2012.
[25]O. Zoidi, A. Tefas and I. Pitas, “Visual object tracking based on local steering kernels and color histograms”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 870-882, May 2013.
[26]B. Karasulu, S. Korukoglu, “Moving object detection and tracking by using annealed background subtraction method in videos: performance optimization”, Expert Systems with Applications, vol. 39, pp. 33–43, 2012.
[27]H. Naeem, J. Ahmad and M. Tayyab, “Real-time object detection and tracking”, The 16th International Multi Topic Conference, pp. 148–153, Dec. 2013.
[28]N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection”, IEEE Computer Vision and Pattern Recognition, vol. 1, pp. 886–893, 2005.
[29]G. Shu, A. Dehghan, O. Oreifej, E. Hand and M. Shah, “Part-based multiple-person tracking with partial occlusion handling”, IEEE Computer Vision and Pattern Recognition, pp. 1815-1821, 2012.
[30]S. Khan and M. Shah, “Consistent labeling of tracked objects in multiple cameras with overlapping fields of view”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1355-1360, Oct. 2003.
[31]S. Khan and M. Shah, “A multiview approach to tracking people in crowded scenes using a planar homography constraint”, In: Proc. European Conf. Computer Vision, 2006.
[32]M. Liem and M. Gavrila, “Multi-person tracking with overlapping cameras in complex, dynamic environments”, In Proc. BMVC, 2009.
[33]X. Chen, K. Huang, T. Tan, “Object tracking across non-overlapping views by learning inter-camera transfer models”, Pattern Recognition, vol. 47, pp. 1126-1137, 2014.
[34]O. Javed, Z. Rasheed, K. Shafique, and M. Shah, “Tracking across Multiple Cameras with Disjoint Views”, IEEE Conference on Computer Vision, pp. 952-957, Oct. 2003.
[35]V. Kettnaker and R. Zabih, “Bayesian multi-camera surveillance”, In: Proceedings of the Computer Vision and Pattern Recognition, 1999.
[36]M.O. Mehmood, “Multi-camera based human tracking with non-overlapping fields of view”, Application of Information and Communication Technologies, pp. 1-6, Oct. 2009.
[37]X. Wang, “Intelligent multi-camera video surveillance: a review”, Pattern Recognition Letters, vol. 34, pp. 3–19, 2013.
[38]R. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses”, IEEE Journal of Robotics and Automation, vol. RA-3, no. 4, pp. 323-344, August 1987.
[39]S. Bougnoux, “From projective to euclidean space under any practical situation, a criticism of self-calibration”, Proceedings of Sixth IEEE International Conference on Computer Vision, pp. 790-796, 1998.
[40]R. J. Radke, “A survey of distributed computer vision algorithms”, in Handbook of Ambient Intelligence and Smart Environments, Springer US, pp. 35–55, 2010.
[41]A. Van Den Hengel, A. Dick, H. Detmold, A. Cichowski and R. Hill, “Finding camera overlap in large surveillance networks”, in Proceedings of the eighth Asian conference on Computer Vision-Volume Part I, Springer-Verlag, Berlin, Heidelberg, pp. 375-384, 2007.
[42]D. Makris, T. Ellis, J. Black, “Bridging the gaps between cameras”, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 205-210, 2004.
[43]U. Park, A. Jain, I. Kitahara, K. Kogure, N. Hagita, “Vise: visual search engine using multiple networked cameras”, In: Proc. IEEE Internat. Conf. Pattern Recognition, pp. 1204–1207, 2006.
[44]O. Hamdoun, F. Moutarde, B. Stanciulescu, B. Steux, “Person re-identification in multi-camera system by signature based on interest point descriptors collected on short video sequences”, In: Proc. IEEE Conference on Distributed Smart Cameras, pp. 1-6, 2008.
[45]J.G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters”, Journal of Optical Society of America A, vol. 2, no. 7, pp. 1160-1169, 1985.
[46]T. Ojala, M. Pietikainen, T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns”, IEEE Transactions Pattern Analysis Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
[47]A. Rahimi, B. Dunagan, T. Darrell, “Simultaneous calibration and tracking with a network of non-overlapping sensors”, In: IEEE Computer Society Conference on Computer Vision and Pattern, vol. 1, pp. 187-194, 2004.
[48]J. Black, T. Ellis, and P. Rosin, “Multi view image surveillance and tracking”, IEEE Workshop on Motion and Video Computing, pp. 169-174, Dec. 2002.
[49]T. Huang and S. Russell, “Object identification in a bayesian context”, In: International Joint Conference on Artificial Intelligence, pp. 1276-1283, 1997.
[50]K. Chen, C. Lai, Y. Hung, C. Chen, “An adaptive learning method for target tracking across multiple cameras”, IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, June 2008.
[51]E. Zelniker, S. Gong, T. Xiang, “Global abnormal behaviour detection using a network of cctv cameras”, In Proc. International Workshop on Visual Surveillance, 2008.
[52]G. R. Bradski, “Computer vision face tracking for use in a perceptual user interface”, Intel Technology Journal, 2nd Quarter, 1998.
[53]J. MacQueen, “Some methods for classification and analysis of multivariate observations”, Proc. Of 5th Berkeley Symposium on Mathematical statistics and probability, University of California Press, Berkley, USA, vol. 1, pp. 281-297, 1967.
[54]MCT Database. Retrieved March 21, 2015, from http://www.datatang.com/data/43784

