
臺灣博碩士論文加值系統


Detailed Record

Author: 鍾承君
Author (English): Cheng-Chun Chung
Title: 統計式背景模型應用於視覺監視之研究
Title (English): Study on Statistical Background Modeling for Visual Surveillance
Advisor: 鄭銘揚
Advisor (English): Ming-Yang Cheng
Degree: Master's
Institution: 國立成功大學 (National Cheng Kung University)
Department: 電機工程學系碩博士班 (Department of Electrical Engineering, Master's and Doctoral Program)
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2008
Graduation academic year: 96 (ROC calendar; 2007-2008)
Language: Chinese
Number of pages: 82
Keywords (Chinese): 視覺監視 (visual surveillance), 核心估測法 (kernel density estimation), 背景模型 (background model), 高斯混合模型 (Gaussian mixture model)
Keywords (English): background model, Gaussian mixture model, kernel density estimation, visual surveillance
Usage statistics:
  • Cited by: 0
  • Views: 210
  • Rating: (none)
  • Downloads: 11
  • Saved to bibliography lists: 2
Abstract (translated from the Chinese): The main purpose of a visual surveillance system is to detect whether suspicious intruders have entered the monitored area or whether abnormal changes have occurred in the environment. Moving object detection is usually the first stage of a visual surveillance system, and subsequent functions such as target tracking and object classification all depend on how accurately moving objects are detected. Therefore, how to detect moving objects quickly and accurately in a complex environment is an important issue. Background subtraction is a commonly used method for moving object detection; the quality of its results usually depends on whether the system can incorporate changes in the environment into the background model promptly and appropriately. Hence, how to build an appropriate background model in a changing environment is the main theme of this thesis. In recent years, research on building background models with statistical methods has received growing attention. In general, statistical background models are built in two ways: parametric methods, including the Gaussian mixture model and the spatial distribution of Gaussians, and nonparametric methods, including kernel density estimation and the k-nearest-neighbor method. This thesis examines the Gaussian mixture model and kernel density estimation as two ways of building a statistical background model, discusses the situations each is suited to and their practical advantages and disadvantages, and finally verifies the proposed viewpoints through experiments.
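
As a rough illustration of the parametric approach described in the abstract above, the sketch below maintains a per-pixel Gaussian mixture background model in the spirit of Stauffer and Grimson's adaptive mixture method. It is a minimal sketch under assumed settings: the class name PixelGMM, the number of components, the learning rate, and the thresholds are illustrative choices, not the parameters used in the thesis.

```python
# Minimal per-pixel Gaussian mixture background model (illustrative sketch).
# All parameter values here are assumptions, not the thesis's settings.
import numpy as np

class PixelGMM:
    def __init__(self, k=3, alpha=0.01, var_init=36.0, match_sigma=2.5, bg_ratio=0.7):
        self.k = k                        # number of Gaussian components
        self.alpha = alpha                # learning rate
        self.match_sigma = match_sigma    # match if within this many standard deviations
        self.bg_ratio = bg_ratio          # portion of total weight treated as background
        self.w = np.full(k, 1.0 / k)      # component weights
        self.mu = np.linspace(0, 255, k)  # component means (gray levels)
        self.var = np.full(k, var_init)   # component variances

    def update(self, x):
        """Update the mixture with gray value x and return True if x looks like background."""
        d = np.abs(x - self.mu)
        matched = d < self.match_sigma * np.sqrt(self.var)
        if matched.any():
            i = int(np.argmin(np.where(matched, d, np.inf)))  # closest matching component
            rho = self.alpha                                   # simplified second learning rate
            self.mu[i] += rho * (x - self.mu[i])
            self.var[i] += rho * ((x - self.mu[i]) ** 2 - self.var[i])
            self.w = (1.0 - self.alpha) * self.w
            self.w[i] += self.alpha
        else:
            i = int(np.argmin(self.w))     # replace the least probable component
            self.mu[i], self.var[i], self.w[i] = float(x), 36.0, 0.05
        self.w /= self.w.sum()
        # Components with large weight and small variance are considered background.
        order = np.argsort(-self.w / np.sqrt(self.var))
        bg = order[np.cumsum(self.w[order]) <= self.bg_ratio]
        if bg.size == 0:
            bg = order[:1]
        return bool(matched.any() and i in bg)
```

In practice one such model is kept for every pixel of the frame, and the per-pixel decisions form the foreground mask that the later tracking and classification stages consume.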
Abstract (English): The main purpose of a visual surveillance system is to detect suspicious objects or abnormal changes in the environment. In a visual surveillance system, object tracking and object classification rely on the accuracy of motion detection, so it is essential to identify moving objects quickly and accurately in a complex environment. Background subtraction is commonly used for motion detection; for it to perform well, the system must incorporate changes in the environment into the background model promptly and appropriately. How to build a suitable background model for changing environments is the main topic of this thesis. In general, there are two ways to construct a statistical background model. One is the parametric approach, which includes the Gaussian mixture model and the spatial distribution of Gaussians. The other is the nonparametric approach, which includes kernel density estimation and the k-nearest-neighbor method. In this thesis, several experiments are conducted to compare the performance of the Gaussian mixture model and kernel density estimation.
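
For comparison, the next sketch illustrates the nonparametric alternative mentioned in the abstract: a per-pixel kernel density estimate over the most recent gray values, loosely following the kernel-based background model of Elgammal et al. The sample-buffer size, the density threshold, and the bandwidth heuristic are assumptions made for illustration, not the settings examined in the thesis.

```python
# Minimal per-pixel KDE background test (illustrative sketch).
# Buffer size, threshold, and bandwidth rule are assumptions, not the thesis's settings.
import numpy as np
from collections import deque

class PixelKDE:
    def __init__(self, n_samples=50, threshold=1e-3):
        self.samples = deque(maxlen=n_samples)  # recent gray values at this pixel
        self.threshold = threshold              # minimum density to call the value "background"

    def _bandwidth(self):
        # Common heuristic: scale the median absolute difference between consecutive
        # samples, which is robust to the occasional foreground value in the buffer.
        s = np.array(self.samples, dtype=float)
        med = np.median(np.abs(np.diff(s)))
        return max(med / (0.68 * np.sqrt(2.0)), 1.0)  # floor avoids a zero bandwidth

    def is_background(self, x):
        """Return True if gray value x is well explained by the recent samples."""
        if len(self.samples) < self.samples.maxlen:
            self.samples.append(float(x))  # still collecting the initial sample set
            return True
        s = np.array(self.samples, dtype=float)
        h = self._bandwidth()
        # Density estimate: average of Gaussian kernels centred on each stored sample.
        density = np.mean(np.exp(-0.5 * ((x - s) / h) ** 2) / (h * np.sqrt(2.0 * np.pi)))
        if density > self.threshold:
            self.samples.append(float(x))  # selective update: only background values enter the model
        return bool(density > self.threshold)
```

A pixel is declared foreground when its current value is poorly explained by the recent samples; updating the buffer only with values judged to be background keeps foreground objects from being absorbed into the model too quickly.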

Table of Contents

Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1  Preface
  1.2  Research Methods and Motivation
  1.3  Literature Review
  1.4  Thesis Organization
Chapter 2  Background Subtraction Using the Gaussian Mixture Model
  2.1  Introduction
  2.2  Overview of the Gaussian Mixture Model
  2.3  Initial Parameter Estimation for the Gaussian Probability Density Functions
  2.4  Classification of Gray-Level Values
  2.5  Background Model Construction and Foreground Detection
  2.6  Background Model Parameter Updating
Chapter 3  Background Subtraction Using Nonparametric Models
  3.1  Introduction
  3.2  Kernel Density Estimation
  3.3  Kernel Bandwidth Selection
  3.4  Background Model Construction and Foreground Detection
  3.5  False Detections
  3.6  Background Updating
Chapter 4  Experimental Results and Discussion
  4.1  Experiment Overview
  4.2  Experimental Results
    4.2.1  Comparison of the GMM and KDE Methods
    4.2.2  Scene 1: Scene without Illumination Changes
    4.2.3  Scene 2: Scene with Illumination Changes
    4.2.4  Scene 3: Road Scene in Heavy Rain
  4.3  Discussion
Chapter 5  Conclusions and Suggestions
  5.1  Conclusions
  5.2  Future Work and Suggestions
References
Curriculum Vitae