臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Author: 蕭宏儒
Author (English): Hong-Zu Xiao
Title: 視訊監視系統之環境背景模式建構與維護
Title (English): Background Model Construction and Maintenance in a Video Surveillance System
Advisor: 曾逸鴻
Advisor (English): Yi-Hong Tseng
Degree: Master's
Institution: 大葉大學 (Da-Yeh University)
Department: 資訊管理學系碩士班 (Master's Program, Department of Information Management)
Discipline: Computer Science
Field: General Computer Science
Thesis Type: Academic thesis
Publication Year: 2005
Graduation Academic Year: 93 (ROC calendar)
Language: Chinese
Pages: 58
Keywords (Chinese): 背景模型、背景相減、移動物體偵測、去除陰影
Keywords (English): background model, background subtraction, moving object detection, shadow elimination
Abstract:
In the field of computer vision, background subtraction has long been an effective and widely adopted method for detecting moving objects. In practice, however, changes in the background environment often reduce detection accuracy. Many studies have pointed out that building an appropriate background model in an unconstrained environment greatly benefits moving object detection. The goal of this research is therefore to construct and maintain a robust background model under normal lighting, covering the common kinds of environmental variation. The background model is used to perform background subtraction on each frame of the video, shadows are removed based on their characteristics, and an ideal foreground detection result is extracted, supporting the development of subsequent intelligent visual surveillance techniques.
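The background subtraction step just described can be pictured with a short Python/NumPy sketch. The abstract does not give the exact per-pixel rule or how the threshold is chosen, so the grayscale background image, the single fixed threshold, and the synthetic test data below are assumptions made only for illustration.

import numpy as np

def subtract_background(frame, background, threshold):
    """Return a boolean foreground mask: True where |frame - background| > threshold."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff > threshold

# Synthetic example: a flat grayscale background and a frame containing one bright "object".
background = np.full((120, 160), 100, dtype=np.uint8)
frame = background.copy()
frame[40:80, 60:100] = 200                      # simulated moving object
mask = subtract_background(frame, background, threshold=30.0)
print("foreground pixels:", int(mask.sum()))    # the 40x40 patch, i.e. 1600 pixels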
The proposed background model construction and maintenance method consists of three parts. First, background model initialization: the initial background model and the thresholds required for later background subtraction are established automatically. Second, background model training: the model is kept consistent with the background by continually learning small background changes. Third, background model replacement: once a large background change is detected, frames are collected continuously and, according to the sensitivity of the environment, used as the basis for replacing the model. Using this approach we built an online moving object detection system, and tests on videos covering many kinds of environmental change demonstrate the practicability of the proposed method.
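The three-part structure can be read as the maintenance loop sketched below. This is only one plausible interpretation of the description above: the median-based initialization, the running-average update of background pixels, the foreground-ratio trigger for replacement, and all numeric parameters (INIT_FRAMES, ALPHA, REPLACE_RATIO) are assumptions for illustration, not the thesis's actual rules, and grayscale frames are assumed throughout.

import numpy as np

INIT_FRAMES = 30       # frames used to (re)build the model (assumed value)
ALPHA = 0.05           # learning rate for gradual updates (assumed value)
REPLACE_RATIO = 0.6    # foreground ratio treated as a "large" change (assumed value)

class BackgroundModel:
    """Grayscale background model with initialize / train / replace behaviour."""

    def __init__(self):
        self.model = None   # current background estimate
        self.buffer = []    # frames collected for (re)initialization

    def update(self, frame, threshold=30.0):
        frame = frame.astype(np.float32)
        if self.model is None:
            # Part 1: initialization - median of the first INIT_FRAMES frames.
            self.buffer.append(frame)
            if len(self.buffer) >= INIT_FRAMES:
                self.model = np.median(self.buffer, axis=0)
                self.buffer = []
            return np.zeros(frame.shape, dtype=bool)

        mask = np.abs(frame - self.model) > threshold
        if mask.mean() > REPLACE_RATIO:
            # Part 3: large change - keep collecting frames, then rebuild the model.
            self.buffer.append(frame)
            if len(self.buffer) >= INIT_FRAMES:
                self.model = np.median(self.buffer, axis=0)
                self.buffer = []
        else:
            # Part 2: training - blend background (non-foreground) pixels into the model.
            self.buffer = []
            self.model[~mask] = (1 - ALPHA) * self.model[~mask] + ALPHA * frame[~mask]
        return mask

In use, each grayscale frame is passed to update() and the returned mask is the raw foreground before shadow removal.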
Abstract (English):
Background subtraction is a useful and effective method for detecting moving objects in computer-vision applications. However, varying environments often make the detection results unsatisfactory. To improve detection accuracy, an appropriate background model must be constructed and maintained to accommodate the changing environment. In this research, a robust background model maintenance mechanism is proposed and used to implement a moving object detection module.
The proposed mechanism includes three phases: initial background model construction, sustained background model adjustment, and rapid background model replacement. In the first phase, an initial background model is constructed from an unfiltered video stream. In the second phase, the background model is adjusted continually according to gradual changes in the environment. If a sudden change of environment occurs, the current background model is rapidly replaced by a new background model in the third phase. The new background model is trained from video frames collected in the second phase.
Finally, a moving object detection system is implemented by applying a background subtraction approach and a shadow elimination technique. Experiments in several varying environments demonstrate the practicability of the proposed background model maintenance mechanism.
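The shadow elimination step is not spelled out in the abstract. The sketch below uses a common HSV-based rule from the shadow-detection literature purely as an illustration, not as the thesis's own criterion: a foreground pixel whose brightness drops by a bounded factor while its hue and saturation stay close to the background is reclassified as shadow. The threshold values are illustrative assumptions.

import colorsys

def is_shadow(fg_rgb, bg_rgb, beta_low=0.4, beta_high=0.9, tau_s=0.15, tau_h=0.1):
    """fg_rgb, bg_rgb: (r, g, b) tuples in [0, 1]; all thresholds are assumed values."""
    fh, fs, fv = colorsys.rgb_to_hsv(*fg_rgb)
    bh, bs, bv = colorsys.rgb_to_hsv(*bg_rgb)
    if bv == 0:
        return False
    ratio = fv / bv                                   # brightness attenuation
    hue_diff = min(abs(fh - bh), 1 - abs(fh - bh))    # hue is circular in [0, 1)
    return beta_low <= ratio <= beta_high and abs(fs - bs) <= tau_s and hue_diff <= tau_h

# Example: a uniformly darkened version of the background colour is classified as shadow.
print(is_shadow((0.30, 0.24, 0.18), (0.50, 0.40, 0.30)))   # True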
Table of Contents:
Chinese Abstract
English Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables

Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
1.3 Research Limitations
Chapter 2 Literature Review
2.1 Separating Foreground and Background
2.2 Background Model Construction
2.3 Background Model Maintenance
2.4 Issues in Constructing and Maintaining Background Models
Chapter 3 Constructing and Maintaining the Background Model
3.1 Background Model Initialization
3.2 Background Subtraction
3.3 Difference Map Analysis
3.4 Background Model Maintenance
Chapter 4 Background Model Replacement
4.1 Record Map Analysis
4.2 Constructing the Static Background Model
4.3 Constructing the Oscillating Background Model
4.4 Method Adjustments for Changes in Background Model Structure
Chapter 5 Moving Object Detection
5.1 Object Detection
5.2 Object Movement
5.3 Shadow Removal
Chapter 6 Experiments and Evaluation
6.1 Experimental Discussion
6.2 Method Evaluation
Chapter 7 Conclusions and Future Work
References