
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: Jui-Yu Yen (嚴瑞友)
Title: Effective Moving Object Detection Over Variable Bit-Rate Wireless Video Streaming (在可變位元速率無線影像串流之有效物件偵測)
Advisor: Shih-Chia Huang (黃士嘉)
Committee members: Wei-Ho Tsai (蔡偉和), Trong-Yen Lee (李宗演), Sy-Yen Kuo (郭斯彥)
Oral defense date: 2012-07-04
Degree: Master's
Institution: National Taipei University of Technology (國立臺北科技大學)
Department: Graduate Institute of Computer and Communication Engineering (電腦與通訊研究所)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2012
Graduation academic year: 100 (2011-2012)
Language: English
Pages: 34
Keywords (Chinese): 移動偵測, 視訊監控, 類神經網路
Keywords (English): motion detection, video surveillance, variable bit rate, neural network
Times cited: 0
Views: 138
Downloads: 7
Bookmarked: 1
Abstract (translated from the Chinese):

Moving object detection is widely regarded as one of the most important functions of an automated video surveillance system. However, detecting moving objects in video with an unstable bit rate is a difficult problem. Unstable bit rates arise because real-time video transmitted over wireless networks frequently suffers from network congestion or unstable bandwidth, especially in embedded applications; sudden changes in the bit rate of a video stream easily cause false detections of moving objects. This thesis proposes a moving object detection algorithm based on the counter-propagation neural network to achieve accurate and complete detection. The method consists of a various background generation module and a moving object extraction module. The various background generation module first builds multiple background models for different bit rates, so that the background properties at each bit rate are adequately represented. The moving object extraction module then effectively extracts moving targets from video at different bit rates. The detection results were compared with other well-known methods; in both subjective and objective evaluations, the proposed method achieved the best performance. In particular, its accuracy under the impartial Similarity and F1 evaluation metrics exceeds that of existing methods by up to 83.34% and 89.71%, respectively.
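The two evaluation metrics named in the abstract can be computed from a binary detection mask and a ground-truth mask as sketched below. This uses the standard definitions (Similarity is the Jaccard index over foreground pixels); the function name is illustrative and this is not the thesis's evaluation code:

```python
import numpy as np

def similarity_and_f1(pred, truth):
    """Compute Similarity (Jaccard index) and F1 score for binary
    motion masks. tp/fp/fn are pixel counts of true positives,
    false positives, and false negatives."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.count_nonzero(pred & truth)
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    similarity = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return similarity, f1
```

For example, a prediction that gets one foreground pixel right, adds one false alarm, and misses one true pixel scores Similarity 1/3 and F1 0.5.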

Motion detection plays an important role in video surveillance systems. Video communications over wireless networks can easily suffer from network congestion or unstable bandwidth, especially in embedded applications. A rate control scheme produces variable bit-rate video streams to match the available network bandwidth. However, effective detection of moving objects in variable bit-rate video streams is a very difficult problem. This paper proposes an advanced approach based on the counter-propagation network, an artificial neural network, to achieve effective moving object detection in variable bit-rate video streams. The proposed method is composed of two important modules: a various background generation module and a moving object extraction module. The various background generation module generates an adaptive background model that can express the properties of variable bit-rate video streams. Once the adaptive background model has been generated, the moving object extraction module detects moving objects effectively in both low-quality and high-quality video streams. Lastly, a binary motion detection mask is produced as the detection result from the output value of the counter-propagation network. In this paper, we compare our method with other state-of-the-art methods. To demonstrate the performance of the proposed method with regard to object extraction, we present qualitative and quantitative comparisons on real-world limited-bandwidth networks over a wide range of natural video sequences. The overall results show that the proposed method substantially outperforms other state-of-the-art methods, by up to 83.34% and 89.71% in Similarity and F1 accuracy rates, respectively.
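As a rough illustration of the two-module design described in the abstract, the sketch below keeps several background codewords per pixel (a competitive, Kohonen-style layer, so that appearances of the scene at different bit rates can coexist) and labels a pixel as moving when no codeword matches within a threshold. The class name, parameters, and this simplified competitive-learning scheme are assumptions standing in for the full counter-propagation network of the thesis:

```python
import numpy as np

class CPNBackgroundModel:
    """Illustrative sketch, not the thesis implementation: a tiny
    competitive background model in the spirit of a counter-propagation
    network. Several codewords per pixel represent the background as it
    appears under different bit rates; classification maps the winning
    codeword distance to a binary motion label."""

    def __init__(self, n_codewords=3, lr=0.1, threshold=25.0):
        self.n = n_codewords      # background codewords kept per pixel
        self.lr = lr              # competitive-learning rate
        self.threshold = threshold
        self.codewords = None     # shape: (n_codewords, H, W)

    def update_background(self, frame):
        """Various-background generation: competitively update, per
        pixel, the codeword closest to the incoming frame."""
        f = np.asarray(frame, dtype=np.float64)
        if self.codewords is None:
            # Initialize every codeword from the first frame.
            self.codewords = np.repeat(f[None], self.n, axis=0)
            return
        dist = np.abs(self.codewords - f)    # (n, H, W)
        winner = np.argmin(dist, axis=0)     # winning codeword per pixel
        for k in range(self.n):
            mask = winner == k
            self.codewords[k][mask] += self.lr * (f[mask] - self.codewords[k][mask])

    def extract_moving_objects(self, frame):
        """Moving-object extraction: a pixel is foreground (1) when no
        codeword matches within the threshold; the result is the binary
        motion detection mask."""
        dist = np.abs(self.codewords - np.asarray(frame, dtype=np.float64))
        return (dist.min(axis=0) > self.threshold).astype(np.uint8)
```

After training on a static scene, a frame containing a bright moving region yields a mask that is 1 inside the region and 0 elsewhere.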

Chinese Abstract i
ABSTRACT ii
Acknowledgements iv
CONTENTS v
LIST OF TABLES vi
LIST OF FIGURES vii
Chapter 1 INTRODUCTION 1
Chapter 2 RELATED WORK 6
2.1 Σ-Δ Estimation 6
2.2 Multiple Σ-Δ Estimation 7
2.3 Gaussian Mixture Model 8
2.4 Simple Statistical Difference 9
2.5 Multiple Temporal Difference 10
Chapter 3 PROPOSED CPN-BASED APPROACH 11
3.1 Various-background generation 13
3.2 Moving object detection 14
3.2.1 Motion block detection procedure 14
3.2.2 Moving object extraction procedure 15
Chapter 4 EXPERIMENTAL RESULTS 16
4.1 Quantitative Evaluation 17
4.2 Qualitative Evaluation 21
Chapter 5 CONCLUSIONS 29
REFERENCES 30

[1]Y. Durmus, A. Ozgovde, and C. Ersoy, “Distributed and Online Fair Resource Management in Video Surveillance Sensor Networks,” IEEE Trans. Mobile Computing, vol. 11, no. 5, pp. 835-848, May 2012.
[2]N. Buch, S.A. Velastin, and J. Orwell, “A Review of Computer Vision Techniques for the Analysis of Urban Traffic,” IEEE Trans. Intelligent Transp. Syst., vol. 12, no. 3, pp. 920-939, Sept. 2011.
[3]T.D. Raty, “Survey on Contemporary Remote Surveillance Systems for Public Safety,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 40, no. 5, pp. 493-515, Sept. 2010.
[4]S. Dockstader and M. Tekalp, “Multiple camera tracking of interacting and occluded human motion,” IEEE Proceedings , vol. 89, no. 10, pp. 1441-1455, Oct. 2001.
[5]C. Yuan, G. Medioni, J. Kang, and I. Cohen, “Detecting Motion Regions in the Presence of a Strong Parallax from a Moving Camera by Multiview Geometric Constraints,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1627-1641, Sept. 2007.
[6]E. Stringa and C. S. Regazzoni, “Real-time Video-shot Detection for Scene Surveillance Application,” IEEE Trans. Image Process., vol. 9, no. 1, pp. 69-79, Jan. 2000.
[7]F. C. Cheng and S. J. Ruan, “Accurate Motion Detection Using a Self-Adaptive Background Matching Framework,” IEEE Trans. Intel. Transp. Syst., no. 99, pp. 1-9, Nov. 2011.
[8]P. H. Yuan, K. F. Yang, and W. H. Tsai, “Real-Time Security Monitoring Around a Video Surveillance Vehicle With a Pair of Two-Camera Omni-Imaging Devices,” IEEE Trans. Vehicular Technol., vol. 60, no. 8, pp. 3603-3614, Oct. 2011.
[9]C. Micheloni, G. L. Foresti, C. Piciarelli, and L. Cinque, “An Autonomous Vehicle for Video Surveillance of Indoor Environments,” IEEE Trans. Vehicular Technol., vol. 56, no. 2, pp. 487-498, March 2007.
[10]L. Snidaro, N. Ruixin, G. L. Foresti, and P. K. Varshney, “Quality-Based Fusion of Multiple Video Sensors for Video Surveillance,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 4, pp. 1044-1051, Aug. 2007.
[11]T. Celik, and H. Kusetogullari, “Solar-Powered Automated Road Surveillance System for Speed Violation Detection,” IEEE Trans. Industrial Electronics, vol. 57, no. 9, pp. 3216-3227, Sept. 2010.
[12]D. Wu, S. Ci, H. Luo, Y. Ye, and H. Wang, “Video Surveillance Over Wireless Sensor and Actuator Networks Using Active Cameras,” IEEE Trans. Automatic Control, vol. 56, no. 10, pp. 2467-2472, Oct. 2011.
[13]G. Gualdi, A. Prati, and R. Cucchiara, “Video Streaming for Mobile Video Surveillance,” IEEE Trans. Multimedia, vol. 10, no. 6, pp. 1142-1154, Oct. 2008.
[14]G. L. Foresti, “Real-time system for video surveillance of unattended outdoor environments,” IEEE Trans. Circuits Syst. Video Technol., vol. 8, no. 6, pp. 697-704, Oct. 1998.
[15]X. Liu and K. Fujimura, “Pedestrian detection using stereo night vision,” IEEE Trans. Vehicular Technol., vol. 53, no. 6, pp. 1657-1665, Nov. 2004.
[16]N. Habili, C. C. Lim, and A. Moini, “Segmentation of the face and hands in sign language video sequences using color and motion cues,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 8, pp. 1086-1097, Aug. 2004.
[17]C. Yuan, G. Medioni, J. Kang, and I. Cohen, “Detecting Motion Regions in the Presence of a Strong Parallax from a Moving Camera by Multiview Geometric Constraints,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1627-1641, Sept. 2007.
[18]I. Haritaoglu, D. Harwood, and L. S. Davis, “W4: Real-time surveillance of people and their activities,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 809-830, Aug. 2000.
[19]M. Kafai and B. Bhanu, “Dynamic Bayesian networks for vehicle classification in video,” IEEE Trans. Ind. Inform., vol. 8, no. 1, pp. 100-109, Feb. 2012.
[20]W. Hu, T. Tan, L.Wang, and S.Maybank, “A survey on visual surveillance of object motion and behaviors,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 34, no. 3, pp. 334-352, Aug. 2004.
[21]D. Gibson and M. Spann, “Robust optical flow estimation based on a sparse motion trajectory set,” IEEE Trans. Image Process., vol. 12, no. 4, pp. 431-445, April 2003.
[22]A. J. Lipton, H. Fujiyoshi, and R. S. Patil, “Moving target classification and tracking from real-time video,” Proc. IEEE Workshop Applications of Computer Vision, pp. 8-14, 1998.
[23]S. Chen, J. Zhang, Y. Li, and J. Zhang, “A hierarchical model incorporating segmented regions and pixel descriptors for video background subtraction,” IEEE Trans. Ind. Inform., vol. 8, no. 1, pp. 118-127, Feb. 2012.
[24]F. C. Cheng, S. C. Huang, and S. J. Ruan, “Advanced Motion Detection for Intelligent Video Surveillance Systems,” ACM Proceedings, Symposium of Applied Computing (SAC), 2010, pp. 22-26.
[25]S. C. Huang, “An Advanced Motion Detection Algorithm with Video Quality Analysis for Video Surveillance Systems,” IEEE Trans. Circuits Syst. Video Technol., vol. 21, no. 1, pp. 1-14, Jan. 2011.
[26]F. C. Cheng, S. C. Huang and S. J. Ruan, “Advanced background subtraction approach using Laplacian distribution model,” IEEE Int. Conf. Multimedia & Expo (ICME), 2010, pp. 754-759.
[27]A. Mittal and N. Paragios, “Motion-based background subtraction using adaptive kernel density estimation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., vol. 2, pp. 302-309, 2004.
[28]C. C. Chiu, M. Y. Ku, and L. W. Liang, “A Robust Object Segmentation System Using a Probability-Based Background Extraction Algorithm,” IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 4, pp. 518-528, April 2010.
[29]Du. M. Tsai and S. C. Lai, “Independent Component Analysis-Based Background Subtraction for Indoor Surveillance,” IEEE Trans. Image Process., vol. 18, no. 1, pp. 158-167, Jan. 2009.
[30]A. Manzanera and J. C. Richefeu, “A robust and computationally efficient motion detection algorithm based on Σ–Δ background estimation,” Proc. ICVGIP’04, pp. 46-51, 2004.
[31]A. Manzanera and J. C. Richefeu, “A new motion detection algorithm based on Σ–Δ background estimation,” Pattern Recognit. Lett., vol. 28, pp. 320-328, Feb. 2007.
[32]D. Zhou and H. Zhang, “Modified GMM background modeling and optical flow for detection of moving objects,” Int. Conf. on Systems, Man, and Cybernetics, vol. 3, pp. 2224-2229, 2005.
[33]P. M. Jodoin, M. Mignotte, and J. Konrad, “Statistical Background Subtraction Using Spatial Cues,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no 12, pp. 1758 -1763, Dec. 2007.
[34]J. E. Ha and W. H. Lee, “Foreground objects detection using multiple difference images,” Optical Engineering, vol. 49, no. 4, art. no. 047201, Apr. 2010.
[35]Q. Li, Y. Andreopoulos, and M. van der Schaar, “Streaming-Viability Analysis and Packet Scheduling for Video Over In-Vehicle Wireless Networks,” IEEE Trans. Vehicular Technol., vol. 56, no. 6, pp. 3533-3549, Nov. 2007.
[36]S. C. Liew, and D. C. Y. Tse, “A control-theoretic approach to adapting VBR compressed video for transport over a CBR communications channel,” IEEE/ACM Trans. Networking, vol. 6, no. 1, pp. 42-55, Feb. 1998.
[37]M. Frey and N. Q. Son, “A gamma-based framework for modeling variable-rate MPEG video sources: the GOP GBAR model,” IEEE/ACM Trans. Networking, vol. 8, no. 6, pp. 710-719, Dec. 2000.
[38]H. Kanakia, P. P. Mishra, and A. R. Reibman, “An adaptive congestion control scheme for real time packet video transport,” IEEE/ACM Trans. Networking, vol. 3, no. 6, pp. 671-682, Dec. 1995.
[39]L. Atzori, M. Krunz, and M. Hassan, “Cycle-Based Rate Control for One-Way and Interactive Video Communications Over Wireless Channels,” IEEE Trans. Multimedia, vol. 9, no. 1, pp. 176-184, Jan. 2007.
[40]M. van der Schaar, Y. Andreopoulos, and Z. Hu, “Optimized scalable video streaming over IEEE 802.11 a/e HCCA wireless networks under delay constraints,” IEEE Trans Mobile Computing, vol. 5, no. 6, pp. 755-768, June 2006.
[41]J. Zou, H. Xiong, C. Li, R. Zhang, and Z. He, “Lifetime and Distortion Optimization With Joint Source/Channel Rate Adaptation and Network Coding- Based Error Control in Wireless Video Sensor Networks,” IEEE Trans. Vehicular Technol., vol. 60, no. 3, pp. 1182-1194, March 2011.
[42]Z. He, Y. Liang, L. Chen, I. Ahmad, and D. Wu, “Power-rate-distortion analysis for wireless video communication under energy constraints,” IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 5, pp. 645-658, May 2005.
[43]Q. Zhang, W. Zhu, and Y. Q. Zhang, “Channel-adaptive resource allocation for scalable video transmission over 3G wireless network,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 8, pp. 1049-1063, Aug. 2004.
[44]T. Schierl, T. Stockhammer, and T. Wiegand, “Mobile Video Transmission Using Scalable Video Coding,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, pp. 1204-1217, Sept. 2007.
[45]S. Milani and G. Calvagno, “A Low-Complexity Cross-Layer Optimization Algorithm for Video Communication Over Wireless Networks,” IEEE Trans. Multimedia, vol. 11, no. 5, pp. 810-821, Aug. 2009.
[46]Y. Liu, Z. G. Li, and Y. C. Soh, “Region-of-Interest Based Resource Allocation for Conversational Video Communication of H.264/AVC,” IEEE Trans. Circuits Syst. Video Technol., vol. 18, no 1, pp. 134-139, Jan. 2008.
[47]C. W. Seo, J. K. Han, and T.Q. Nguyen, “Rate Control Scheme for Consistent Video Quality in Scalable Video Codec,” IEEE Trans. Image Processing, vol. 20, no. 8, pp. 2166-2176, Aug. 2011.
[48]Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, “Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification,” 2003, ITU-T Rec. H.264 — ISO/IEC 14496-10 AVC.
[49]T. Wiegand, G. J. Sullivan, G. Bjøntegaard, and A. Luthra, “Overview of the H.264/AVC video coding standard,” IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 7, pp. 560-576, July 2003.
[50]W. L. Buntine and A. S. Weigend, “Computing second derivatives in feed-forward networks: a review,” IEEE Trans. Neural Networks, vol. 5, no. 3, pp. 480-488, May 1994.
[51]P. F. Baldi and K. Hornik, “Learning in linear neural networks: a survey,” IEEE Trans. Neural Networks, vol. 6, no. 4, pp. 837-858, Jul. 1995.
[52]H.264/AVC Reference Software JM [Online]. Available: http://bs.hhi.de/suehring/tml/


No related journal articles.