臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 吳明杰
Author (English): WU, MING-CHIEH
Thesis Title: 以視訊為基礎之營區安全監控系統設計
Thesis Title (English): Secure Video-Based Surveillance System Design For Barrack
Advisor: 王順吉
Advisor (English): WANG, SHUENN-JYI
Committee Members: 蔡宗憲、周兆龍、賈叢林、鄭旭詠
Committee Members (English): TSAI, CHUNG-HSIEN; ZHOU, ZHAO-LONG; JIA, CONG-LIN; ZHENG, XU-YONG
Oral Defense Date: 2018-05-09
Degree: Master's
Institution: 國防大學理工學院
Department: 資訊工程碩士班
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2018
Graduation Academic Year: 106
Language: Chinese
Number of Pages: 92
Chinese Keywords: 智慧型監控系統、前景偵測、異常行為偵測、營區安全
English Keywords: Intelligent surveillance systems; Foreground detection; Anomaly behavior detection; Barrack security
Usage statistics:
  • Times cited: 1
  • Views: 281
  • Rating:
  • Downloads: 25
  • Bookmarked: 0
國軍近年來人力大幅精簡,造成營區巡管人力不足,危安風險攀升。現有監控系統無法克服複雜環境之外在干擾,造成誤警率過高,致使無法實際適用於營區安全防護。智慧型視訊監控系統主要使用前景偵測及異常行為偵測技術,但監控系統場景環境多變,不同偵測方法各有其適用之場景。本論文提出整合前景偵測及異常行為偵測技術,並利用機器學習分類方法,實現智慧型監控系統,以適用於營區安全防護。前景偵測採用融合式前景偵測技術,可彈性更換及融合各種前景偵測技術,以適用於各種不同監控環境,能更正確擷取前景物件。接續萃取其影像特徵,並運用支持向量機(Support Vector Machine, SVM)技術完成行為分類建模。同時透過本文設計之整合介面可方便驗證各種前景偵測技術之融合結果,以及定義與偵測異常行為,所以基於本文的方法流程,各監控攝影機之視訊影像經過訓練過程建模後,即可具備偵測特定異常行為之功能。藉由本論文的研究成果,可減輕營區監控人員作業負荷,精簡巡管人力,並提升早期預警能力與應變處置反應時間。
In recent years, military manpower has been downsized, leaving insufficient personnel to patrol barracks and raising security risks. Existing monitoring systems cannot overcome external interference in complex environments, so their false alarm rates are too high for them to be applied in practice to barrack protection. Intelligent surveillance systems apply foreground detection and abnormal behavior detection technologies; however, different monitoring environments require different detection approaches. In this thesis, we design an intelligent surveillance system that integrates foreground detection and abnormal behavior detection schemes. In the foreground detection step, a fusion method, which can flexibly combine and substitute various foreground detection techniques, is used to adapt to diverse monitoring environments and extract foreground objects accurately. In the abnormal behavior detection step, features of the foreground objects are extracted, and a support vector machine is used to construct the behavior models. Through the designed interface of the proposed system, the results of various fusion combinations can be verified easily, and abnormal behaviors can be defined and detected conveniently. Furthermore, the workload and manpower required for barrack security can be reduced, and the early-warning alarms provide more time to react.
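
To make the pipeline described above concrete, the following Python sketch illustrates the general idea: several foreground detectors are fused by majority vote, simple blob features are extracted from the fused mask, and a support vector machine is trained to classify behaviors. This is a minimal illustration under stated assumptions, not the system implemented in the thesis: OpenCV's MOG2 and KNN background subtractors stand in for the thesis's fusion module, scikit-learn's SVC stands in for the behavior model, and blob_features() and train_behavior_model() are hypothetical helpers with deliberately simplified features.

# Minimal sketch: fused foreground detection + SVM behavior classification.
# The detectors, features, and helpers below are illustrative assumptions,
# not the implementation described in the thesis.
import cv2
import numpy as np
from sklearn.svm import SVC

# Two interchangeable foreground detectors; their masks are fused by majority vote.
detectors = [cv2.createBackgroundSubtractorMOG2(),
             cv2.createBackgroundSubtractorKNN()]

def fused_foreground(frame):
    masks = [d.apply(frame) for d in detectors]
    votes = np.mean([(m > 0).astype(np.float32) for m in masks], axis=0)
    return (votes >= 0.5).astype(np.uint8) * 255  # keep pixels most detectors agree on

def blob_features(mask):
    # Toy features of the largest foreground blob: area, aspect ratio, vertical position.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return np.zeros(3)
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    return np.array([cv2.contourArea(c), w / max(h, 1), y + h / 2.0])

def train_behavior_model(clips, labels):
    # clips: list of frame sequences; labels: one label per clip (0 = normal, 1 = abnormal).
    X = [blob_features(fused_foreground(f)) for clip in clips for f in clip]
    y = [lab for clip, lab in zip(clips, labels) for _ in clip]
    model = SVC(kernel="rbf")  # SVM behavior model, as in the classification step above
    model.fit(X, y)
    return model

At run time such a model would be queried per frame, for example with model.predict([blob_features(fused_foreground(frame))]), and an alarm raised whenever the abnormal class is reported.
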
Acknowledgments ii
Abstract (Chinese) iii
Abstract iv
List of Tables viii
List of Figures ix
1. Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 5
1.3 Thesis Organization 6
2. Literature Review 7
2.1. Development of Video Surveillance Systems 7
2.1.1. Analog Video Surveillance Systems 7
2.1.2. Digital Video Surveillance Systems 8
2.1.3. Intelligent Video Surveillance Systems 9
2.2. Foreground Detection Techniques 14
2.3. Abnormal Behavior Analysis 26
2.3.1. Image Feature Extraction 26
2.3.2. Behavior Modeling 30
3. Intelligent Surveillance System Design 34
3.1. System Architecture 34
3.2. Method Workflow 36
3.2.1. Foreground Detection Module 37
3.2.2. Behavior Analysis Module 43
3.3. System Data Flow 46
3.4. Abnormal Behavior Scenarios and Application Benefits 48
3.5. Summary 53
4. Experimental Results and Analysis 54
4.1. Comparison of Foreground Detection Results 54
4.2. Abnormal Behavior Detection Experiments 64
4.3. System Interface Functions and Design Concepts 73
4.4. Summary 82
5. Conclusions and Future Research 83
6. References 86
Autobiography 92
