臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)
詳目顯示 (Detailed Record)

我願授權國圖
: 
twitterline
Author: 夏郁普 (Yu-Pu Hsia)
Title: 車門開啟前來車影像偵測系統之開發 (Development of an Image Detecting System for Approaching Vehicles before Car Door Opening)
Advisor: 黃昌群 (Chang-Chiun Huang)
Committee: 郭中豐 (Chung-Feng Kuo), 湯燦泰 (Tsann-tay Tang)
Oral defense date: 2018-07-30
Degree: Master
Institution: 國立臺灣科技大學 (National Taiwan University of Science and Technology)
Department: Department of Materials Science and Engineering
Discipline: Engineering
Field: Materials Engineering
Thesis type: Academic thesis
Publication year: 2018
Graduation academic year: 106
Language: Chinese
Pages: 113
Keywords (Chinese): 移動物件追蹤, 背景相減, ViBe演算法, 改良ViBe演算法, 支持向量機
Keywords (English): moving object detection; background subtraction; ViBe algorithm; modified ViBe algorithm; support vector machine
Usage statistics:
  • Cited: 0
  • Views: 193
  • Downloads: 0
  • Bookmarked: 0
This thesis addresses the frequent accidents in which a driver opens a car door without noticing a motorcycle or car approaching from behind, so that the door and the oncoming vehicle collide; such accidents cause many deaths and injuries every year. This study therefore proposes an image-processing method for detecting moving objects in daytime, good-weather conditions: once a moving object enters the dangerous door-opening zone, a signal is sent to the car's control system to lock the door immediately. The processing pipeline consists of image preprocessing, moving object detection, a modified ViBe algorithm, object classification, and danger detection. Preprocessing applies a median filter to the raw image to reduce noise. For moving object detection, the ViBe algorithm is proposed in place of conventional frame differencing and background modeling, resolving the problem of incomplete object contours. The modified ViBe algorithm eliminates ghost regions and accelerates background updating. Object classification uses a support vector machine to obtain results quickly, and danger detection defines the hazardous collision zone through a pixel-to-real-distance conversion. Tests on 1,367 moving-object samples collected against different backgrounds show that processing a single frame takes only 0.15–0.2 seconds, and the system detected moving objects before they entered the hazardous collision zone in 100% of cases. Classification of 800 of these car and motorcycle samples achieved an overall accuracy of 97.86%, with 94.62% for cars and 97.51% for motorcycles.
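Because ViBe is the core of the detection stage described above, a minimal grayscale sketch may help. It follows the standard ViBe scheme (per-pixel sample sets, a match radius, and random subsampled updates) with illustrative parameter values; the thesis's modifications for ghost suppression and faster updating are omitted here.

```python
import numpy as np

# Minimal grayscale sketch of ViBe background subtraction (standard scheme,
# illustrative parameters; not the thesis's exact modified implementation).
N = 20            # samples stored per pixel
R = 20            # intensity radius for a sample to count as a match
MIN_MATCHES = 2   # matches needed to label a pixel as background
PHI = 16          # a background pixel refreshes its model with probability 1/PHI

rng = np.random.default_rng(0)

def init_model(first_frame):
    """Seed each pixel's N samples from the first frame plus small noise."""
    h, w = first_frame.shape
    noise = rng.integers(-10, 10, size=(N, h, w))
    return np.clip(first_frame[None].astype(int) + noise, 0, 255)

def segment(frame, model):
    """Return a boolean foreground mask and conservatively update the model."""
    dist = np.abs(model - frame[None].astype(int))   # shape (N, h, w)
    matches = (dist < R).sum(axis=0)
    foreground = matches < MIN_MATCHES
    # Randomly refresh one sample at a subset of background pixels so the
    # model tracks gradual illumination changes.
    update = (~foreground) & (rng.random(frame.shape) < 1.0 / PHI)
    which = rng.integers(0, N, size=frame.shape)
    ys, xs = np.nonzero(update)
    model[which[ys, xs], ys, xs] = frame[ys, xs]
    return foreground
```

Seeding the model with a clean background frame and then feeding a frame containing a new bright object marks exactly that object's pixels as foreground.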
This research addresses accidents that occur when people get out of a car without noticing an approaching vehicle, such as a car or motorcycle: the vehicle crashes into the opened door, and both parties may be badly injured. Such accidents happen every day and cause many deaths. We therefore build an image-processing system that detects approaching objects; once an object enters the dangerous zone, the system sends a signal to the car and locks the door. The pipeline is divided into image preprocessing, moving object detection, the ViBe algorithm, a modified ViBe algorithm, and object classification. Image preprocessing applies a median filter to the original image to reduce noise. Moving object detection uses the ViBe algorithm in place of frame differencing and background modeling, because it detects object contours well. The modified ViBe algorithm solves the ghost-area problem and shortens the background-update period. Object classification uses a support vector machine to obtain the final result.
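The median-filter preprocessing step can be sketched in pure NumPy. The 3×3 window and edge-replicated borders are assumptions; the abstract does not state the thesis's exact settings.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with replicated borders: each output pixel is the
    median of its 3x3 neighborhood, which removes salt-and-pepper impulses
    while preserving edges better than a mean filter would."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the nine shifted views of the padded image, then take the
    # per-pixel median across them.
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)])
    return np.median(windows, axis=0).astype(img.dtype)
```

For example, a single impulse on a flat region is replaced by the local median, since eight of the nine neighborhood values are the background intensity.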
The dataset contains 1,367 samples drawn from eight different backgrounds, each with different streets, cars, and motorcycles. Processing one frame takes only about 0.2 seconds. The object-detection rate reaches 100%, the overall classification accuracy is about 97.86%, and the accuracies for cars and motorcycles are 94.62% and 97.51%, respectively.
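The SVM classification stage can be illustrated with a toy example. Since the thesis's features and trained model are not reproduced here, this sketch trains a linear SVM by sub-gradient descent on the hinge loss over synthetic (area, aspect-ratio) features, under the assumption that cars appear larger and wider than motorcycles; the feature values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Minimise lam*||w||^2 + mean hinge loss by sub-gradient descent.
    Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        inside = margins < 1                 # samples violating the margin
        gw = 2 * lam * w - (y[inside, None] * X[inside]).sum(axis=0) / n
        gb = -y[inside].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict(X, w, b):
    """Return +1 (car) or -1 (motorcycle) for each row of X."""
    return np.where(X @ w + b >= 0, 1, -1)

# Synthetic, well-separated features: (normalised area, width/height ratio).
# Cars: large and wide; motorcycles: small and narrow (illustrative values).
cars = rng.normal([4.0, 1.6], 0.2, size=(100, 2))
motos = rng.normal([1.0, 0.6], 0.2, size=(100, 2))
X = np.vstack([cars, motos])
y = np.array([1] * 100 + [-1] * 100)
w, b = train_linear_svm(X, y)
accuracy = (predict(X, w, b) == y).mean()
```

On data this cleanly separated, the learned hyperplane separates the two clusters; the thesis additionally compares kernel functions, which a linear toy model does not capture.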
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Literature Review
1.2.1 Moving Object Detection
1.2.2 Image Feature Extraction
1.2.3 Object Classification
1.3 Research Objectives
1.4 Research Procedure
Chapter 2 Image Processing Methods
2.1 Low-Pass Filters
2.1.1 Mean Filter
2.1.2 Median Filter
2.2 Image Morphology
2.2.1 Erosion
2.2.2 Dilation
2.2.3 Closing
2.2.4 Opening
2.2.5 Connected-Component Labeling
2.3 Image Segmentation
2.3.1 Color Space Conversion
2.3.2 Image Cropping
2.3.3 Otsu's Method
2.4 Moving Object Detection
2.4.1 ViBe Algorithm
2.4.2 Modified ViBe Algorithm
2.5 Image Features
2.5.1 Area and Perimeter
2.5.2 Aspect Ratio
2.5.3 Centroid Coordinate Difference
2.6 Object Classification
2.6.1 Support Vector Machine
2.7 Confusion Matrix
2.8 Cross-Validation
2.8.1 K-Fold Cross-Validation
2.9 Edge Detection
Chapter 3 Experimental Equipment and System Architecture
3.1.1 Experimental Equipment
3.1.2 Experimental Setup
3.2 System Architecture
Chapter 4 Experimental Design and Results Discussion
4.1 Image Preprocessing
4.2 Color Space Conversion and Image Cropping
4.3 Background Extraction
4.4 Candidate Region Processing
4.5 Modified ViBe Algorithm
4.6 Object Classification
4.7 Danger Determination
Chapter 5 Results and Discussion
5.1 Experimental Constraints
5.2 Kernel Function Comparison
5.3 Comparison of Image Processing Methods
5.4 Comparison of Door Collision Avoidance Systems
Chapter 6 Conclusions
References
Appendix A: Detailed Car and Motorcycle Samples
Appendix B: Image Processing under Different Conditions