National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Author: Shu-Ting Chang (張舒婷)
Title: A Vehicle Tracking Method on Fisheye Lens Based on Image Features (基於影像特徵的魚眼鏡頭上車輛追蹤方法)
Advisor: I-Cheng Yeh (葉奕成)
Oral Defense Committee: Huang-Chia Shih (施皇嘉), Zhong-Yi Huang (黃仲誼), Shih-Syun Lin (林士勛)
Oral Defense Date: 2019-01-25
Degree: Master's
University: Yuan Ze University (元智大學)
Department: Department of Computer Science and Engineering (資訊工程學系)
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2019
Graduation Academic Year: 107 (ROC calendar, i.e., 2018-2019)
Language: Chinese
Number of Pages: 40
Keywords (Chinese): 魚眼鏡頭、影像特徵、車輛追蹤
Keywords (English): Fisheye Lens, Image Features, Vehicle Tracking

Many government agencies now rely on detecting, tracking, and statistically analyzing vehicle flow in camera footage for a variety of intelligent transportation applications, such as real-time traffic flow analysis, signal planning, and detour route design. Among existing traffic monitoring tools, however, ordinary cameras usually cannot provide the viewing angle we need: analyzing the traffic at a large intersection requires installing multiple cameras, and integrating the multiple views becomes another difficult problem. For this reason, many agencies have switched to fisheye lenses, which cover a much wider area, with a field of view of up to 180 degrees. In a fisheye image, however, only objects near the center of the frame keep their normal shape, while objects toward the edges and corners are distorted to a degree that depends on their position, which harms both viewing and further processing of the footage. The traditional remedy is to rectify the fisheye image, but the effect of rectification is limited: the content is usually cropped, the edge distortion can become even worse, and when the cameras are too numerous and too varied it is impractical to calibrate them one by one. We therefore take a machine learning approach: we collect a large amount of vehicle data captured by a variety of fisheye lenses and retrain the well-known object detection network YOLO to build a new detector, so that the computer can directly recognize the variously deformed vehicles seen through fisheye lenses.
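To make the retraining step concrete, the short sketch below shows one common way such a fisheye vehicle dataset is prepared: each annotated frame gets a text file of normalized "class x_center y_center width height" lines, the label format consumed by Darknet-style YOLO training. This is an illustrative sketch only; the file name, image size, and box coordinates are assumptions, not values from the thesis.

def to_yolo_label(class_id, box, img_w, img_h):
    """box = (x_min, y_min, x_max, y_max) in pixels of one annotated vehicle."""
    x_c = (box[0] + box[2]) / 2.0 / img_w   # normalized box center
    y_c = (box[1] + box[3]) / 2.0 / img_h
    w = (box[2] - box[0]) / float(img_w)    # normalized box size
    h = (box[3] - box[1]) / float(img_h)
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# One .txt label file per training image, one line per annotated vehicle.
with open("frame_000123.txt", "w") as f:
    f.write(to_yolo_label(0, (412, 250, 545, 330), img_w=1280, img_h=960) + "\n")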

For tracking, we extend the real-time pedestrian tracking system Deep SORT to track vehicles. Because Deep SORT was originally designed for pedestrian tracking, the network model in its algorithm is trained on pedestrian image datasets. We therefore take the vehicles detected by YOLO, manually label the same vehicle across different frames, and build a vehicle Re-ID (re-identification) dataset for fisheye cameras, which we use to retrain the appearance feature extraction network proposed in Deep SORT. Finally, we apply Deep SORT's POI appearance feature comparison method, computing the cosine similarity between the vehicle feature vectors in each image to match vehicles. This improves Deep SORT's vehicle tracking performance and achieves the final goal of tracking vehicle motion on a fisheye camera.
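As a minimal sketch of the appearance-matching step described above (not the thesis's exact implementation), the snippet below computes the cosine distance between L2-normalized appearance vectors of existing tracks and new detections, then solves the assignment with the Hungarian algorithm [15]. The 0.2 gating threshold is an illustrative assumption.

import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_cost(track_feats, det_feats):
    # Rows are existing tracks, columns are current detections.
    a = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    b = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - a @ b.T  # cosine distance; smaller means more similar

def match(track_feats, det_feats, max_dist=0.2):
    cost = cosine_cost(np.asarray(track_feats), np.asarray(det_feats))
    rows, cols = linear_sum_assignment(cost)  # Hungarian assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]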
Government agencies have the technology to identify, track, and statistically analyze traffic information from camera footage. However, among existing traffic monitoring tools, an ordinary camera usually cannot meet our needs in terms of viewing angle. To analyze the traffic flow at a large intersection, multiple cameras must be set up, and integrating the multiple views becomes another problem. To this end, many agencies have switched to fisheye lenses, which can capture the whole intersection in one view; the distorted images are then traditionally corrected to meet our needs.

However, the benefit is still limited. After the image is corrected, the content is usually cropped or the edge distortion becomes even more serious, and when there are too many cameras of too many types, it is difficult to correct them one by one. Here we propose to use machine learning: we collect a large amount of vehicle data captured under various fisheye lenses and retrain the well-known object detection network YOLO to create a new detector, allowing the computer to directly identify the variously deformed vehicles under a fisheye lens.

In terms of tracking, we extend the real-time pedestrian tracking system Deep SORT for vehicle tracking. Since the original purpose of Deep SORT is to track pedestrians, the network model established by its algorithm is trained on pedestrian image datasets. We take the vehicles detected by YOLO, manually mark the same vehicle across different frames, and establish a vehicle Re-Identification (Re-ID) dataset for fisheye cameras to retrain the image feature extraction network proposed in Deep SORT. Finally, Deep SORT's POI appearance feature comparison method is used to match vehicles by calculating the cosine similarity between the vehicle feature vectors in each image, thereby improving Deep SORT's vehicle tracking performance and achieving the final goal of tracking vehicle motion on a fisheye camera.
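For illustration only, the following is a toy stand-in for a re-trained appearance feature extractor of the kind described above: a small convolutional network that maps a vehicle crop to a 128-dimensional unit-length embedding suitable for cosine comparison. The layer sizes and the 64x64 input resolution are assumptions made for this sketch, not the architecture used in the thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VehicleEmbeddingNet(nn.Module):
    """Toy appearance-embedding network: vehicle crop -> 128-D unit vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.AdaptiveAvgPool2d(1),                               # global average pool
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, x):                       # x: (N, 3, 64, 64) vehicle crops
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)   # unit length, ready for cosine matching

# Example: embed a batch of two random crops.
net = VehicleEmbeddingNet()
emb = net(torch.randn(2, 3, 64, 64))            # emb.shape == (2, 128)
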
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1: Introduction
1.1 Research Background and Motivation
1.2 Chapter Overview
Chapter 2: Related Work
2.1 Object Detection
2.2 Object Tracking
2.3 Applications of Vehicle Detection and Tracking
Chapter 3: Methodology
3.1 System Architecture
3.2 Training Data Construction
3.3 Vehicle Tracking System
3.4 Validation of Results
Chapter 4: Experimental Results
Chapter 5: Conclusion
References
[1] Jintao Li; Jin-Hui Tang; Liu Wu; Ri-Chang Hong; Sheng Tang; Yong-Dong Zhang. “Accurate estimation of human body orientation from RGB-D sensors.” IEEE Transactions on Cybernetics, 43(5), 1442-1452, 2013.

[2] Hai Yang; Licia Capra; Ouri Wolfson; Yu Zheng. “Urban computing: concepts, methodologies, and applications.” ACM Transactions on Intelligent Systems and Technology (TIST), 5(3), 38, 2014.

[3] Cheng Chen; Fei-Yue Wang; Jun-Ping Zhang; Kun-Feng Wang; Wei-Hua Lin; Xin Xu. “Data-driven intelligent transportation systems: A survey.” IEEE Transactions on Intelligent Transportation Systems, 12(4), 1624-1639, 2011.

[4] Ali Farhadi; Joseph Redmon; Ross Girshick; Santosh Divvala. “You Only Look Once: Unified, Real-Time Object Detection.” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

[5] Alex Bewley; Dietrich Paulus; Nicolai Wojke. “Simple Online and Realtime Tracking with a Deep Association Metric.” In Image Processing (ICIP), IEEE International Conference on (pp. 3645-3649), 2017.

[6] Alex Bewley; Ben Upcroft; Fabio Ramos; Zongyuan Ge; Lionel Ott. “Simple Online and Realtime Tracking.” In Image Processing (ICIP), IEEE International Conference on (pp. 3464-3468), 2016.

[7] Evan Shelhamer; Jonathan Long; Trevor Darrell. “Fully convolutional networks for semantic segmentation.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3431-3440), 2015.

[8] Bohyung Han; Seunghoon Hong; Hyeonwoo Noh. “Learning deconvolution network for semantic segmentation.” In Proceedings of the IEEE International Conference on Computer Vision (pp. 1520-1528), 2015.

[9] Jeff Donahue; Jitendra Malik; Trevor Darrell; Ross Girshick. “Rich feature hierarchies for accurate object detection and semantic segmentation.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580-587), 2014.

[10] Ross Girshick. “Fast R-CNN.” In Proceedings of the IEEE International Conference on Computer Vision (pp. 1440-1448), 2015.

[11] Kaiming He; Jian Sun; Ross Girshick; Shaoqing Ren. “Faster R-CNN: Towards real-time object detection with region proposal networks.” In Advances in Neural Information Processing Systems (pp. 91-99), 2015.

[12] Georgia Gkioxari; Kaiming He; Piotr Dollár; Ross Girshick. “Mask R-CNN.” In Computer Vision (ICCV), IEEE International Conference on (pp. 2980-2988), 2017.

[13] Piotr Dollár; Pietro Perona; Ron Appel; Serge Belongie. “Fast feature pyramids for object detection.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(8), 1532-1545, 2014.

[14] Rudolf E. Kalman. “A new approach to linear filtering and prediction problems.” Journal of Basic Engineering, 82(1), 35-45, 1960.

[15] Harold W. Kuhn. “The Hungarian method for the assignment problem.” Naval Research Logistics Quarterly, 2(1-2), 83-97, 1955.

[16] Fengwei Yu; Wenbo Li; Quanquan Li; Yu Liu; Xiaohua Shi; Junjie Yan. “POI: Multiple Object Tracking with High Performance Detection and Appearance Feature.” In European Conference on Computer Vision (pp. 36-42). Springer, Cham, 2016.

[17] Andrew Rabinovich; Christian Szegedy; Dragomir Anguelov; Dumitru Erhan; Pierre Sermanet; Scott Reed; Vincent Vanhoucke; Wei Liu; Yangqing Jia. “Going deeper with convolutions.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-9), 2015.

[18] Ying Wang. “A Novel Vehicle Tracking Algorithm Using Video Image Processing.” In 2018 International Conference on Virtual Reality and Intelligent Systems (ICVRIS) (pp. 5-8), 2018.

[19] Guangtao Cheng; Xue Chen. “A Vehicle Detection Approach Based on Multi-Features Fusion in the Fisheye Images.” In Computer Research and Development (ICCRD), IEEE International Conference on (Vol. 4, pp. 1-5), 2011.

[20] Hairong Qi; Jeff Price; Tim Gee; Wei Wang. “Real-Time Multi-Vehicle Tracking and Counting at Intersections from a Fisheye Camera.” In Applications of Computer Vision (WACV), IEEE Winter Conference on (pp. 17-24), 2016.

[21] Chi Su; Jingdong Wang; Liang Zheng; Shengjin Wang; Qi Tian; Yifan Sun; Zhi Bie. “Mars: A video benchmark for large-scale person re-identification.” In European Conference on Computer Vision (pp. 868-884). Springer, Cham, 2016.

[22] Anton Milan; Ian Reid; Konrad Schindler; Laura Leal-Taixé; Stefan Roth. “MOT16: A benchmark for multi-object tracking.” arXiv preprint arXiv:1603.00831, 2016.

[23] Anton Milan; Ian Reid; Konrad Schindler; Laura Leal-Taixé; Stefan Roth. “MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking.” arXiv preprint arXiv:1504.01942, 2015.

[24] Andreas Ess; Bastian Leibe; Luc Van Gool. “Depth and Appearance for Mobile Scene Analysis.” In Computer Vision (ICCV), IEEE 11th International Conference on (pp. 1-8), 2007.

[25] Jingdong Wang; Liyue Shen; Lu Tian; Liang Zheng; Shengjin Wang; Qi Tian. “Scalable person re-identification: A benchmark.” In Proceedings of the IEEE International Conference on Computer Vision (pp. 1116-1124), 2015.

[26] Huiyuan Fu; Huadong Ma; Wu Liu; Xinchen Liu. “Large-scale vehicle re-identification in urban surveillance videos.” In Multimedia and Expo (ICME), IEEE International Conference on (pp. 1-6), 2016.

[27] Huadong Ma; Tao Mei; Wu Liu; Xinchen Liu. “A Deep Learning-Based Approach to Progressive Vehicle Re-identification for Urban Surveillance.” In European Conference on Computer Vision (pp. 869-884). Springer, Cham, 2016.
Electronic full text: available from 2024-07-03; access is limited to the on-campus systems and IP range of the graduate's home school.