Author: 鄭朝安
Author (English): Chao-An Jhong
Thesis Title: 車用環景監視系統的實現
Thesis Title (English): Implementation of Vehicle Around View Monitoring (AVM) System
Advisor: 廖珗洲
Advisor (English): Hsien-Chou Liao
Committee Members: 楊朝成, 黃永發, 廖珗洲
Committee Members (English): Chou-Chen Yang, Yung-Fa Huang, Hsien-Chou Liao
Oral Defense Date: 2014-06-11
Degree: Master's
Institution: Chaoyang University of Technology (朝陽科技大學)
Department: Department of Computer Science and Information Engineering (資訊工程系)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Publication Year: 2014
Graduation Academic Year: 102
Language: Chinese
Number of Pages: 45
Keywords (Chinese): 影像拼接, 影像融合, 環景監視系統, 鳥瞰圖
Keywords (English): image stitching, image fusion, around view monitoring system, bird's-eye view
Usage statistics:
  • Cited by: 2
  • Views: 483
  • Downloads: 77
  • Bookmarked: 0
Abstract (translated from the Chinese): Vehicle-mounted cameras are now very common, and the around view monitoring (AVM) system is one of their important applications: by constructing a panoramic bird's-eye image, it addresses the blind-spot areas around the vehicle. Unlike the AVM systems implemented in previous studies, which produce visible discontinuities at the seams between images from adjacent cameras, this study implements an AVM system that overcomes this problem. The method first applies Harris corner detection to the images from the different cameras, then uses the maximum correlation rule and the RANdom SAmple Consensus (RANSAC) algorithm to establish correspondences between the feature points of two images, and then computes a homography matrix, so that multiple images can be stitched into a panoramic image, which is finally transformed into a bird's-eye view. Experiments measuring the maximum FPS (frames per second) achievable by this pipeline at different image resolutions show an FPS of about 2.4 to 11.7; in the future a real-time AVM system could be realized in hardware.
Abstract (English): Cameras are now widely used in vehicles, and the around view monitoring (AVM) system is one of the emerging applications. An around view of the vehicle can solve the blind-spot problem. In previous AVM systems, there is a visible boundary between the images of two adjacent cameras. Therefore, an implementation of an AVM system is proposed to overcome this problem. First, the feature points of all images are detected using the Harris corner detection method. Then, the maximum correlation rule and the RANdom SAmple Consensus (RANSAC) method are used to establish the correspondence between two sets of feature points. A homography matrix is then estimated and used to stitch two or more images into a single large image, which is finally transformed into a bird's-eye view. Various image resolutions are used in the experimental study; the maximum FPS (frames per second) ranges from about 2.4 to 11.7. A real-time AVM system with 30 FPS could be realized in hardware in the future.
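The core of the pipeline described in the abstract — fitting a homography to matched feature points while rejecting bad matches with RANSAC — can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis's actual implementation: the function names (`homography_dlt`, `ransac_homography`) and parameter values are our own, and a real AVM system would first obtain the point correspondences from Harris corners and the maximum correlation rule, then warp the camera frames with the estimated matrix.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H mapping src -> dst (Nx2 arrays, N >= 4)
    via the Direct Linear Transform: stack two equations per point pair
    and take the SVD null-space vector as the solution."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2,2] == 1

def project(H, pts):
    """Apply homography H to Nx2 points (homogeneous divide)."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    """RANSAC: repeatedly fit H to 4 random correspondences, keep the
    model with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh  # reprojection error in pixels
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers
```

With clean correspondences plus a handful of gross outliers, the outliers are excluded and the homography is recovered; the stitching step then amounts to warping one camera's frame into the other's coordinate system with the estimated matrix, and the bird's-eye view is a further homography chosen to map the ground plane to a top-down grid.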
Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
List of Tables VI
List of Figures VII
Chapter 1 Introduction 1
Chapter 2 Literature Review 3
2.1 Image Stitching 3
2.2 Bird's-Eye View Transformation 6
2.3 Around View Monitoring Systems 7
Chapter 3 System Design and Implementation 12
3.1 Feature Point Analysis 13
3.1.1 Harris Corner Detection 15
3.2 Feature Matching 18
3.2.1 Maximum Correlation Rule 18
3.2.2 RANSAC Algorithm 19
3.3 Image Transformation 21
3.4 Stitching 25
3.5 Bird's-Eye View Conversion 28
Chapter 4 Experimental Analysis 29
4.1 Experimental Scene and Camera Setup 29
4.2 Prototype System Design 30
4.3 Experimental Results 33
Chapter 5 Conclusions and Future Work 41
References 43

List of Tables
Table 1: FPS comparison for Sample 1 across different resolutions and camera counts 38

List of Figures
Figure 1: Various street view vehicles designed by Google 4
Figure 2: Google Street View stitched image 4
Figure 3: AutoStitch image stitching technique 5
Figure 4: Cylindrical-projection panorama of D. Sim and Y. Kim 6
Figure 5: Bird's-eye view around a car-like robot 7
Figure 6: Bird's-eye image from the system of H. G. Jung et al. 7
Figure 7: Bird's-eye image built from a single omnidirectional camera 8
Figure 8: Bird's-eye image from the system of T. Ehlgen and T. Pajdla 9
Figure 9: Image distortion in the system of T. Ehlgen and T. Pajdla 10
Figure 10: Fujitsu around view imaging system 11
Figure 11: Results of five corner detection methods 14
Figure 12: Classification of corners, edges, and flat regions by the correlation matrix M 17
Figure 13: Harris detection result on a source image 18
Figure 14: Result after applying the maximum correlation rule 19
Figure 15: Illustration of RANSAC 20
Figure 16: Result after RANSAC processing 21
Figure 17: Image transformation 22
Figure 18: Preliminary stitching result 24
Figure 19: Preliminary stitching error 25
Figure 20: Cylindrical coordinate transformation 27
Figure 21: Final stitching result 28
Figure 22: Experimental setup and scene 30
Figure 23: Prototype system interface 31
Figure 24: Diagram of thread output ordering 32
Figure 25: Diagram of the improved thread output ordering 32
Figure 26: Panoramas and bird's-eye views of Samples 1 and 2 33
Figure 27: Consecutive panoramic frames of Sample 2 35
Figure 28: Consecutive panoramic bird's-eye frames of Sample 2 35
Figure 29: Bird's-eye view of the automatic parking system of C. Wang et al. 36
Figure 30: Errors arising in consecutive frames 37
Figure 31: Line chart for different resolutions 39
Figure 32: Thread timing comparison in the prototype system 40


[1] V. Morellas, P. Tsiamyrtzis, and S. Harp, “Urban surveillance systems: from the laboratory to the commercial world,” Proceedings of the IEEE, Vol. 89, Issue 10, pp. 1478-1497, 2001.
[2] C. S. Regazzoni and G. L. Foresti, “Video processing and communications in real-time surveillance,” Real-Time Imaging, pp. 381-388, 2001.
[3] C. Regazzoni, V. Ramesh, and G. L. Foresti, “Special issue on video communications, processing, and understanding for third generation surveillance systems,” Proceedings of the IEEE, Vol. 89, Issue 10, pp. 1355-1367, 2001.
[4] B. Rieger and H. Rode, “Digital image recording for court-related purposes,” Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 262-279, 1999.
[5] T. Boult, R. J. Micheals, X. Gao, and M. Eckmann, “Into the woods: visual surveillance of noncooperative and camouflaged targets in complex outdoor settings,” Proceedings of the IEEE, Vol. 89, Issue 10, pp. 1382-1402, 2001.
[6] N. Qin, D. Song, and K. Goldberg, “Aligning windows of live video from an imprecise pan-tilt-zoom robotic camera into a remote panoramic display,” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2006), pp. 3429-3436, 2006.
[7] C. Wang, H. Zhang, M. Yang, X. Wang, L. Ye, and C. Guo, “Automatic parking based on bird's-eye view image,” Advances in Mechanical Engineering, Hindawi Publishing Corporation, Article ID 847406, 2014.
[8] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, Issue 10, pp. 1615-1630, Oct. 2005.
[9] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, Vol. 24, No. 6, pp. 381-395, 1981.
[10] E. Vincent and R. Laganiere, “Detecting planar homographies in an image pair,” Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis, Pula, Croatia, pp. 182-187, June 2001.
[11] S. Hu, Y. Hu, Z. Chen, and P. Jiang, “Feature-based image automatic mosaicing algorithm,” Proceedings of the 6th World Congress on Intelligent Control and Automation, pp. 10361-10364, 2006.
[12] D. Sim and Y. Kim, “Detection and compression of moving objects based on new panoramic image modeling,” Image and Vision Computing, Vol. 27, Issue 10, pp. 1527-1539, 2009.
[13] J. M. Collado, C. Hilario, A. d. l. Escalera, and J. M. Armingol, “Adaptative road lanes detection and classification,” Springer-Verlag Berlin Heidelberg, pp. 1151-1162, 2006.
[14] R. B. Yadav, N. K. Nishchal, A. K. Gupta, and V. R. Rastogi, “Retrieval and classification of shape-based objects using Fourier, generic Fourier, and wavelet-Fourier descriptors technique: A comparative study,” Optics and Lasers in Engineering, Vol. 45, Issue 6, pp. 695-708, 2007.
[15] L. Martinez-Fonte, S. Gautama, and W. Philips, “An empirical study on corner detection to extract buildings in very high resolution satellite images,” Proceedings of ProRisc, pp. 288-293, 2004.
[16] J. P. Eakins, K. J. Riley, and J. D. Edwards, “Shape feature matching for trademark image retrieval,” Image and Video Retrieval, Second International Conference (CIVR 2003), Urbana-Champaign, IL, USA, July 24-25, pp. 28-38, 2003.
[17] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, Vol. 24, Issue 6, pp. 381-395, 1981.
[18] http://en.wikipedia.org/wiki/RANSAC
[19] R. Sukthankar, R. G. Stockton, and M. D. Mullin, “Automatic keystone correction for camera-assisted presentation interfaces,” Proceedings of the International Conference on Multimedia Interfaces, pp. 601-714, 2000.
[20] http://maps.google.com.tw/intl/zh-TW/maps/about/behind-the-scenes/streetview/
[21] M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” International Journal of Computer Vision, Vol. 74, Issue 1, pp. 59-73, 2007.
[22] H. G. Jung, D. S. Kim, P. J. Yoon, and J. Kim, “Parking slot markings recognition for automatic parking assist system,” IEEE Intelligent Vehicles Symposium, Tokyo, Japan, June 13-15, pp. 106-113, 2006.
[23] T. Gandhi and M. M. Trivedi, “Motion based vehicle surround analysis using an omni-directional camera,” IEEE Intelligent Vehicles Symposium, June 14-17, pp. 560-565, 2004.
[24] T. Ehlgen and T. Pajdla, “Monitoring surrounding areas of truck-trailer combinations,” Proceedings of the 5th International Conference on Computer Vision Systems, Bielefeld, Germany, March 21-24, 2007.
