Author: 劉益銓 (Yi-Quan Liu)
Title: 應用全景攝影於交通事故現場紀錄系統之研究 (A Study on Traffic Accident Scenes Recording System Based on Panoramic Camera)
Advisor: 戴文凱 (Wen-Kai Tai)
Committee members: 范欽雄, 楊致芳
Oral defense date: 2019-07-26
Degree: Master's
Institution: 國立臺灣科技大學 (National Taiwan University of Science and Technology)
Department: 資訊工程系 (Computer Science and Information Engineering)
Discipline: Engineering
Subfield: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2019
Graduation academic year: 107
Language: Chinese
Number of pages: 60
Keywords (Chinese): 全景圖, 影像處理, 影像拼接, 交通事故現場圖繪製
Keywords (English): Panorama, Image Processing, Image Stitching, Traffic Accident Scene Drawing
Times cited: 0
Views: 69
Downloads: 16
Bookmarked: 0
Many traffic accidents occur every day, most of them A3 and A4 crashes with no casualties. Because these happen so frequently, the police must spend a considerable amount of time almost every day handling them. A way of quickly recording an accident scene would save officers a great deal of time and leave them freer to deal with more urgent situations. Moreover, if the parties involved or accident investigators could also make the record, vehicles could be moved before the police arrive, preventing traffic jams and saving everyone's waiting time, while the extra self-preserved evidence would better protect each party's rights.

Because panoramic cameras have become common and lightweight, this thesis proposes a distinctive recording method that uses panoramic photography to create a traffic accident floor plan and a panoramic image tour. The system uses a panoramic camera operated from a smartphone to quickly produce 360-degree panoramas. Using GPS coordinates and a user-set north orientation, panoramas shot at several different positions are presented as a panoramic tour similar to Google Street View. These panoramas can then be stitched into a single overhead floor plan of the accident scene, on which the system's drawing tools measure straight-line distances and draw curves indicating vehicle travel directions. Given a reference scale length, the system exploits the small ground deformation on the plan, which permits direct measurement, to measure the distance between the point of impact and other positions. The drawn information, together with the floor plan and the panoramic tour, can be saved for later review. The system also provides a function for extracting still panoramic images from panoramic video, and we propose operating guidelines that speed up shooting and raise the stitching success rate.
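The north-alignment step described above can be viewed as a horizontal rotation of an equirectangular panorama: a heading of d degrees maps to column d/360 × width, so aligning to a user-set north is a circular column shift. A minimal sketch (not code from the thesis; it operates on a single pixel row represented as a Python list):

```python
def align_to_north(pano_row, north_deg):
    """Circularly shift one row of an equirectangular panorama so that the
    user-designated north heading (in degrees) moves to column 0.
    A heading of d degrees corresponds to column d/360 * width."""
    width = len(pano_row)
    shift = round(north_deg / 360 * width) % width
    return pano_row[shift:] + pano_row[:shift]
```

Applying the same shift to every row orients the whole panorama, so scenes shot with arbitrary camera headings can be presented consistently in the tour.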

The main contributions of this thesis are: a new method of recording traffic accidents; panoramic scene-to-scene navigation based on GPS and a user-designated north direction; a novel way of generating a floor plan; line drawing and distance measurement on the stitched floor plan; automatic extraction of panoramas from video; a systematic operating procedure that is fast and easy to carry out successfully; and a heuristic method for choosing panorama shooting positions.
There are many traffic accidents every year, most of which involve property damage only. Because they occur so frequently, the police must spend a considerable amount of time every day dealing with them. A way to record an accident scene quickly would save the police a great deal of time and free them to handle more urgent situations. If the parties involved or expert witnesses could also use it, cars could be removed before the police arrive at the scene, preventing traffic jams, saving everyone's waiting time, and providing additional self-preserved evidence to protect their rights.

Due to the ubiquity and light weight of today's panoramic cameras, this paper proposes a special way of recording scenes, using panoramic photography to create traffic accident floor plans and panoramic navigation. Our system uses a panoramic camera operated with a smartphone to quickly generate 360-degree panoramic views. With the GPS coordinates and a user-set north orientation, the scenes taken at several different locations are presented as a panoramic tour like Google Street View. The system can also stitch a series of scenes into an overhead view of the accident scene, on which the user can draw information such as distances and the driving directions of cars. After a reference scale length is given, the system measures the distance between the impact point and other positions, exploiting the fact that ground deformation on the plan is small enough for direct measurement. Finally, the floor plan and the panorama navigation can be saved for later viewing. We also provide the ability to extract still panoramic images from panoramic video, and propose operational process specifications that speed up shooting and increase the success rate of stitching.
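The reference-scale measurement described above amounts to a pixel-to-meter conversion under the assumption that the stitched plan has locally uniform scale. A minimal sketch with hypothetical names (not the thesis's own code):

```python
import math

def measure_distance(p1, p2, ref_len_px, ref_len_m):
    """Real-world distance between two points on the stitched floor plan,
    given a reference segment whose length is known both in pixels and
    meters. Assumes ground deformation is small, so a single scale
    factor applies across the plan."""
    meters_per_px = ref_len_m / ref_len_px
    d_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return d_px * meters_per_px
```

For example, with a 1-meter reference marker spanning 100 pixels, two points 500 pixels apart on the plan are measured as 5 meters apart.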

The main contributions are: a new way of recording traffic accidents; panoramic scene navigation using GPS and a user-designated north direction; a novel way of generating the floor plan; line drawing and distance measurement on the stitched floor plan; automatic extraction of panoramas from video; a set of easy and fast systematic operational process specifications; and a heuristic way to select shooting positions.
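Scene-to-scene navigation from GPS coordinates requires, at minimum, the heading from one shooting position to the next. A sketch using the standard great-circle initial-bearing formula (an assumption about how such navigation could be computed, not code confirmed by the thesis):

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, 90 = east) from GPS point 1
    to GPS point 2, via the standard great-circle formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360) % 360
```

Combined with each panorama's user-set north offset, such a bearing determines where in the current panorama a navigation arrow to the next scene should be placed.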
Chinese Abstract . . . III
Abstract . . . IV
Acknowledgements . . . V
Table of Contents . . . VI
List of Figures . . . VIII
List of Tables . . . IX
List of Algorithms . . . X
1 Introduction . . . 1
1.1 Background and Motivation . . . 1
1.2 Research Objectives . . . 1
1.3 Method Overview . . . 1
1.4 Contributions . . . 2
1.5 Thesis Organization . . . 3
2 Related Work . . . 4
2.1 Vision-Based Simultaneous Localization and Mapping . . . 4
2.2 Image Stitching . . . 7
3 Method . . . 12
3.1 System Architecture . . . 12
3.1.1 Server-Client Communication Flow . . . 12
3.2 System Features . . . 13
3.2.1 Panoramic Navigation . . . 13
3.2.2 Floor Plan Generation . . . 13
3.2.3 Video Frame Extraction . . . 19
3.2.4 Line Drawing and Measurement . . . 21
3.3 Operating Procedure Guidelines . . . 21
3.3.1 Operating Steps . . . 21
3.3.2 Cautions and Rules . . . 24
4 Experimental Results and Analysis . . . 26
4.1 Measurement Accuracy on the Floor Plan . . . 26
4.2 Effect of Video Extraction Parameters . . . 29
4.3 Choice of Field-of-View Size . . . 29
4.4 Relationship Between Shot Count and Position Selection . . . 36
4.5 Validation of the Operating Procedure . . . 43
4.5.1 Handheld Tilt Variation . . . 43
4.5.2 Time Cost of the Operating Procedure . . . 44
5 Conclusions and Future Work . . . 46
5.1 Contributions and Conclusions . . . 46
5.2 Limitations and Future Directions . . . 46
References . . . 47