
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Author: 羅秋馨
Title: 無人飛行載具空拍影像目標物件即時自動辨識與定位系統之研究
Title (English): A Study of Unmanned Aerial Vehicle Real-time Automatic Objects Identification and Positioning System
Advisor: 蕭漢威
Advisor (English): HSIAO, HAN-WEI
Committee members: 陳灯能, 楊新章
Committee members (English): CHEN, DENG-NENG; YANG, HSIN-CHANG
Oral defense date: 2020-07-27
Degree: Master's
Institution: National University of Kaohsiung
Department: Master's Program, Department of Information Management
Discipline: Computer Science
Academic field: General Computer Science
Thesis type: Academic thesis
Year of publication: 2020
Graduation academic year: 108 (2019–2020)
Language: Chinese
Pages: 58
Keywords (Chinese): 空拍影像辨識, 空拍物件定位, 無人載具, 深度學習
Keywords (English): Aerial Photograph Object Detection, Aerial Photograph Object Positioning, Unmanned Aerial Vehicle, Deep Learning
Usage statistics:
  • Cited by: 2
  • Views: 95
  • Downloads: 31
  • Bookmarks: 0
In recent years, the core technologies of unmanned aerial vehicles (UAVs) have advanced rapidly while costs have fallen. On the civil side, camera-equipped UAVs are now widely used for environmental monitoring, aerial surveying, and similar applications. Their mobility and wide field of view let UAVs fly over ground obstacles and search for targets quickly. At present, however, they are mostly flown manually, either within the ground operator's line of sight or remotely piloted through a video feed. Equipping a UAV with a global navigation satellite system (GNSS) changes this mode of operation: flight paths can be preset so the aircraft navigates autonomously, enabling applications such as automatic environmental monitoring and target search over wide areas. The UAV can further use its own GNSS coordinates to locate targets, so camera-equipped UAVs can also serve in disaster relief and the exploration of hazardous areas.
This study proposes an architecture for real-time automatic identification, search, and positioning of targets. A model that recognizes specific classes of targets is trained with deep learning and deployed on a UAV. During flight, the UAV's onboard computing unit uses this object identification model to detect targets in the live video quickly, and estimates each target's GNSS coordinates from the UAV's flight attitude. Together these form a system that automatically identifies and positions targets, letting the UAV report real-time target coordinates while cruising, so that ground personnel can assess the situation and make decisions and take action in less time.
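The coordinate-estimation step described above can be sketched as follows. This is a minimal illustration, not the thesis's actual formulation: it assumes a nadir-pointing (straight-down) camera, flat terrain, a known field of view, and a heading measured clockwise from true north; the function and parameter names are hypothetical.

```python
import math

EARTH_M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def locate_target(uav_lat, uav_lon, altitude_m, yaw_deg,
                  px, py, img_w, img_h, hfov_deg, vfov_deg):
    """Estimate the GNSS coordinate of a detected object.

    (px, py) is the bounding-box centre in pixels; yaw_deg is the UAV
    heading clockwise from true north.
    """
    # Ground footprint of the image at the current altitude.
    ground_w = 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0)
    ground_h = 2.0 * altitude_m * math.tan(math.radians(vfov_deg) / 2.0)

    # Offset of the target from the image centre, in metres
    # (dx: right of the aircraft, dy: ahead of the aircraft).
    dx = (px - img_w / 2.0) / img_w * ground_w
    dy = (img_h / 2.0 - py) / img_h * ground_h

    # Rotate the camera-frame offset into north/east by the heading.
    yaw = math.radians(yaw_deg)
    north = dy * math.cos(yaw) - dx * math.sin(yaw)
    east = dy * math.sin(yaw) + dx * math.cos(yaw)

    # Convert the metre offsets to degrees of latitude/longitude.
    lat = uav_lat + north / EARTH_M_PER_DEG_LAT
    lon = uav_lon + east / (EARTH_M_PER_DEG_LAT * math.cos(math.radians(uav_lat)))
    return lat, lon
```

A target at the image centre maps back to the UAV's own coordinate; offsets grow linearly with altitude, which is why the thesis's accuracy depends on the flight attitude estimate.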
Because of improved core technologies and reduced cost, civil unmanned aerial vehicles (UAVs) equipped with cameras have been used in fields such as environmental monitoring and aerial surveying. Their wide view and mobility let UAVs fly over ground obstacles and find targets in a short time. Typically a UAV is flown in manual mode, which requires it to stay within the ground operator's line of sight or to be piloted remotely through a video stream. A more advanced approach lets the UAV fly a preset path autonomously using GNSS, which enables applications such as target searching and environmental monitoring over wider areas. Combining automatic searching with target positioning, UAVs can also be used for disaster relief and exploration.
We propose a real-time object identification and positioning system with two parts: an identification process, in which a deep-learning model is trained to detect specific classes of objects in each frame of a real-time video, and a positioning process, which combines the object's location in the frame with the UAV's flight attitude to compute the target's GNSS coordinates immediately. With the object identification model and the target positioning feature, a UAV can return real-time coordinates of the target and help ground crews make decisions in a short time.
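Detection models of the kind described here return many overlapping candidate boxes per frame, and detectors in the YOLO family typically finish with a non-maximum suppression (NMS) step that keeps one box per object. A minimal sketch of that standard post-processing step, not the thesis's specific implementation (box format `(x1, y1, x2, y2)` and the threshold value are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    detections: list of (score, box) pairs. Keeps the highest-scoring
    boxes and drops lower-scoring boxes that overlap a kept one.
    """
    keep = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, kept_box) < iou_thresh for _, kept_box in keep):
            keep.append((score, box))
    return keep
```

Each surviving box's centre pixel is then what the positioning process converts into a GNSS coordinate.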
Table of Contents
List of Tables
List of Figures
Chapter 1  Introduction
  1.1 Research Background
  1.2 Research Motivation
  1.3 Research Objectives
Chapter 2  Literature Review
  2.1 UAVs and Computer Vision
  2.2 Object Detection
  2.3 Photogrammetry
Chapter 3  Methodology
  3.1 System Architecture
  3.2 Object Detection Training Process
  3.3 Object Positioning Process
Chapter 4  Experiments and Evaluation
  4.1 Experimental Environment and Procedure
  4.2 Results and Evaluation
Chapter 5  Conclusions and Future Directions
References

