
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Researcher: 蘇渝閔
Researcher (English): SU, YU-MIN
Thesis Title: 無人機視覺定位方法
Thesis Title (English): Vision-Based Localization for UAV
Advisor: 張哲誠
Advisor (English): Chang, Che-Cheng
Oral Defense Committee: 林哲維、蔡汯嶧
Oral Defense Committee (English): Lin, Jhe-Wei; Tsai, Hung-Yih
Oral Defense Date: 2024-06-25
Degree: Master's
Institution: 逢甲大學 (Feng Chia University)
Department: 資訊工程學系 (Information Engineering)
Discipline: 工程學門 (Engineering)
Academic Field: 電資工程學類 (Electrical and Computer Engineering)
Thesis Type: Academic thesis
Year of Publication: 2024
Graduation Academic Year: 112 (2023–2024)
Language: Chinese
Number of Pages: 58
Keywords (Chinese): 機器學習、人工智慧、無人機定位、機器視覺、物聯網
Keywords (English): Machine Learning, Artificial Intelligence, Unmanned Aerial Vehicle Positioning, Machine Vision, Internet of Things
Usage statistics:
  • Cited by: 0
  • Views: 17
  • Rating:
  • Downloads: 0
  • Bookmarked: 0
The Unmanned Aerial Vehicle (UAV) positioning system is one of a UAV's basic functions. Positioning methods based on the Global Navigation Satellite System (GNSS) depend heavily on external signals and can therefore be inaccurate or fail outright in some situations. This thesis therefore proposes visual positioning for UAVs using the images produced by an onboard camera.
The thesis formulates UAV positioning as a classification problem. Because UAVs have limited computing power, we build on the lightweight MobileNet V2 model, adding extra 1×1 convolution kernels and adjusting the activation functions and batch normalization. These modifications give the model faster convergence, higher accuracy, and more stable accuracy gains, and they also make it more robust to noise that simulates a lower-quality camera. Accuracy improves across a range of simulated conditions, with an 11% gain in the most demanding simulated scenario.
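
To make the classification-based approach above concrete, the following is a minimal TensorFlow/Keras sketch of one plausible way to place an added 1×1 convolution, batch normalization, and an alternative activation (ELU) on top of a MobileNetV2 backbone. The class count, layer widths, and the specific choice of ELU are illustrative assumptions for this sketch, not the exact architecture or hyperparameters used in the thesis.

# Illustrative sketch only: a MobileNetV2 backbone extended with a 1x1
# convolution, batch normalization, and an ELU activation for
# classification-based positioning. NUM_CLASSES and all layer choices
# are assumptions, not the thesis's exact design.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 100  # hypothetical number of position classes (e.g., map cells)

def build_positioning_model(input_shape=(224, 224, 3)):
    # Lightweight backbone, chosen for limited on-board compute.
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)

    x = backbone.output
    # Extra 1x1 convolution, followed by batch normalization and ELU,
    # mirroring the kind of modification described in the abstract.
    x = layers.Conv2D(256, kernel_size=1, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ELU()(x)

    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs=backbone.input, outputs=outputs)

model = build_positioning_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

In this framing, each output class would correspond to something like a map cell or region, so the predicted class index can be mapped back to an approximate geographic location.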

Acknowledgements i
Abstract (Chinese) ii
Abstract (English) iii
Table of Contents iv
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Motivation 2
1.3 Research Objectives 3
1.4 Thesis Organization 3
Chapter 2 Literature Review 4
2.1 Coordinate Systems 4
2.2 Geographic Information Systems 7
2.3 ResNet 8
2.3.1 Residual Learning 9
2.3.2 Basic Architecture of ResNet 10
2.4 MobileNet 11
2.4.1 MobileNet V1 12
2.4.2 Computational Complexity 13
2.4.3 MobileNet V2 15
Chapter 3 Experimental Methods 17
3.1 Environment 17
3.1.1 Hardware and Package Versions 17
3.1.2 QGIS 18
3.2 Dataset 20
3.2.1 Original Dataset 21
3.2.2 Training Set 24
3.2.3 Testing Set 25
3.3 Model Architecture 27
Chapter 4 Experimental Results 32
4.1 Comparison of Different Activation Functions 32
4.2 Model Training 38
4.3 Comparison with MobileNet V2 40
4.3.1 30 Epochs 42
4.3.2 Best-Model Comparison 47
4.4 Runtime Efficiency 52
Chapter 5 Conclusion 54
5.1 Main Findings and Contributions 54
5.2 Future Research Directions 54
5.3 Conclusion 55
References 56




Electronic Full Text (available online from 2025-07-31)