National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 謝旻宗 (XIE, MIN-ZONG)
Thesis Title (Chinese): 基於語義特徵與輪速計融合之自動泊車中全景影像同時定位與建圖之研究
Thesis Title (English): Semantic Visual SLAM with Wheel Odometry Fusion for Automated Valet Parking in Around View Systems
Advisor: 許志明 (HSU, CHIH-MING)
Oral Defense Committee: 周仁祥 (CHOU, JEN-HSIANG), 李明哲 (LEE, MING-CHE), 許志明 (HSU, CHIH-MING)
Oral Defense Date: 2024-07-26
Degree: Master's
Institution: National Taipei University of Technology (國立臺北科技大學)
Department: Master's Program in Mechatronic Integration, Department of Mechanical Engineering
Discipline: Engineering
Field of Study: Mechanical Engineering
Document Type: Academic thesis
Year of Publication: 2024
Graduation Academic Year: 112 (2023–2024)
Language: Chinese
Number of Pages: 104
Keywords (Chinese): 同時定位與地圖構建, 視覺, 語義學, 細化, 軌跡
Keywords (English): SLAM, Visualization, Semantics, Thinning, Trajectory

Table of Contents:

Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iii
List of Tables vii
List of Figures viii
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation 2
1.3 Contributions of This Thesis 4
1.4 Thesis Organization 6
Chapter 2 Literature Review 8
2.1 Development of Traditional V-SLAM Techniques 8
2.1.1 Camera Sensor Module 9
2.1.2 Front-End Module 10
2.1.3 Back-End Module 10
2.1.4 Loop-Closure Module 10
2.1.5 Mapping Module 11
2.1.6 Summary 11
2.2 Development of Semantic Visual SLAM in Parking Environments 13
2.2.1 Applications of Semantic Visual SLAM in Parking Environments 13
2.2.2 Summary 14
2.3 Around-View Image Feature Detection in Parking Environments 15
2.3.1 Image Semantic Segmentation 16
2.3.2 Traditional Image Preprocessing 16
2.3.3 Summary 17
Chapter 3 Methodology 19
3.1 AVM Image Feature Processing 21
3.1.1 Image Semantic Segmentation 21
3.1.2 Adaptive Binarization of Image Features 27
3.1.3 Skeletonization of Image Grid Features 28
3.2 Wheel Odometry Information Extraction 30
3.3 Data Fusion 31
3.3.1 Image Coordinates to Point-Cloud Coordinates 31
3.3.2 Wheel Odometry Coordinates to Point-Cloud Coordinates 32
3.3.3 Adaptive Coordinate Correction for Wheel Odometry 34
3.4 AVM-SLAM Algorithm Architecture 35
3.4.1 Scan and Submap Coordinate Transformation 38
3.4.2 Submaps 39
3.4.3 Scan Matching 41
3.4.4 Optimization Problem and Residuals 42
3.4.5 Loop-Closure Detection 44
3.5 Trajectory Evaluation Method 45
Chapter 4 Experimental Results 46
4.1 Experimental Equipment 46
4.2 Experimental Procedure 54
4.3 Differences Between Fused and Non-Fused AVM-SLAM 62
4.4 Image Scale Recovery and Verification 72
4.5 Trajectory Analysis of Experimental Results 77
4.5.1 Mapping Analysis 77
4.5.2 Localization-Only Analysis 83
4.6 Summary 91
Chapter 5 Conclusions and Future Work 94
5.1 Conclusions 94
5.2 Future Work 94
References 95
List of Symbols 99

References:

[1] J. Cheng, L. Zhang, Q. Chen, X. Hu, and J. Cai, "A review of visual SLAM methods for autonomous driving vehicles," Engineering Applications of Artificial Intelligence, vol. 114, 2022, p. 104992.
[2] X. Zou, C. Xiao, Y. Wen, et al., "Research status of VSLAM based on feature point method and direct method," Comput. Appl. Res., vol. 37, no. 5, 2020, pp. 1281–1291.
[3] C. Campos, R. Elvira, et al., "ORB-SLAM3: An accurate open-source library for visual, visual-inertial and multi-map SLAM," arXiv preprint arXiv:2007.11898, 2020.
[4] H. Liu, G. Zhang, and H. Bao, "A survey of monocular simultaneous localization and mapping," J. Comput.-Aided Des. Comput. Graph., vol. 28, no. 6, 2016, pp. 855–868.
[5] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, 2004, pp. 91–110.
[6] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Comput. Vis. Image Understand., vol. 110, no. 3, 2008, pp. 346–359.
[7] J. Pan, Y. Pang, X. Li, Y. Yuan, and D. Tao, "A fast feature extraction method," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, 2009, pp. 1797–1800.
[8] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2011.
[9] H. J. Chien, C. C. Chuang, C. Y. Chen, and R. Klette, "When to use what feature? SIFT, SURF, ORB, or A-KAZE features for monocular visual odometry," in Proc. IEEE Int. Conf. Image Vis. Comput. New Zealand (IVCNZ), 2016, pp. 1–6.
[10] N. Sunderhauf and P. Protzel, "Towards a robust back-end for pose graph SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2012, pp. 1254–1261.
[11] F. Liu, "Research on SLAM back end optimization algorithm," Intell. Comput. Appl., vol. 9, no. 6, 2019, pp. 68–72. (In Chinese)
[12] M. Liang, H. Min, and R. Luo, "Overview of simultaneous localization and map creation based on graph optimization," Robotics, no. 4, 2013, pp. 118–130.
[13] P. Newman and K. Ho, "SLAM-Loop closing with visually salient features," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2005.
[14] S. Simhon and G. Dudek, "A global topological map formed by local metric maps," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), vol. 3, 1998, pp. 1708–1714.
[15] F. Blochliger, M. Fehr, M. Dymczyk, T. Schneider, and R. Siegwart, "Topomap: Topological mapping and navigation based on visual SLAM maps," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2018, pp. 3818–3825.
[16] T. Qin, T. Chen, Y. Chen, and Q. Su, "AVP-SLAM: Semantic Visual Mapping and Localization for Autonomous Vehicles in the Parking Lot," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), 2020, pp. 5939–5945.
[17] Z. Xiang, A. Bao, and J. Su, "Hybrid Bird's-Eye Edge Based Semantic Visual SLAM for Automated Valet Parking," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2021, pp. 11546–11552.
[18] H. Zhao, et al., "ICNet for Real-Time Semantic Segmentation on High-Resolution Images," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 405–420.
[19] J. Xu, Z. Xiong, and S. P. Bhattacharyya, "PIDNet: A Real-Time Semantic Segmentation Network Inspired by PID Controllers," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2023, pp. 19529–19539.
[20] J. Canny, "A Computational Approach to Edge Detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, no. 6, 1986, pp. 679–698.
[21] M. Cheriet, N. Kharma, C. L. Liu, and C. Suen, "Thinning," in Character Recognition Systems: A Guide for Students and Practitioners. Wiley-IEEE Press, 2007.
[22] W. Hess, D. Kohler, H. Rapp, and D. Andor, "Real-time loop closure in 2D LIDAR SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2016, pp. 1271–1278.
[23] S. Borse, Y. Wang, Y. Zhang, and F. Porikli, "InverseForm: A Loss Function for Structured Boundary-Aware Segmentation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2021, pp. 5897–5907.
[24] J. Hou, et al., "SUPS: A Simulated Underground Parking Scenario Dataset for Autonomous Driving," in Proc. IEEE Int. Conf. Intell. Transp. Syst. (ITSC), 2022, pp. 2265–2271.
[25] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern., vol. 9, no. 1, 1979, pp. 62–66.
[26] Kvaser, "Kvaser USBcan Light 2xHS," 2020. Available: https://kvaser.com/product/kvaser-usbcan-light-2xhs/
[27] J. M. Bland and D. G. Altman, "Statistics notes: The odds ratio," BMJ, vol. 320, 2000, p. 1468.
[28] A. Nüchter, M. Bleier, J. Schauer, and P. Janotta, "Improving Google's Cartographer 3D Mapping by Continuous-Time SLAM," Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., vol. XLII-2/W3, 2017, pp. 543–549.
[29] M. Grupp, "EVO," 2020. Available: https://github.com/MichaelGrupp/evo
[30] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, "LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), 2020, pp. 5135–5143.
[31] AliExpress, "AR0143," 2020. Available: https://www.aliexpress.com/item/1005001798329168.html
[32] Einfochips, "CV22," 2020. Available: https://www.einfochips.com/ambarella/
[33] NXP, "S32K," 2020. Available: https://www.nxp.com/products/processors-and-microcontrollers/s32-automotive-platform/s32k-auto-general-purpose-mcus:S32K-MCUS
[34] Ouster, "Ouster OS1-128 LiDAR," 2020. Available: https://ouster.com/insights/blog/introducing-the-os-1-128-lidar-sensor
[35] MicroStrain, "3DM-GX5-IMU," 2020. Available: https://www.microstrain.com/inertial-sensors/3dm-gx5-10
[36] VBox Automotive, "RTK Base Station," 2020. Available: https://www.vboxautomotive.co.uk/index.php/en/products/rtk-solutions/rtk-base-station
[37] VBox Automotive, "VBOX 3i 100 Hz GNSS Data Logger," 2020. Available: https://www.vboxautomotive.co.uk/index.php/en/products/dataloggers/vb3i
[38] VBox Automotive, "VBOX CAN Hub," 2020. Available: https://www.vboxautomotive.co.uk/index.php/en/vbox-can-hub
[39] VBox Automotive, "RLACS324/RLACS320," 2020. Available: https://www.vboxautomotive.co.uk/index.php/en/antennas-mounts
[40] VBox Automotive, "IMU05," 2020. Available: https://www.vboxautomotive.co.uk/index.php/en/inertial-measurement-unit

Electronic full text: publicly available online from 2029-08-07.