Researcher: 莊定為
Researcher (English): Ting-Wei Zhuang
Thesis title: 基於機器學習的新穎雷達定位技術
Thesis title (English): A NOVEL LEARNING-BASED LIDAR LOCALIZATION ALGORITHM
Advisor: 花凱龍
Advisor (English): Kai-Lung Hua
Oral defense committee: 花凱龍, 陳永耀, 楊朝龍, 陸敬互, 簡士哲
Oral defense committee (English): Kai-Lung Hua, Yung-Yao Chen, Chao-Lung Yang, Ching-Hu Lu, Shih-Che Chien
Oral defense date: 2019-07-31
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis type: Academic thesis
Publication year: 2019
Graduation academic year: 107 (ROC calendar)
Language: English
Pages: 42
Keywords (Chinese): 雷達定位, 深度學習, 時序卷積神經網路, 深度可分離卷積
Keywords (English): LiDAR localization, deep learning, temporal convolutional neural network, depthwise separable convolution
Usage statistics:
  • Cited by: 0
  • Views: 45
  • Downloads: 0
  • Added to bibliography collections: 0
Abstract:
In recent years, autonomous self-driving vehicles have become a key development area for many countries worldwide, and improvements in vehicle safety have a clear impact on reducing traffic accidents. Self-driving systems have advanced rapidly, and they must localize the vehicle's position accurately whether it is day or night. Our novel learning-based LiDAR localization method performs localization using deep learning techniques and the information obtained by a LiDAR sensor, without any assistance from camera images. We first identify representative feature key points in the collected LiDAR point cloud; for each key point we gather deep-learning feature information from the 64 surrounding points, then obtain displacement probabilities through a three-dimensional convolutional network, and finally integrate information over time to produce the final localization result. To increase the speed of our network, we use a depthwise separable convolution structure to reduce the computational cost of the three-dimensional convolutions, and we replace the recurrent neural networks traditionally used for sequential data with a temporal convolutional network to achieve fast computation. Our experiments show that our network performs well for deep-learning-based LiDAR localization.
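As a minimal sketch of the key-point neighborhood step described in the abstract (gathering the 64 points nearest to each selected key point before feature extraction), the snippet below uses a k-d tree to build fixed-size, key-point-centred patches. The function name, the use of SciPy, and the random toy data are illustrative assumptions, not the thesis's actual implementation.

import numpy as np
from scipy.spatial import cKDTree

def gather_keypoint_neighborhoods(cloud, keypoints, k=64):
    # cloud: (N, 3) online LiDAR point cloud; keypoints: (M, 3) selected key points.
    # Returns (M, k, 3) patches, each centred on its key point, giving a
    # fixed-size input for a point-wise feature network.
    tree = cKDTree(cloud)
    _, idx = tree.query(keypoints, k=k)   # indices of the k nearest neighbours
    patches = cloud[idx]                  # (M, k, 3)
    return patches - keypoints[:, None, :]

# Toy usage: 10,000 random points, 128 key points sampled from the cloud.
cloud = np.random.rand(10000, 3).astype(np.float32)
keys = cloud[np.random.choice(len(cloud), 128, replace=False)]
patches = gather_keypoint_neighborhoods(cloud, keys)  # shape (128, 64, 3)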
Abstract (English):
Self-driving systems need to localize the vehicle's position with high accuracy regardless of whether it is day or night. Because ordinary cameras are sensitive to lighting conditions, we cannot rely on them to sense the surrounding environment. One alternative to camera images is a light detection and ranging (LiDAR) sensor, which generates a three-dimensional point cloud in which each point encodes a distance measured from the sensor. In this paper, we propose a novel LiDAR localization method that takes the three-dimensional point clouds generated by the LiDAR, a pre-built map, and a predicted pose as inputs and achieves centimeter-level localization accuracy. Our approach first selects a number of points from the online point cloud as key points. We then extract learned features with convolutional neural networks and train these networks to perform LiDAR localization. Our proposed method achieves significant speed improvements over prior state-of-the-art methods.
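The abstracts attribute part of the network's speed to using a depthwise separable structure for the three-dimensional convolutions. A minimal PyTorch sketch of that building block is shown below; the class name and layer sizes are assumptions for illustration and do not reproduce the network described in the thesis.

import torch.nn as nn

class SeparableConv3d(nn.Module):
    # Depthwise separable 3D convolution: a per-channel (depthwise) 3D
    # convolution followed by a 1x1x1 (pointwise) convolution that mixes
    # channels, replacing one dense Conv3d with two much cheaper ones.
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_ch makes each depthwise filter see only one input channel.
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter-count comparison against a dense 3D convolution.
dense = nn.Conv3d(16, 32, kernel_size=3, padding=1)
separable = SeparableConv3d(16, 32)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(separable))  # roughly 13.9k vs 1.0k parameters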
Abstract in Chinese . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Abstract in English . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Related Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3 PROPOSED APPROACH . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.1 Key Point Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Depthwise Separable Convolution . . . . . . . . . . . . . . . . . . . . . 9
3.3 Probability Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.4 Temporal Convolutional Network . . . . . . . . . . . . . . . . . . . . . 15
3.5 Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4 EXPERIMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.1 Implementation Condition . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5 CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Electronic full text (publicly available online from 2024-08-23)