臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 林春榮
Title (Chinese): RTLIO 實時緊密耦合激光雷達慣性里程計
Title (English): RTLIO: Real-Time Tightly Coupled Lidar Inertial Odometry
Advisor: 程登湖 (Cheng, Teng-Hu)
Committee members: 程登湖 (Cheng, Teng-Hu); 陳宗麟 (Chen, Tsung-Lin); 王傑智 (Wang, Chieh-Chih)
Oral defense date: 2020-08-06
Degree: Master's
Institution: 國立交通大學 (National Chiao Tung University)
Department: 工學院機器人碩士學位學程 (Master's Program in Robotics, College of Engineering)
Discipline: Engineering
Field: Other Engineering
Thesis type: Academic thesis
Year of publication: 2020
Graduation academic year: 109
Language: English
Number of pages: 76
Keywords (Chinese): 感測器整合、特徵萃取、光達處理、同步定位與地圖建置、前端處理、最佳化
Keywords (English): sensor fusion, feature extraction, lidar processing, SLAM, front-end procedure, optimization
Access statistics:
  • Cited: 0
  • Views: 103
  • Rating: (none)
  • Downloads: 4
  • Bookmarked: 0
Abstract (Chinese, translated): Precise localization is a fundamental requirement for a robot: even in environments without GPS or other external positioning systems, the robot must know its own pose in order to carry out its tasks. When an aerial vehicle relies on pose information to stabilize its attitude, the pose estimates must be not only accurate but also high-frequency and available in real time without delay. This thesis presents an algorithm that fuses lidar and inertial measurement unit (IMU) data into an optimization problem, simultaneously estimating the pose and the IMU measurement biases to provide high-frequency, real-time localization for aerial vehicles.
Abstract (English): Precise localization is an essential need for robots. In various environments, robots need pose information to perform tasks without relying on GPS or other external localization systems. Aerial vehicles in particular require real-time pose estimates so that they can be stabilized. The algorithm in this thesis fuses lidar and IMU measurements and formulates an optimization problem that provides real-time localization information to aerial vehicles by simultaneously estimating pose and IMU bias.
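The record gives only this high-level description of the optimization. As a rough illustration of what a tightly coupled lidar-inertial objective of this kind typically looks like, the sketch below writes a generic sliding-window cost in the style of optimization-based estimators such as VINS-Mono; the residual symbols, state layout, and covariance weights are illustrative assumptions, not the exact formulation used in RTLIO.

```latex
% Illustrative tightly coupled lidar-inertial objective: a generic sliding-window form,
% NOT the thesis's exact formulation. The state stacks per-keyframe position, orientation,
% velocity, and IMU accelerometer/gyroscope biases.
\[
  \mathcal{X} = \{x_0,\dots,x_n\}, \qquad
  x_k = \bigl[\, p_k,\; q_k,\; v_k,\; b_{a,k},\; b_{g,k} \,\bigr]
\]
% Prior from marginalization + IMU preintegration residuals + lidar feature residuals.
\[
  \min_{\mathcal{X}}\;
  \bigl\| r_{\mathrm{prior}}(\mathcal{X}) \bigr\|^{2}
  \;+\; \sum_{k} \bigl\| r_{\mathcal{B}}\bigl(\hat{z}^{\,b_k}_{\,b_{k+1}},\, \mathcal{X}\bigr) \bigr\|^{2}_{P_{k}}
  \;+\; \sum_{(i,j)} \rho\Bigl( \bigl\| r_{\mathcal{L}}\bigl(\hat{z}^{\,l_i}_{\,j},\, \mathcal{X}\bigr) \bigr\|^{2}_{\Sigma_{\mathcal{L}}} \Bigr)
\]
```

Here \(r_{\mathcal{B}}\) denotes an IMU preintegration residual between consecutive keyframes, \(r_{\mathcal{L}}\) a lidar feature residual (typically point-to-line for edge features and point-to-plane for planar features), and \(\rho\) a robust loss. Solving this nonlinear least-squares problem recovers poses and biases jointly, and bias-corrected IMU propagation can then supply high-rate pose output between lidar updates, which is consistent with the high-frequency, real-time output described in the abstract.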
目次 (Table of Contents):
摘要 (Chinese Abstract) i
Abstract ii
Acknowledgment iii
Table of Contents iv
List of Algorithms vii
List of Tables viii
List of Figures ix
1 Introduction 1
  1.1 Motivation 1
  1.2 Contribution 2
  1.3 Background and Related Works 3
2 System Architecture 4
  2.1 Notation 4
3 Method 6
  3.1 Time Alignment 6
  3.2 Distortion Compensation 6
  3.3 Feature Extraction 9
  3.4 IMU Preprocess 10
  3.5 Correction of Preintegration 13
  3.6 Construct Map 16
4 Implementation 19
  4.1 Frame to Frame Estimation 19
  4.2 Estimator Initialization 21
    4.2.1 Rotational Alignment 22
    4.2.2 Linear Alignment 22
    4.2.3 Completing Initialization 24
  4.3 Optimization 24
    4.3.1 Formulation 24
    4.3.2 IMU Measurement Model 25
    4.3.3 Lidar Measurement Model 25
  4.4 Marginalization 27
  4.5 Mapping 28
5 Experiments 29
  5.1 System Setup 29
  5.2 Indoor Test 31
    5.2.1 Precision and Time Cost 32
    5.2.2 Flight in Laboratory 36
    5.2.3 Flight in Corridor 38
    5.2.4 Mapping Result 40
  5.3 KITTI Dataset Evaluation 41
    5.3.1 RPE result of the KITTI dataset 42
    5.3.2 Analysis of the KITTI dataset 43
    5.3.3 Trajectory result of the KITTI dataset 47
6 Conclusion and Future Works 64
References 65
Appendix 67
  A.1 Property 67
  A.2 Jacobian of IMU Model 68
    A.2.1 Derivation 68
  A.3 Jacobian of Lidar Model 71
    A.3.1 Derivation of Frame-to-Frame Matching in Section 4.1 with Edge Features 71
    A.3.2 Derivation of Frame-to-Frame Matching in Section 4.1 with Flat Plane Features 72
    A.3.3 Derivation of Frame-to-Map Matching in Section 4.3.3 with Edge Features 73
    A.3.4 Derivation of Frame-to-Map Matching in Section 4.3.3 with Flat Plane Features 74