|
[1] Y. Lin, F. Gao, T. Qin, W. Gao, T. Liu, W. Wu, Z. Yang, and S. Shen, “Autonomous aerial navigation using monocular visual-inertial fusion,” Journal of Field Robotics, vol. 35, no. 1, pp. 23–51, 2018. [2] S. Weiss, M. W. Achtelik, S. Lynen, M. Chli, and R. Siegwart, “Real-time onboard visual-inertial state estimation and self-calibration of mavs in unknown environments,” in 2012 IEEE International Conference on Robotics and Automation, May 2012, pp. 957–964. [3] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, and P. Furgale, “Keyframebased visual-inertial odometry using nonlinear optimization,” The International Journal of Robotics Research, vol. 34, no. 3, pp. 314–334, 2015. [4] H. Ye, Y. Chen, and M. Liu, “Tightly coupled 3d lidar inertial odometry and mapping,” in 2019 International Conference on Robotics and Automation (ICRA), May 2019, pp. 3144–3150. [5] T. Qin, P. Li, and S. Shen, “Vin-smono: A robust and versatile monocular visual-inertial state estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, Aug 2018. [6] C. Qin, H. Ye, C. E. Pranata, J. Han, and M. Liu, “LINS: A lidar-inertial state estimator for robust and fast navigation,” CoRR, vol. abs/1907.02233, 2019. [7] T. Shan and B. Englot, “Legoloam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 2018, pp. 4758–4765. [8] J. Zhang and S. Singh, “Loam: Lidar odometry and mapping in real-time,” in Proceedings of Robotics: Science and Systems Conference, July 2014. [9] P. Geneva, K. Eckenhoff, Y. Yang, and G. Huang, “Lips: Lidar-inertial 3d plane slam,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 123–130. [10] J. Lin and F. Zhang, “Loam_livox: A fast, robust, high-precision lidar odometry and mapping package for lidars of small fov,” arXiv preprint arXiv:1909.06700, 2019. [11] T. Lupton and S. Sukkarieh, “Visualinertialaided navigation for high-dynamic motion in built environments without initial conditions,” IEEE Transactions on Robotics, vol. 28, no. 1, pp. 61–76, 2012. [12] S. Shen, N. Michael, and V. Kumar, “Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft mavs,” in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 5303–5310. [13] J. Solà, “Quaternion kinematics for the error-state kalman filter,” CoRR, vol. abs/ 1711.02508, 2017. [14] R. B. Rusu and S. Cousins, “3D is here: Point Cloud Library (PCL),” in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 913 2011. [15] M. Berg, de, O. Cheong, M. Kreveld, van, and M. Overmars, Computational geometry : algorithms and applications, 3rd ed. Germany: Springer, 2008. [16] S. Agarwal, K. Mierle, and Others, “Ceres solver,” . [17] T. Liu and S. Shen, “Spline-based initialization of monocular visual-inertial state estimators at high altitude,” IEEE Robotics and Automation Letters, vol. 2, no. 4, pp. 2224–2231, 2017. [18] G. Guennebaud, B. Jacob et al., “Eigen v3,” http://eigen.tuxfamily.org, 2010. [19] “Robot operating system,” http://www.ros.org/. [20] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [21] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” International Journal of Robotics Research (IJRR), 2013.
|