References

[1] B. Shahian Jahromi, T. Tulabandhula, and S. Cetin, “Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles,” Sensors, vol. 19, no. 20, p. 4357, Oct. 2019, doi: 10.3390/s19204357.
[2] “The Ultimate Sensor Battle: Lidar vs Radar,” Intellias Automotive, Medium. https://medium.com/@intellias/the-ultimate-sensor-battle-lidar-vs-radar-2ee0fb9de5da (accessed Jul. 03, 2020).
[3] “LIDAR and Time of Flight, Part 2: Operation.” https://www.microcontrollertips.com/lidar-and-time-of-flight-part-2-operation/ (accessed Jul. 06, 2020).
[4] J. Levinson et al., “Towards fully autonomous driving: Systems and algorithms,” in IEEE Intelligent Vehicles Symposium, Proceedings, 2011, pp. 163–168, doi: 10.1109/IVS.2011.5940562.
[5] “NVIDIA Jetson Nano Developer Kit,” NVIDIA Developer. https://developer.nvidia.com/embedded/jetson-nano-developer-kit (accessed Jun. 17, 2020).
[6] M. G. Ocando, N. Certad, S. Alvarado, and Á. Terrones, “Autonomous 2D SLAM and 3D mapping of an environment using a single 2D LIDAR and ROS,” in Proceedings - 2017 LARS 14th Latin American Robotics Symposium and 2017 5th SBR Brazilian Symposium on Robotics, LARS-SBR 2017, Dec. 2017, vol. 2017-December, pp. 1–6, doi: 10.1109/SBR-LARS-R.2017.8215333.
[7] C. Reymann and S. Lacroix, “Improving LiDAR point cloud classification using intensities and multiple echoes,” in IEEE International Conference on Intelligent Robots and Systems, Dec. 2015, vol. 2015-December, pp. 5122–5128, doi: 10.1109/IROS.2015.7354098.
[8] “DBSCAN: Density-Based Clustering Essentials,” Datanovia. https://www.datanovia.com/en/lessons/dbscan-density-based-clustering-essentials/ (accessed Jun. 18, 2020).
[9] J. Levinson et al., “Towards fully autonomous driving: Systems and algorithms,” in IEEE Intelligent Vehicles Symposium, Proceedings, 2011, pp. 163–168, doi: 10.1109/IVS.2011.5940562.
[10] E.-K. Lee, M. Gerla, G. Pau, U. Lee, and J.-H. Lim, “Internet of Vehicles: From intelligent grid to autonomous cars and vehicular fogs,” International Journal of Distributed Sensor Networks, vol. 12, no. 9, Sep. 2016, doi: 10.1177/1550147716665500.
[11] E.-K. Lee, M. Gerla, G. Pau, U. Lee, and J.-H. Lim, “Internet of Vehicles: From intelligent grid to autonomous cars and vehicular fogs,” International Journal of Distributed Sensor Networks, vol. 12, no. 9, 2016, doi: 10.1177/1550147716665500.
[12] N. Akai et al., “Autonomous driving based on accurate localization using multilayer LiDAR and dead reckoning,” in IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, Mar. 2018, vol. 2018-March, pp. 1–6, doi: 10.1109/ITSC.2017.8317797.
[13] J. Wei, J. M. Snider, J. Kim, J. M. Dolan, R. Rajkumar, and B. Litkouhi, “Towards a viable autonomous driving research platform,” in IEEE Intelligent Vehicles Symposium, Proceedings, 2013, pp. 763–770, doi: 10.1109/IVS.2013.6629559.
[14] Z. Chen and X. Huang, “End-to-end learning for lane keeping of self-driving cars,” in IEEE Intelligent Vehicles Symposium, Proceedings, Jul. 2017, pp. 1856–1860, doi: 10.1109/IVS.2017.7995975.
[15] J. Koutník, G. Cuccu, J. Schmidhuber, and F. Gomez, “Evolving large-scale neural networks for vision-based reinforcement learning,” in GECCO 2013 - Proceedings of the 2013 Genetic and Evolutionary Computation Conference, 2013, pp. 1061–1068, doi: 10.1145/2463372.2463509.
[16] A. el Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep Reinforcement Learning framework for Autonomous Driving,” IS and T International Symposium on Electronic Imaging Science and Technology, pp. 70–76, Apr. 2017, doi: 10.2352/ISSN.2470-1173.2017.19.AVM-023.
[17] M. Bojarski et al., “End to End Learning for Self-Driving Cars,” Apr. 2016, Accessed: Jun. 17, 2020. [Online]. Available: http://arxiv.org/abs/1604.07316.
[18] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015, doi: 10.1038/nature14236.
[19] K. Abboud, H. A. Omar, and W. Zhuang, “Interworking of DSRC and Cellular Network Technologies for V2X Communications: A Survey,” IEEE Transactions on Vehicular Technology, vol. 65, no. 12, pp. 9457–9470, Dec. 2016, doi: 10.1109/TVT.2016.2591558.
[20] J. Wang, Y. Shao, Y. Ge, and R. Yu, “A survey of vehicle to everything (V2X) testing,” Sensors (Switzerland), vol. 19, no. 2, MDPI AG, Jan. 02, 2019, doi: 10.3390/s19020334.
[21] M. Amadeo, C. Campolo, and A. Molinaro, “Information-centric networking for connected vehicles: A survey and future perspectives,” IEEE Communications Magazine, vol. 54, no. 2, pp. 98–104, Feb. 2016, doi: 10.1109/MCOM.2016.7402268.
[22] E.-K. Lee, M. Gerla, G. Pau, U. Lee, and J.-H. Lim, “Internet of Vehicles: From intelligent grid to autonomous cars and vehicular fogs,” International Journal of Distributed Sensor Networks, vol. 12, no. 9, Sep. 2016, doi: 10.1177/1550147716665500.
[23] F. Liu, C. Shen, G. Lin, and I. Reid, “Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 10, pp. 2024–2039, Feb. 2015, doi: 10.1109/TPAMI.2015.2505283.
[24] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” in Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Nov. 2017, vol. 2017-January, pp. 77–85, doi: 10.1109/CVPR.2017.16.
[25] J. Choi, “Hybrid map-based SLAM using a Velodyne laser scanner,” in 2014 17th IEEE International Conference on Intelligent Transportation Systems, ITSC 2014, Nov. 2014, pp. 3082–3087, doi: 10.1109/ITSC.2014.6958185.
[26] D. Xu, D. Anguelov, and A. Jain, “PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 244–253, Nov. 2017, Accessed: Jun. 17, 2020. [Online]. Available: http://arxiv.org/abs/1711.10871.
[27] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, “Frustum PointNets for 3D Object Detection from RGB-D Data,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 918–927, Nov. 2017, Accessed: Jun. 17, 2020. [Online]. Available: http://arxiv.org/abs/1711.08488.
[28] H. F. Murcia, M. F. Monroy, and L. F. Mora, “3D Scene Reconstruction Based on a 2D Moving LiDAR,” in Communications in Computer and Information Science, Nov. 2018, vol. 942, pp. 295–308, doi: 10.1007/978-3-030-01535-0_22.
[29] R. W. Wolcott and R. M. Eustice, “Visual localization within LIDAR maps for automated urban driving,” in IEEE International Conference on Intelligent Robots and Systems, Oct. 2014, pp. 176–183, doi: 10.1109/IROS.2014.6942558.
[30] C. McManus, W. Churchill, A. Napier, B. Davis, and P. Newman, “Distraction suppression for vision-based pose estimation at city scales,” in Proceedings - IEEE International Conference on Robotics and Automation, 2013, pp. 3762–3769, doi: 10.1109/ICRA.2013.6631106.
[31] V. de Silva, J. Roche, and A. Kondoz, “Robust fusion of LiDAR and wide-angle camera data for autonomous mobile robots,” Sensors (Switzerland), vol. 18, no. 8, Aug. 2018, doi: 10.3390/s18082730.
[32] H. G. Norbye, “Camera-Lidar sensor fusion in real time for autonomous surface vehicles,” 2019. Accessed: Jun. 16, 2020. [Online].
[33] F. Zhang, D. Clarke, and A. Knoll, “Vehicle detection based on LiDAR and camera fusion,” in 2014 17th IEEE International Conference on Intelligent Transportation Systems, ITSC 2014, Nov. 2014, pp. 1620–1625, doi: 10.1109/ITSC.2014.6957925.
[34] C. Wang, M. Ji, J. Wang, W. Wen, T. Li, and Y. Sun, “An improved DBSCAN method for LiDAR data segmentation with automatic Eps estimation,” Sensors (Switzerland), vol. 19, no. 1, Jan. 2019, doi: 10.3390/s19010172.
[35] P. Wei, L. Cagle, T. Reza, J. Ball, and J. Gafford, “LiDAR and Camera Detection Fusion in a Real-Time Industrial Multi-Sensor Collision Avoidance System,” Electronics, vol. 7, no. 6, p. 84, May 2018, doi: 10.3390/electronics7060084.
[36] “RPLIDAR-A1 360° Laser Range Scanner,” SLAMTEC. https://www.slamtec.com/en/Lidar/A1 (accessed Jun. 17, 2020).
[37] “2.3. Clustering,” scikit-learn 0.23.1 documentation. https://scikit-learn.org/stable/modules/clustering.html (accessed Jul. 06, 2020).
[38] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The KITTI dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, Sep. 2013, doi: 10.1177/0278364913491297.
[39] E. Acar, T. G. Kolda, D. M. Dunlavy, and M. Mørup, “Scalable Tensor Factorizations for Incomplete Data,” Chemometrics and Intelligent Laboratory Systems, vol. 106, no. 1, pp. 41–56, May 2010, doi: 10.1016/j.chemolab.2010.08.004.
[40] D. Garcia, “Robust smoothing of gridded data in one and higher dimensions with missing values,” Computational Statistics and Data Analysis, vol. 54, no. 4, pp. 1167–1178, Apr. 2010, doi: 10.1016/j.csda.2009.09.020.