Author: 黎明海
Author (English): Le Minh Hai
Title: 一個應用於自動駕駛使用三維點雲分割技術達成效能強化的光達影像融合系統之設計與驗證
Title (English): Design and Validation of an Empower Lidar-Camera Sensing-Fusion System by 3D Point Cloud Segmentation for Autonomous Driving System
Advisors: 鄭經華、劉堂傑
Advisors (English): CHENG CHING-HWA; LIU TANG-CHIEH
Oral defense committee: 劉堂傑、王行健、黃宗柱、郭峻因
Oral defense date: 2020-06-24
Degree: Master's
Institution: 逢甲大學 (Feng Chia University)
Department: Department of Electronic Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2020
Academic year of graduation: 108
Language: English
Pages: 81
Keywords (Chinese): 3D重建、激光雷達與相機融合、點雲分割、賦予形象
Keywords (English): 3D reconstruction; Lidar-camera fusion; point cloud segmentation; empower image
It is generally accepted that self-driving cars have grown in popularity, and the field has been revolutionized over the past few decades. Despite much success and development, combining information from multiple sensors remains challenging. To address this problem, this article presents a 3D environment-construction system built on sensor fusion. The system assists the driver in making better decisions, issues warnings, and overcomes the limitations of any single sensor. A transform matrix describing the relationship between the sensors is given, ensuring that distances are calculated correctly. Experiments show that the point-cloud-based segmentation method gives impressive results. The main contribution of this study is a system that provides an informative picture of the environment with high accuracy and works well under changing environmental conditions.
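The abstract mentions a transform matrix describing the relationship between the lidar and the camera, used so that distances are computed correctly. A minimal sketch of how such an extrinsic transform is typically applied to lidar points is shown below; the rotation and translation values here are placeholders for illustration, not the thesis's calibrated values.

```python
import numpy as np

# Hypothetical extrinsic calibration: rotation R and translation t mapping
# lidar-frame coordinates into the camera frame. Real values come from a
# calibration procedure; these numbers are placeholders.
R = np.eye(3)                      # assume aligned axes for this sketch
t = np.array([0.05, -0.10, 0.02])  # assumed lidar-to-camera offset in metres

# Build the 4x4 homogeneous transform from R and t.
T_lidar_to_cam = np.eye(4)
T_lidar_to_cam[:3, :3] = R
T_lidar_to_cam[:3, 3] = t

# A few lidar points (x, y, z) in metres.
points_lidar = np.array([
    [2.0, 0.5, 0.0],
    [5.0, -1.0, 0.3],
])

# Append the homogeneous coordinate, transform, then drop it again.
ones = np.ones((points_lidar.shape[0], 1))
points_h = np.hstack([points_lidar, ones])
points_cam = (T_lidar_to_cam @ points_h.T).T[:, :3]

# Distances measured in the camera frame.
distances = np.linalg.norm(points_cam, axis=1)
print(points_cam)
print(distances)
```

Once points are expressed in the camera frame, they can be projected through the camera's intrinsic matrix to overlay range information on the image, which is the usual basis of lidar-camera fusion.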
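The abstract also credits a point-cloud-based segmentation method for the system's results, and the thesis's references include DBSCAN and scikit-learn's clustering module. A hedged sketch of density-based clustering on a synthetic 3D cloud follows; the `eps` and `min_samples` values are illustrative tuning choices, not the settings used in the thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two synthetic "objects": tight 3D point clusters, plus one far-away outlier.
rng = np.random.default_rng(0)
obj_a = rng.normal(loc=[2.0, 0.0, 0.0], scale=0.05, size=(30, 3))
obj_b = rng.normal(loc=[6.0, 1.0, 0.0], scale=0.05, size=(30, 3))
noise = np.array([[20.0, 20.0, 20.0]])
cloud = np.vstack([obj_a, obj_b, noise])

# eps is the neighbourhood radius (metres); min_samples is the density
# threshold. Both must be tuned to the sensor's point spacing.
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(cloud)

# DBSCAN labels noise points -1; cluster labels start at 0.
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)   # the two objects
print(labels[-1])   # the outlier is marked as noise (-1)
```

A practical advantage of DBSCAN for lidar data is that the number of objects need not be known in advance, and isolated returns are naturally rejected as noise.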
Contents
ACKNOWLEDGEMENTS III
CHAPTER 1 INTRODUCTION 1
1.1 MOTIVATION 1
1.2 BACKGROUND 6
1.2.1 Lidar 6
1.2.2 Embedded System 9
1.2.3 Point cloud data clustering 12
1.3 CONTRIBUTION 13
CHAPTER 2 RELATED WORK 14
2.1 SELF-DRIVING CAR 14
2.2 SEGMENTATION OF LIDAR POINTCLOUD 16
2.3 OTHER RELATED RESEARCHES 17
CHAPTER 3 SYSTEM DESIGN AND ALGORITHM 20
3.1 3D SYSTEM MODEL 21
3.2 CALIBRATION LIDAR AND CAMERA 28
3.3 CONFIG SYSTEM 33
3.4 POINTCLOUD CLUSTERING 38
CHAPTER 4 EXPERIMENT 45
4.1 DATA COLLECTIONS 45
4.2 DATA PROCESSING 46
4.3 EVALUATE SYSTEM 51
CHAPTER 5 RESULTS 58
5.1 3D RECONSTRUCTION WITH 2D LIDAR 58
5.2 COMBINE LIDAR AND CAMERA 62
5.3 SEGMENTATION OF 3D POINTCLOUD 66
5.4 COMPARISON BETWEEN THE SYSTEM AND RELATED WORK 69
CHAPTER 6 DISCUSSION 72
CHAPTER 7 CONCLUSION AND FUTURE WORK 74
7.1 CONCLUSION 74
7.2 FUTURE WORK 75
REFERENCES 77






