
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: TANG, YI-CHANG (唐義昌)
Title: Real-time Object Detection Based on Hierarchical Lidar and Camera Fusion (基於階層式光達與相機融合機制進行即時物件偵測)
Advisor: HSU, CHIH-MING (許志明)
Committee Members: HSU, CHIH-MING (許志明); LEE, MING-CHE (李明哲); CHOU, JEN-HSIANG (周仁祥)
Oral Defense Date: 2022-07-29
Degree: Master's
University: National Taipei University of Technology (國立臺北科技大學)
Department: Master Program in Mechatronic Engineering, Department of Mechanical Engineering
Discipline: Engineering
Academic Field: Mechanical Engineering
Thesis Type: Academic thesis
Publication Year: 2022
Graduation Academic Year: 110 (2021-2022)
Language: Chinese
Number of Pages: 71
Keywords (Chinese): 物件偵測 (object detection), 感測器融合 (sensor fusion), 深度學習 (deep learning)
Keywords (English): Object Detection, Sensor Fusion, Deep Learning
Statistics:
  • Cited: 0
  • Views: 121
  • Score: (not yet rated)
  • Downloads: 0
  • Bookmarked: 0
Abstract (in Chinese)
Abstract (in English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Preface
1.2 Research Motivation
1.3 Contributions of This Thesis
1.4 Thesis Organization
Chapter 2 Literature Review
2.1 Point Cloud Object Detection
2.1.1 One-Stage Deep Learning Methods
2.1.2 Two-Stage Deep Learning Methods
2.2 Sensor Fusion Object Detection
2.2.1 Data-Level Fusion
2.2.2 Feature-Level Fusion
2.3 Summary
Chapter 3 Hierarchical Fusion Algorithm
3.1 Hierarchical Fusion Detection Method
3.2 Algorithm Pipeline
3.3 Short-Range Point-Cloud-Only Object Detection
3.3.1 Point Cloud Object Detection and Tracking
3.3.2 Scaling-Based Data Augmentation
3.4 Mid-to-Long-Range Sensor Fusion Object Detection
3.4.1 Image Object Detection
3.4.2 Point Cloud Downsampling
3.4.3 Point Cloud Centroids via Euclidean Clustering
3.4.4 Projecting Point Cloud Centroids onto the Image
3.4.5 Sensor Fusion Detection
3.5 Summary
Chapter 4 Experimental Results
4.1 Datasets
4.1.1 S3 Dataset
4.1.2 U5 Dataset
4.2 Experimental Equipment
4.3 Experimental Procedure
4.3.1 PointPillars Training and Data Augmentation
4.3.2 Camera-LiDAR Fusion
4.3.3 Hierarchical Fusion Results
4.4 Experimental Results
4.4.1 S3 Viaduct Experiment
4.4.2 S3 Night-Time Road Experiment
4.4.3 U5 Road and Tunnel Experiment
4.4.4 U5 Road Intersection Experiment
4.5 Summary
Chapter 5 Conclusion and Future Work
5.1 Conclusion
5.2 Future Work
References
[1] Y. Cui, R. Chen, W. Chu, L. Chen, D. Tian, Y. Li, and D. Cao, "Deep learning for image and point cloud fusion in autonomous driving: A review," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 2, pp. 722-739, 2022.
[2] H. Zhang, D. Yang, E. Yurtsever, K. A. Redmill, and Ü. Özgüner, "Faraway-Frustum: Dealing with lidar sparsity for 3D object detection using fusion," in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), 2021, pp. 2646-2652.
[3] Y. Li, T. Chen, M. Kabkab, R. Yu, L. Jing, Y. You, and H. Zhao, "R4D: Utilizing reference objects for long-range distance estimation," arXiv preprint arXiv:2206.04831, 2022.
[4] Y. Zhou and O. Tuzel, "VoxelNet: End-to-end learning for point cloud based 3D object detection," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4490-4499.
[5] Y. Yan, Y. Mao, and B. Li, "SECOND: Sparsely embedded convolutional detection," Sensors, vol. 18, no. 10, p. 3337, 2018.
[6] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, "PointPillars: Fast encoders for object detection from point clouds," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 12689-12697.
[7] C. Xu, B. Wu, Z. Wang, W. Zhan, P. Vajda, K. Keutzer, and M. Tomizuka, "SqueezeSegV3: Spatially-adaptive convolution for efficient point-cloud segmentation," in European Conference on Computer Vision (ECCV), Springer, 2020.
[8] L. Fan, X. Xiong, F. Wang, N. Wang, and Z. Zhang, "RangeDet: In defense of range view for LiDAR-based 3D object detection," in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 2898-2907.
[9] B. Yang, W. Luo, and R. Urtasun, "PIXOR: Real-time 3D object detection from point clouds," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7652-7660.
[10] T. Yin, X. Zhou, and P. Krähenbühl, "Center-based 3D object detection and tracking," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 11779-11788.
[11] S. Vora, A. H. Lang, B. Helou, and O. Beijbom, "PointPainting: Sequential fusion for 3D object detection," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 4603-4611.
[12] Z. Yang, Y. Zhou, Z. Chen, and J. Ngiam, "3D-MAN: 3D multi-frame attention network for object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 1863-1872.
[13] S. Shi, X. Wang, and H. Li, "PointRCNN: 3D object proposal generation and detection from point cloud," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 770-779.
[14] S. Shi, Z. Wang, J. Shi, X. Wang, and H. Li, "From points to parts: 3D object detection from point cloud with part-aware and part-aggregation network," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 8, pp. 2647-2664, 2021.
[15] Z. Yang, Y. Sun, S. Liu, X. Shen, and J. Jia, "STD: Sparse-to-dense 3D object detector for point cloud," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1951-1960.
[16] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li, "PV-RCNN: Point-voxel feature set abstraction for 3D object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 10529-10538.
[17] J. Mao, M. Niu, H. Bai, X. Liang, H. Xu, and C. Xu, "Pyramid R-CNN: Towards better performance and adaptability for 3D object detection," in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 2703-2712.
[18] R. Q. Charles, H. Su, M. Kaichun, and L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 77-85.
[19] Y. E. Bigman and K. Gray, "Life and death decisions of autonomous vehicles," Nature, vol. 579, no. 7797, pp. E1-E2, Mar. 2020.
[20] F. Duarte, "Self-driving cars: A city perspective," Science Robotics, vol. 4, no. 28, pp. 5-6, 2019. Available: https://robotics.sciencemag.org/content/4/28/eaav9843
[21] J. Guo, U. Kurup, and M. Shah, "Is it safe to drive? An overview of factors, metrics, and datasets for driveability assessment in autonomous driving," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 8, pp. 3135-3151, Aug. 2019.
[22] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, "Multi-view 3D object detection network for autonomous driving," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6526-6534.
[23] J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. L. Waslander, "Joint 3D proposal generation and object detection from view aggregation," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 1-8.
[24] S. Ren, K. He, R. B. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 2017.
[25] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, "Frustum PointNets for 3D object detection from RGB-D data," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 918-927.
[26] Z. Wang and K. Jia, "Frustum ConvNet: Sliding frustums to aggregate local point-wise features for amodal 3D object detection," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 1742-1749.
[27] C. Wang, C. Ma, M. Zhu, and X. Yang, "PointAugmenting: Cross-modal augmentation for 3D object detection," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 11789-11798.
[28] X. Bai, Z. Hu, X. Zhu, Q. Huang, Y. Chen, H. Fu, and C.-L. Tai, "TransFusion: Robust LiDAR-camera fusion for 3D object detection with transformers," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 1090-1099.
[29] Y. Li, A. W. Yu, T. Meng, B. Caine, J. Ngiam, D. Peng, and M. Tan, "DeepFusion: Lidar-camera deep fusion for multi-modal 3D object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 17182-17191.
[30] ROS Tutorials: http://wiki.ros.org/ROS/Tutorials
[31] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2012, pp. 3354-3361.
[32] M. Hahner, D. Dai, A. Liniger, and L. Van Gool, "Quantifying data augmentation for LiDAR based 3D object detection," arXiv preprint arXiv:2004.01643, 2020.
[33] PCL (Point Cloud Library): https://pointclouds.org/
[34] Point Cloud Euclidean Cluster Extraction: https://pcl.readthedocs.io/en/latest/cluster_extraction.html
[35] ROS camera calibration: http://wiki.ros.org/camera_calibration
[36] X. Weng, J. Wang, D. Held, and K. Kitani, "3D multi-object tracking: A baseline and new evaluation metrics," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 10359-10366.
[37] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Journal of Basic Engineering, pp. 35-45, Mar. 1960.
[38] Wikipedia, "Hungarian algorithm": https://zh.wikipedia.org/wiki/匈牙利算法
[39] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
[40] C.-Y. Wang, H.-Y. M. Liao, Y.-H. Wu, P.-Y. Chen, J.-W. Hsieh, and I.-H. Yeh, "CSPNet: A new backbone that can enhance learning capability of CNN," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020, pp. 1571-1580.
[41] PCL VoxelGrid downsampling: https://pcl.readthedocs.io/en/latest/voxel_grid.html
[42] annotate (point cloud annotation tool): https://github.com/Earthwings/annotate
[43] Z. Yang, Y. Sun, S. Liu, and J. Jia, "3DSSD: Point-based 3D single stage object detector," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 11040-11048.
[44] J. Deng, S. Shi, P. Li, W. Zhou, Y. Zhang, and H. Li, "Voxel R-CNN: Towards high performance voxel-based 3D object detection," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 2, 2021, pp. 1201-1209.
[45] M. Liang, B. Yang, S. Wang, and R. Urtasun, "Deep continuous fusion for multi-sensor 3D object detection," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 641-656.
[46] D. Xu, D. Anguelov, and A. Jain, "PointFusion: Deep sensor fusion for 3D bounding box estimation," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 244-253.
[47] Camera imaging principle (blog post, in Chinese): https://blog.csdn.net/chentravelling/article/details/53558096
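Sections 3.4.2 through 3.4.4 of the table of contents name three standard geometric steps: voxel-grid downsampling of the point cloud, Euclidean clustering to obtain per-object centroids, and projection of those centroids into the camera image. The sketch below is illustrative only, not the thesis's implementation: it assumes a calibrated pinhole camera with points already expressed in the camera frame, all function names and thresholds are hypothetical, and the real system would use PCL's VoxelGrid and EuclideanClusterExtraction [33][34][41] plus the calibrated extrinsics from [35].

```python
import math
from collections import defaultdict

def voxel_downsample(points, leaf=0.2):
    # Bucket points into cubic voxels of side `leaf` and replace each
    # bucket with its centroid (the idea behind PCL's VoxelGrid filter).
    buckets = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / leaf) for c in p)
        buckets[key].append(p)
    return [tuple(sum(cs) / len(ps) for cs in zip(*ps)) for ps in buckets.values()]

def euclidean_clusters(points, tol=0.5, min_size=2):
    # Grow clusters by search from a seed point: any chain of points whose
    # pairwise gaps stay within `tol` ends up in one cluster
    # (EuclideanClusterExtraction-style, O(n^2) for clarity).
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, members = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited if math.dist(points[i], points[j]) <= tol]
            for j in near:
                unvisited.discard(j)
            frontier.extend(near)
            members.extend(near)
        if len(members) >= min_size:
            clusters.append([points[i] for i in members])
    return clusters

def centroid(cluster):
    # Mean position of a cluster's points.
    return tuple(sum(cs) / len(cluster) for cs in zip(*cluster))

def project_to_image(pt_cam, fx, fy, cx, cy):
    # Pinhole projection of a 3D point in the camera frame (z forward):
    # u = fx*x/z + cx, v = fy*y/z + cy.
    x, y, z = pt_cam
    return (fx * x / z + cx, fy * y / z + cy)

if __name__ == "__main__":
    # Two small synthetic clusters in front of the camera.
    cloud = [(1.0, 0.0, 5.0), (1.1, 0.0, 5.0), (0.9, 0.1, 5.0),
             (-2.0, 0.0, 10.0), (-2.1, 0.1, 10.0)]
    for cluster in euclidean_clusters(voxel_downsample(cloud, leaf=0.05)):
        print(project_to_image(centroid(cluster), fx=500.0, fy=500.0, cx=320.0, cy=240.0))
```

In the hierarchical scheme, pixel positions obtained this way would then be matched against the image detector's 2D boxes to fuse the two modalities (section 3.4.5); the intrinsics (fx, fy, cx, cy) shown here are placeholder values.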
Electronic Full Text (public Internet release date: 2027-08-03)
Related Journal Articles: none