National Digital Library of Theses and Dissertations in Taiwan


Thesis Detail

Author: 謝宗佑
Author (English): HSIEH, ZONG-YOU
Title: 基於點雲與2D SURF特徵點的室內場景3D物件辨識系統
Title (English): 3D Object Recognition System of Indoor Scene Based on Point Cloud and 2D SURF Feature Points
Advisor: 張厥煒
Advisor (English): CHANG, CHUEH-WEI
Committee Members: 奚正寧, 楊士萱, 張厥煒
Oral Defense Date: 2019-06-14
Degree: Master's
Institution: National Taipei University of Technology
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Academic Field: Electrical Engineering and Computer Science
Thesis Type: Academic thesis
Year of Publication: 2019
Graduation Academic Year: 107 (ROC calendar)
Language: Chinese
Pages: 88
Keywords (Chinese): 2D SURF特徵點比對, 三維物件辨識, 點雲, 室內場景, ICP
Keywords (English): 2D SURF Feature Matching, 3D Object Recognition, Point Cloud, Indoor Scene, ICP
Record statistics:
  • Cited by: 2
  • Views: 452
  • Downloads: 89
  • Bookmarked: 0
In previous work, our research team proposed a 3D point-cloud object recognition system that gives intelligent robots a deeper understanding of objects. During recognition, however, that system must compare the query against every object in the database one by one. Because there are many 3D objects in the world, the database grows steadily as the number of objects to recognize increases; without a similarity index, matching a query against the whole database inevitably takes a long time and degrades recognition performance.
This thesis proposes a recognition system that combines 2D SURF feature points with 3D point clouds. Each object to be recognized is divided into 32 views spaced 11.25 degrees apart; the SURF (Speeded-Up Robust Features) algorithm extracts feature points from the image of each view and stores them in the database. For the point cloud of each corresponding view, the system computes the information needed for recognition, including surface normals and 3D keypoints. Since 2D image recognition is faster than 3D, the system first runs SURF feature matching to retrieve the database objects most similar to the query, and only then uses 3D point-cloud matching for detailed confirmation. This greatly reduces the number of objects that must be compared and the time spent in the 3D matching stage, so that many objects can be recognized without losing efficiency or accuracy. Experimental results on the selected test objects show that, on average, more than 88% of dissimilar objects are excluded before the 3D matching stage, and the final recognition accuracy averages above 80%.
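The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: `surf_similarity` and `icp_fitness` are hypothetical caller-supplied scoring functions standing in for the 2D SURF match and the ICP-based 3D point-cloud alignment, and the 0.5 threshold is an arbitrary placeholder.

```python
# Each database object is stored as NUM_VIEWS 2D views plus the matching
# point clouds; the abstract specifies 32 views spaced 11.25 degrees apart.
NUM_VIEWS = 32
ANGLE_STEP = 360.0 / NUM_VIEWS  # 11.25 degrees

def view_angles():
    """Azimuth (in degrees) of each stored view of an object."""
    return [i * ANGLE_STEP for i in range(NUM_VIEWS)]

def recognize(query, database, surf_similarity, icp_fitness, threshold=0.5):
    """Two-stage matching: a cheap 2D pre-filter, then 3D verification.

    surf_similarity(query, obj) and icp_fitness(query, obj) are
    caller-supplied scoring functions (placeholders for the 2D SURF
    feature match and the ICP-based point-cloud alignment).
    """
    # Stage 1: discard database objects whose 2D similarity to the query
    # falls below the threshold, so the expensive 3D stage only sees a
    # small candidate set.
    candidates = [obj for obj in database
                  if surf_similarity(query, obj) >= threshold]
    # Stage 2: verify the surviving candidates in 3D and return the
    # best-fitting one (None if no candidate survived the pre-filter).
    return max(candidates, key=lambda o: icp_fitness(query, o), default=None)
```

The point of the design is that the fast 2D stage prunes most of the database (over 88% on average in the reported experiments) before any 3D alignment is attempted.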
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Research Motivation
1.2 Research Objectives
1.3 Thesis Organization
Chapter 2 Related Work
2.1 Related Work on 3D Object Recognition Methods
2.2 Related Work on 3D Point-Cloud Object Recognition Systems
2.3 Point Cloud Library (PCL)
2.4 Scale-Invariant Features
2.5 SIFT3D Point-Cloud Keypoints
Chapter 3 System Architecture and Workflow
3.1 System Overview
3.2 System Architecture
3.3 Workflow Overview
3.3.1 Object Feature Training Module Workflow
3.3.2 Scene Object Recognition Module Workflow
Chapter 4 Object Feature Training
4.1 Building 2D Feature Data
4.1.1 Computing Object Color Histograms
4.1.2 SURF Feature Learning
4.2 Building 3D Feature Data
4.2.1 Scene Object Point-Cloud Preprocessing
4.2.2 Segmenting Objects from the Scene Point Cloud
4.2.3 Computing Object Point-Cloud Extent Differences
4.2.4 Extracting SIFT3D Point-Cloud Keypoints
4.2.5 SIFT3D Keypoint Filtering
4.2.6 CSHOT Descriptors
Chapter 5 Object Recognition Filtering Mechanism
5.1 Color Histogram Comparison
5.2 Object Point-Cloud Extent Comparison
5.3 SURF Feature Matching
5.3.1 Building the Feature Index
5.3.2 Feature Point Matching
5.4 Rejecting False Matches with RANSAC
Chapter 6 Scene Object Recognition
6.1 Obtaining Point-Cloud Recognition Data for Scene Objects
6.2 3D Feature Matching and Filtering
6.2.1 Mutual Matching
6.2.2 RANSAC Filtering
6.3 Pose Estimation and Object Alignment
6.4 Similarity Verification
Chapter 7 Experimental Results
7.1 Experimental and System Environment
7.2 Test Objects
7.3 Results: Scene Point-Cloud Segmentation with Different Normal Estimation Methods
7.4 Results: Object Point-Cloud X/Y-Axis Extreme-Value Differences
7.5 Results: Object Appearance Difference Threshold
7.6 Results: Color Histogram Comparison Threshold
7.7 Results: 2D SURF Matching and RANSAC Thresholds
7.8 Results: Distance Limitations of the Filtering Mechanism
7.9 Results: Recognition Rate and Performance versus the Original System
7.10 Results: Failure Cases
7.11 Results: Retesting Failure Cases with 64 Views
Chapter 8 Conclusion and Future Work
8.1 Conclusion
8.2 Future Work
References