Author: 王傑鴻 (Jie-Hung Wang)
Thesis Title (Chinese): 基於仿真虛擬空間模型之三維物體姿態估測研究
Thesis Title (English): Simulation-Based 3D Pose Estimation Using a Depth Camera
Advisor: 許志明 (Chih-Ming Hsu)
Committee Members: 許志明 (Chih-Ming Hsu), 陳金聖, 連豊力
Oral Defense Date: 2016-07-19
Degree: Master's
Institution: 國立臺北科技大學 (National Taipei University of Technology)
Department: 製造科技研究所 (Graduate Institute of Manufacturing Technology)
Discipline: Engineering
Field: Mechanical Engineering
Document Type: Academic thesis
Year of Publication: 2016
Academic Year of Graduation: 104 (2015–2016)
Language: Chinese
Keywords (Chinese): CAD模型、姿態估測
Keywords (English): CAD model, pose estimation
Usage statistics:
  • Cited by: 0
  • Views: 92
  • Rating: (none)
  • Downloads: 0
  • Saved to personal bibliographies: 0
Abstract (translated from the Chinese original): Object recognition and pose estimation play an important role in automated assembly and service-robot applications. The most popular recognition approaches today compare geometric or statistical features between an object's CAD model and the point cloud obtained by scanning the object, in order to recognize the object and estimate its pose. However, because of the parallax effect of the Kinect sensor's infrared sensing, the scanned point cloud is noticeably distorted relative to the CAD model. To improve feature matching between this distorted point cloud and the CAD model, we propose a Kinect-emulating virtual-space matching method: by estimating the viewing angle, the object distance, and the current sensing intensity, the CAD model is simulated into the point cloud that the on-site Kinect would likely produce. This raises the similarity between the two point clouds, so object matching and grasping-pose estimation can be carried out accurately and quickly. Compared with previously known matching methods, the experimental results of this study show significant improvements in both pose-estimation accuracy and execution speed. This technique should enable more real-time and more accurate applications in automated assembly and service robotics.
Abstract (English): Object recognition and pose estimation play an important role in automated assembly and service-robot applications. Today, the most popular recognition approach identifies an object and estimates its pose by comparing geometric or statistical features between the object's CAD model and the point cloud obtained by scanning. However, the parallax effect of the Kinect sensor's infrared sensing causes significant distortion in the scanned point cloud. To improve feature matching between the distorted point cloud and the object's CAD model, we propose a Kinect virtual-simulation-space comparison method that simulates the Kinect scanning situation from the estimated viewing angle, the object distance, and the sensed intensity of the environment. This increases the similarity of the point clouds, so object matching and grasping-pose estimation can be achieved precisely and rapidly. Compared with previous work, the results of this study show a significant improvement in pose-estimation accuracy and execution speed. This technique will help the automated assembly and service-robot fields build more responsive and more accurate applications.
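As a rough illustration of the matching pipeline the abstract describes, the C++ sketch below uses the Point Cloud Library (PCL, listed in Chapter 3 of the table of contents) to compare a scanned object cluster against candidate simulated views by their Viewpoint Feature Histogram (VFH) descriptors and then refine the best match's pose with ICP. This is a minimal sketch under assumptions, not the thesis implementation: the file names, database layout, and L2 histogram distance are hypothetical placeholders, and the Kinect-emulated rendering of the CAD model is not reproduced here.

```cpp
// Minimal sketch (not the thesis implementation): match a scanned cluster
// against pre-simulated CAD views via VFH descriptors, then refine with ICP.
// File names and the database layout are hypothetical placeholders.
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/registration/icp.h>
#include <pcl/search/kdtree.h>
#include <iostream>
#include <limits>
#include <string>
#include <vector>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Compute the 308-bin VFH descriptor of a point cloud.
static pcl::VFHSignature308 computeVFH(const Cloud::Ptr& cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setKSearch(20);
  ne.compute(*normals);

  pcl::PointCloud<pcl::VFHSignature308> vfh_out;
  pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
  vfh.setInputCloud(cloud);
  vfh.setInputNormals(normals);
  vfh.setSearchMethod(tree);
  vfh.compute(vfh_out);
  return vfh_out.points[0];
}

// L2 distance between two VFH histograms (one simple similarity measure).
static float vfhDistance(const pcl::VFHSignature308& a, const pcl::VFHSignature308& b) {
  float d = 0.0f;
  for (int i = 0; i < 308; ++i) {
    const float diff = a.histogram[i] - b.histogram[i];
    d += diff * diff;
  }
  return d;
}

int main() {
  // Scanned, segmented object cluster from the depth camera (placeholder file).
  Cloud::Ptr scan(new Cloud);
  pcl::io::loadPCDFile("scan_cluster.pcd", *scan);

  // Offline database of views simulated from the CAD model (placeholder files).
  std::vector<std::string> views = {"view_000.pcd", "view_001.pcd", "view_002.pcd"};

  const pcl::VFHSignature308 scan_vfh = computeVFH(scan);

  Cloud::Ptr best_view;
  float best_dist = std::numeric_limits<float>::max();
  for (const auto& file : views) {
    Cloud::Ptr view(new Cloud);
    pcl::io::loadPCDFile(file, *view);
    const float d = vfhDistance(scan_vfh, computeVFH(view));
    if (d < best_dist) { best_dist = d; best_view = view; }
  }

  // Refine the pose of the best-matching simulated view against the scan with ICP.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(best_view);
  icp.setInputTarget(scan);
  Cloud aligned;
  icp.align(aligned);

  std::cout << "Best VFH distance: " << best_dist
            << ", ICP fitness: " << icp.getFitnessScore() << "\n"
            << "Estimated pose:\n" << icp.getFinalTransformation() << std::endl;
  return 0;
}
```

The key idea the thesis adds on top of such a pipeline is that the candidate views are not raw CAD renderings but Kinect-emulated clouds, which should make the VFH comparison and the final ICP fit more reliable.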
Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1: Introduction
1.1 Preface
1.2 Motivation and Objectives
1.3 Contributions of This Thesis
1.4 Thesis Organization
Chapter 2: Literature Review
2.1 Feature-Based Methods
2.2 Machine Vision Equipment
2.2.1 Non-Contact 3D Scanning Systems
2.2.2 Dual-Laser 3D Sensing System
2.2.3 Depth Sensors
2.3 Summary
Chapter 3: Object Recognition and Pose Estimation
3.1 Point Cloud Library (PCL)
3.2 Preprocessing
3.2.1 Coordinate Transformation
3.2.2 Region Segmentation
3.2.3 Noise Filtering
3.2.4 Platform Segmentation
3.2.5 Clustering
3.3 Feature Description
3.3.1 Viewpoint Feature Histogram (VFH)
3.4 Iterative Closest Point (ICP)
3.5 Offline Database
3.5.1 Database Construction Workflow
3.5.2 Object Elevation Angle
3.5.3 Virtual Sensing
3.5.4 Kinect-Emulated Virtual Sensing
Chapter 4: Experimental Results
4.1 Experimental Setup
4.2 Preprocessing
4.2.1 Coordinate Transformation
4.2.2 Noise Filtering
4.2.3 Platform Segmentation
4.3 Building the Offline Database
4.3.1 Virtual Sensing
4.3.2 Kinect-Emulated Virtual Sensing
4.4 Pose Recognition
4.4.1 Point Cloud Similarity Comparison
4.4.2 Mean and Standard Deviation
4.5 Robotic Arm Grasping
4.5.1 Grasping Pose
4.5.2 Setting the Arm's Operating Origin
4.5.3 Actual Operation
Chapter 5: Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
Electronic full text (publicly available online from 2021-08-18)