National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 商少瑜
Author (English): SHANG, SHAO-YU
Thesis Title: 基於機器學習二維影像辨識與三維物件姿態估測之眼在手機械手臂夾取系統
Thesis Title (English): Eye-in-Hand Robotic Arm Gripping System Based on Two-Dimensional Object Recognition Using Machine Learning and Three-Dimensional Object Posture Estimation
Advisor: 陳金聖
Advisor (English): CHEN, CHIN-SHENG
Committee Members: 陳金聖、林志哲、王銀添、蔣欣翰
Committee Members (English): CHEN, CHIN-SHENG; LIN, CHIH-JER; WANG, YIN-TIEN; JIANG, HSIN-HAN
Oral Defense Date: 2021-07-29
Degree: Master's
Institution: National Taipei University of Technology (國立臺北科技大學)
Department: Graduate Institute of Automation Technology (自動化科技研究所)
Discipline: Engineering
Field: Mechanical Engineering
Thesis Type: Academic thesis
Publication Year: 2021
Graduation Academic Year: 109
Language: Chinese
Pages: 62
Keywords (Chinese): 點雲、物件姿態估測、方向性邊界盒、自動夾持系統、機械手臂、座標轉換
Keywords (English): point cloud; object posture estimation; oriented bounding box; automatic gripping; robotic arm; coordinate transformation
Statistics:
  • Cited: 1
  • Views: 332
  • Score:
  • Downloads: 16
  • Bookmarked: 1
This study uses an eye-in-hand multi-view imaging system to capture object images from multiple viewpoints and build object point clouds. The point clouds acquired from different viewpoints are stitched together and used for posture estimation, yielding the object's position and complete posture in space; finally, a gripping method for a two-finger mechanical gripper is planned and applied to a robotic arm gripping system. The three-dimensional point cloud information of the object is obtained by imaging the scene with an Intel® RealSense D435i RGB-D camera. After the YOLOv3 deep network performs object recognition on the two-dimensional RGB image, the object point cloud is segmented from the scene point cloud according to the recognition bounding box. The object point clouds obtained from multiple viewpoints are stitched one by one into a nearly complete object point cloud; surface normal estimation and the Fast Point Feature Histogram are then used to describe and summarize the object's features, and the result is matched against pre-built object point cloud samples by coarse registration (Sample Consensus Initial Alignment) followed by fine registration (Iterative Closest Point) to compute the posture estimate of the complete object. Gripping planning is carried out with an oriented bounding box, which describes the object's posture in space more precisely and allows the gripper's grasping angle and contact positions to be planned accurately. Finally, this thesis integrates the aforementioned object recognition, posture estimation, and gripping planning methods on a KUKA LBR iiwa 7 R800 robotic arm and tests the system's gripping performance and stability through experiments.
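The fine-registration stage mentioned above can be sketched as a minimal point-to-point ICP in NumPy. This is an illustrative simplification, not the thesis's implementation: it assumes a brute-force nearest-neighbour search in place of the KD-tree and PCL routines a real system would use, and the function names are hypothetical.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=50, tol=1e-10):
    """Point-to-point ICP: repeatedly match nearest neighbours and re-fit."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest-neighbour search (a KD-tree is used in practice)
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=2)
        err = np.sqrt(d2.min(axis=1)).mean()    # current mean residual
        if abs(prev_err - err) < tol:
            break
        R, t = best_rigid_transform(src, tgt[d2.argmin(axis=1)])
        src = src @ R.T + t
        prev_err = err
    return src
```

In the thesis's pipeline a coarse SAC-IA alignment would run first, precisely so that ICP starts close enough for the nearest-neighbour correspondences to be mostly correct.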
This research proposes an eye-in-hand multi-view imaging system that captures object images from multiple perspectives and creates an object point cloud. The object point clouds from different perspectives are then stitched together, and posture estimation is performed to obtain the position and posture of the object in space; finally, a two-finger mechanical gripper grasping method is derived and applied to the gripping system of the robotic arm. The three-dimensional information of the object in the scene is captured with an Intel® RealSense Depth Camera D435i, and YOLOv3, a deep neural network-based object recognition algorithm, identifies the objects in the two-dimensional RGB image. The object point cloud is then segmented from the scene point cloud according to each object's recognition bounding box. After the point clouds segmented from multiple perspectives are merged into a nearly complete object point cloud, surface normal vectors are estimated and the Fast Point Feature Histogram (FPFH) is applied to calculate feature descriptions and statistics. A two-stage registration strategy, coarse matching (Sample Consensus Initial Alignment, SAC-IA) followed by fine matching (Iterative Closest Point, ICP), is used to estimate the posture of the object against the pre-established object point cloud samples. Furthermore, this research proposes using an Oriented Bounding Box (OBB) to describe the object's pose, from which the grasping posture of the mechanical gripper and the finger positions can be generated more accurately. Finally, this research integrates the aforementioned object identification, posture estimation, and gripping planning methods on the KUKA LBR iiwa 7 R800 robotic arm. The experimental results verify the gripping performance and stability of the system.
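As a concrete illustration of the OBB step, the sketch below computes a PCA-based oriented bounding box with NumPy. It is a minimal stand-in for the covariance-eigenvector OBB construction of Gottschalk et al., not the author's code; the function name and the rotated-box example are illustrative assumptions.

```python
import numpy as np

def oriented_bounding_box(points):
    """PCA-based oriented bounding box of a point set.

    Returns (center, axes, extents): rows of 'axes' are the box's principal
    directions (largest variance first); 'extents' are the half-lengths.
    """
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    # Principal axes = eigenvectors of the covariance matrix
    _, eigvecs = np.linalg.eigh(np.cov((pts - mean).T))
    axes = eigvecs.T[::-1]            # reorder: largest variance first
    local = (pts - mean) @ axes.T     # project points into the box frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    center = mean + ((lo + hi) / 2.0) @ axes
    extents = (hi - lo) / 2.0
    return center, axes, extents

# Example: corners of a 4 x 2 x 1 box rotated 45 degrees about z
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[x, y, z] for x in (-2.0, 2.0)
                              for y in (-1.0, 1.0)
                              for z in (-0.5, 0.5)])
center, axes, extents = oriented_bounding_box(corners @ R.T)
```

The recovered axes and extents give exactly the quantities the gripping planner needs: a grasp axis (shortest box direction) and the finger opening width along it.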
Abstract (Chinese) i
ABSTRACT ii
Acknowledgements iv
Table of Contents v
List of Tables vii
List of Figures ix
Chapter 1 Introduction 1
1.1 Background and Objectives 1
1.2 Literature Review 1
1.3 Research Methods 3
1.4 Thesis Organization 4
Chapter 2 System Architecture and Description 5
2.1 Hardware Architecture 5
2.1.1 3D Imaging Module 5
2.1.2 3D Imaging Module for Sample Construction 6
2.1.3 Robotic Arm and Gripper 6
2.2 Software Architecture 8
2.2.1 Point Cloud 8
2.2.2 Robotic Arm Human-Machine Interface 9
2.3 Overall System Workflow 9
Chapter 3 Object Posture Estimation 11
3.1 Object Point Cloud Registration 12
3.1.1 Building Point Cloud Samples 12
3.1.2 Acquiring Object Point Clouds 12
3.1.3 Point Cloud Registration 15
3.2 Object Oriented Bounding Box (OBB) 22
Chapter 4 Object Gripping Planning 26
4.1 Gripping Planning 26
4.1.1 Selecting the Gripping Axis 26
4.1.2 Gripper Contact Point Planning 27
4.1.3 Gripping Plan Verification 29
4.2 Coordinate System Integration 30
4.2.1 Homogeneous Transformation Matrix 30
4.2.2 Coordinate System Transformation 31
4.2.3 Transformation Between the Tool and OBB Coordinate Systems 32
Chapter 5 Experimental Results 36
5.1 Experimental Hardware Setup 38
5.2 Object Sample Construction 39
5.3 Coordinate System Integration 40
5.4 Point Cloud Registration Results 44
5.5 Object Gripping Test Results 48
5.6 Comparison of Single-View and Multi-View Imaging 54
Chapter 6 Conclusions and Future Work 58
6.1 Conclusions 58
6.2 Future Work 59
References 60


[1] R. G. Dorsch, G. Häusler, and J. M. Herrmann, "Laser triangulation: fundamental uncertainty in distance measurement," Applied Optics, vol. 33, no. 7, 1994, pp. 1306-1314.
[2] 廖至欣, "Error correction techniques for digital structured-light three-dimensional profile measurement," Proceedings of the 21st National Conference of the Chinese Society of Mechanical Engineers, Kaohsiung, 2004.
[3] R. B. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in Robotics and Automation (ICRA), 2011 IEEE International Conference on, 2011, pp. 1-4.
[4] M. Alexa and A. Adamson, "On normals and projection operators for surfaces defined by point sets," in Proceedings of the First Eurographics Conference on Point-Based Graphics, 2004, pp. 149-155.
[5] J. Huang and S. You, "Point cloud matching based on 3D self-similarity," in Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, Providence, RI, 2012, pp. 41-48.
[6] J. Jiang, J. Cheng, and X. Chen, "Registration for 3-D point cloud using angular-invariant feature," Neurocomputing, vol. 72, 2009, pp. 3839-3844.
[7] E. Wahl, U. Hillenbrand, and G. Hirzinger, "Surflet-pair-relation histograms: a statistical 3D-shape representation for rapid classification," in 3-D Digital Imaging and Modeling (3DIM 2003), Proceedings, Fourth International Conference on, Banff, Alta., 2003, pp. 474-481.
[8] R. B. Rusu, N. Blodow, and M. Beetz, "Fast point feature histograms (FPFH) for 3D registration," in Robotics and Automation (ICRA '09), IEEE International Conference on, Kobe, 2009, pp. 3212-3217.
[9] R. B. Rusu, G. Bradski, R. Thibaux, and J. Hsu, "Fast 3D recognition and pose using the viewpoint feature histogram," in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, Taipei, 2010, pp. 2155-2162.
[10] S. Rusinkiewicz and M. Levoy, "Efficient variants of the ICP algorithm," in 3-D Digital Imaging and Modeling, Proceedings, Third International Conference on, Quebec City, Que., 2001, pp. 145-152.
[11] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," in Robotics-DL Tentative, 1992, pp. 586-606.
[12] 蔡銘富, "Three-dimensional object recognition and tracking based on stereo vision," Master's thesis, Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei, 2009.
[13] G. Mei, "RealModel: a system for modeling and visualizing sedimentary rocks," Doctoral thesis, China University of Geosciences, Beijing, China, 2014.
[14] N. T. Hung, "3-D Object Recognition and Localization of Randomly Stacked Objects for Automation," Doctoral thesis, Electromechanical Technology, National Taipei University of Technology, Taipei, Taiwan, 2015.
[15] 盧冠妤, "Design of grasping position and posture for a robotic arm combined with point cloud image recognition," Master's thesis, Graduate Institute of Mechanical Engineering, National Taiwan University, Taipei, 2018.
[16] L. Cardenas, "Semi-Structured 2.5 Dimensional Object Classification with Deep Learning for Robotic Grasping Systems," Master's thesis, Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei, Taiwan, 2018.
[17] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: unified, real-time object detection," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779-788.
[18] S. Gottschalk, M. C. Lin, and D. Manocha, "OBBTree: a hierarchical structure for rapid interference detection," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, Louisiana, 1996, pp. 171-180.
[19] Intel RealSense D400 Family Datasheet, https://www.mouser.com/datasheet/2/612/Intel_RealSense_D400_Family_Datasheet_Jan2019-1571271.pdf
[20] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection," The International Journal of Robotics Research, 2017. doi:10.1177/0278364917710318
[21] Q. Lu, M. Van der Merwe, B. Sundaralingam, and T. Hermans, "Multi-fingered grasp planning via inference in deep neural networks," IEEE Robotics & Automation Magazine, 2020, pp. 55-65. doi:10.1109/MRA.2020.2976322
