National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: Yu-Ru Chen (陳妤如)
Title: Robot Arm Autonomous Object Grasping System Based on 2D and 3D Vision Techniques
Advisor: Chyi-Yeu Lin (林其禹)
Committee members: Yuan-Chiu Lin (林遠球), Po-Ting Lin (林柏廷), Chyi-Yeu Lin (林其禹)
Defense date: 2019-07-29
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Department of Mechanical Engineering
Discipline: Engineering
Field: Mechanical Engineering
Document type: Academic thesis
Year of publication: 2019
Graduation academic year: 107 (2018–2019)
Language: Chinese
Pages: 86
Keywords: Autonomous Object Grasping; Image Processing; 2D Object Recognition; Point Pair Features; Image Based Visual Servoing; Perspective-n-Point
Usage statistics:
  • Cited: 0
  • Views: 1284
  • Downloads: 7
  • Bookmarks: 1
This research develops an integrated 2D and 3D vision system that directs a robot arm to grasp objects fully autonomously, and combines it with a six-axis serial robot arm for actual grasping. The techniques used include deep-learning-based 2D object recognition, Point Pair Features (PPF), Image Based Visual Servoing (IBVS), and Perspective-n-Point (PnP). Given the variety of object types and scenes in a home environment, 3D object pose estimation is the core of the system. PPF is an effective 6D object pose estimation technique, but it requires extensive sampling, which leads to a huge computational load, and its matching results can be wrong. This study therefore first applies deep-learning object recognition to the RGB image to find the object's 2D pixel position, converts that 2D position into a 3D coordinate in the RGB-D camera frame, and then retains only the point cloud around the object's approximate location for matching. Removing the unneeded points eliminates a large part of the sampling process, greatly reduces matching time, and raises the recall rate of PPF matching. Once matching is complete, the robot arm is guided to a grasp position predefined on the matched object's template. To overcome various residual errors, the system then uses artificial markers on the object to perform IBVS or PnP, moving the gripper to a more precise grasping position. In contrast to IBVS, which can grasp moving objects but converges slowly, PnP can quickly move the gripper to a precise grasping position for a stationary object. Several object grasping experiments confirm the practicality and time efficiency of the developed system.
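The core speed-up described above can be sketched in a few lines: a 2D detector localizes the object in the RGB image, the detected pixel is back-projected into the camera frame using the depth value, and only the nearby point cloud is kept before the expensive PPF matching. The following Python sketch illustrates the idea; the intrinsics, the detection, and the depth value are assumptions for illustration, not values from the thesis.

```python
# Minimal sketch (not the thesis's actual code) of the 2D-to-3D hand-off:
# use a 2D detection to localize the object, back-project its pixel position
# to 3D with the RGB-D camera's pinhole model, and keep only the nearby
# point cloud before running the (expensive) PPF matching.
import numpy as np

# Assumed pinhole intrinsics of the RGB-D camera (fx, fy, cx, cy in pixels).
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def backproject(u, v, z):
    """Convert a pixel (u, v) with depth z (meters) to a 3D camera-frame point."""
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def crop_cloud(cloud, center, radius=0.15):
    """Keep only points within `radius` meters of the estimated object center,
    discarding background points so PPF samples far fewer point pairs."""
    dist = np.linalg.norm(cloud - center, axis=1)
    return cloud[dist < radius]

# Hypothetical 2D detection (e.g., from a deep-learning detector) and depth lookup.
u, v = 350, 260                      # detected object center in pixels
depth_m = 0.62                       # depth at (u, v) from the depth image
center_3d = backproject(u, v, depth_m)

cloud = np.random.rand(100_000, 3)   # stand-in for the full RGB-D point cloud
roi = crop_cloud(cloud, center_3d)   # much smaller cloud passed on to PPF matching
print(center_3d, roi.shape)
```

Because PPF cost grows with the number of sampled point pairs, shrinking the cloud to the detected region cuts both sampling and matching time, which is the effect evaluated in Section 5-1.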
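The final error-correction step can likewise be illustrated: given the known 3D corner layout of an artificial marker on the object and its detected 2D pixel positions, OpenCV's solvePnP recovers the marker pose in the camera frame, which can then be used to refine the gripper's grasp position. The marker size, corner detections, and camera matrix below are hypothetical, not the thesis's values.

```python
# Hedged illustration of the PnP-based final correction: solve for the pose
# of a known square fiducial marker from its detected 2D corner pixels.
import numpy as np
import cv2

# 3D marker corners in the marker's own frame (an assumed 4 cm square, meters).
obj_pts = np.array([[-0.02, -0.02, 0.0],
                    [ 0.02, -0.02, 0.0],
                    [ 0.02,  0.02, 0.0],
                    [-0.02,  0.02, 0.0]], dtype=np.float32)

# Detected 2D corner positions in the image (hypothetical detections, pixels).
img_pts = np.array([[312.4, 248.1],
                    [356.9, 250.3],
                    [354.7, 295.0],
                    [310.2, 292.6]], dtype=np.float32)

K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)  # assumed intrinsics
dist = np.zeros(5, dtype=np.float32)                     # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the marker pose
    print("marker position in camera frame (m):", tvec.ravel())
```

For a stationary object, this single-shot pose estimate avoids the slow iterative convergence of IBVS, matching the comparison made in the abstract.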
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1-1 Preface
1-2 Research Motivation and Objectives
1-3 Literature Review
1-4 Thesis Organization
Chapter 2 Fundamental Theory
2-1 Camera System
2-1-1 Camera Imaging Principles
2-1-2 Intrinsic Parameters
2-1-3 Extrinsic Parameters
2-1-4 Distortion Coefficients
2-1-5 Depth Cameras
2-2 Deep-Learning-Based 2D Object Recognition
2-3 3D Object Pose Estimation
2-4 Visual Servoing
2-5 The Perspective-n-Point (PnP) Problem
2-6 Image Feature Point Detection
2-7 Robot Arm Kinematics
Chapter 3 Fully Autonomous Object Grasping System
3-1 System Architecture and Workflow
3-2 3D Object Pose Estimation Combined with 2D Object Recognition
3-3 Final Error Correction
3-3-1 Artificial Marker Detection
3-3-2 Image Based Visual Servoing
3-3-3 Perspective-n-Point
3-4 System Limitations
Chapter 4 Experimental Equipment and Environment Setup
4-1 Six-Axis Robot Arm
4-2 3D Model Scanner
4-3 Depth Camera
4-4 2D Camera
4-5 Objects for Grasping
4-6 Gripper
4-7 Computer Specifications
4-8 Environment Setup
Chapter 5 Experimental Results
5-1 Speed Improvement from Combining 3D Object Pose Estimation with 2D Object Recognition
5-2 Convergence Speed Improvement of Visual Servoing
5-3 System Error Analysis Using PnP
5-4 Demonstration of the Fully Autonomous Object Grasping System
Chapter 6 Conclusions and Future Work
6-1 Conclusions
6-2 Future Work
References