National Digital Library of Theses and Dissertations in Taiwan

Thesis Detail
Researcher: LEE, SHIH-HAO (李世豪)
Thesis Title: Annulus Object Robotic Arm Random Bin Picking System Based on Rapid Establishment of RGB-D Images (基於快速建立RGB-D影像之機械手臂環形物件堆疊夾取系統)
Advisor: SHAW, JIN-SIANG (蕭俊祥)
Committee Members: SHAW, JIN-SIANG (蕭俊祥); LEE, CHUN-YING (李春穎); LEE, FU-SHIN (李福星)
Oral Defense Date: 2022-07-21
Degree: Master's
Institution: National Taipei University of Technology
Department: Graduate Institute of Manufacturing Technology
Discipline: Engineering
Field of Study: Mechanical Engineering
Thesis Type: Academic thesis
Year of Publication: 2022
Graduation Academic Year: 111 (ROC calendar)
Language: Chinese
Pages: 68
Keywords (Chinese): robot arm; bin picking; deep learning network; instance segmentation
Keywords (English): Robot Manipulator; Random bin picking; Deep Learning; Instance segmentation
Statistics:
  • Cited by: 0
  • Views: 108
  • Rating: none
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iv
Table of Contents v
List of Tables vii
List of Figures viii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Literature Review 2
1.3 Research Methods 5
1.4 Thesis Organization 6
Chapter 2 System Architecture 7
2.1 Experimental Equipment 7
2.1.1 Robotic Arm 7
2.1.2 Electric Gripper 8
2.1.3 RGB-D Camera 10
2.2 Computer and Software Architecture 11
2.2.1 Python Development Environment: Visual Studio 11
2.2.2 Java Development Environment: KUKA Sunrise Workbench 12
2.2.2.1 I/O Setup with KUKA WorkVisual 13
2.3 Communication Architecture 14
2.3.1 Socket Communication Between the Robotic Arm and PC 15
Chapter 3 Coordinate System Integration 16
3.1 Camera Image Calibration 16
3.1.1 RGB-D Image Registration 17
3.2 Hand-Eye Calibration Between the Robotic Arm and Camera 19
3.3 Robotic Arm Kinematics 20
3.3.1 D-H Notation 21
3.3.2 Forward and Inverse Kinematics 22
Chapter 4 Experimental Research Methods 25
4.1 Data Collection 25
4.1.1 Image Data Augmentation 26
4.1.2 Image Annotation 27
4.2 Deep-Learning-Based Object Recognition 29
4.2.1 Mask R-CNN Network 29
4.2.2 Training Model Architecture 31
4.2.3 Evaluation Metrics 33
4.3 Grasp Evaluation 37
4.3.1 Grasp Point Evaluation 38
4.3.2 Object Pose Evaluation 39
Chapter 5 Experiments and Discussion 41
5.1 Experimental Environment 41
5.2 Experimental Procedure 42
5.2.1 Object Recognition Procedure 42
5.2.2 Object Grasping Procedure 46
5.3 Experimental Results 51
5.3.1 Grasping Without Object Stacking 51
5.3.2 Grasping With Object Stacking 53
5.4 Discussion and Improvement of Experimental Results 54
Chapter 6 Conclusions and Future Work 63
6.1 Conclusions 63
6.2 Future Work 64
References 65


[1] International Federation of Robotics, "IFR presents World Robotics 2021 reports."
[2] Image source: https://www.assemblymag.com/articles/94549-random-bin-picking-comes-of-age, June 2022.
[3] Image source: MWES Random Bin Picking: https://www.mwes.com/random-bin-picking/, June 2022.
[4] Image source: SOLOMON intelligent random bin picking system: https://www.solomon.com.tw/produc/智慧型隨機取放系統/, June 2022.
[5] Image source: Photoneo: https://www.photoneo.com/bin-picking/
[6] Image source: FANUC: https://www.fanucamerica.com/solutions/applications/picking-and-packing-robots, June 2022.
[7] C. Wu, S. Jiang and K. Song, "CAD-based pose estimation for random bin-picking of multiple objects using a RGB-D camera," 2015 15th International Conference on Control, Automation and Systems (ICCAS), 2015, pp. 1645-1649.
[8] H. Y. Kuo, H. R. Su, S. H. Lai and C. C. Wu, "3D Object Detection and Pose Estimation from Depth Image for Robotic Bin Picking," Proc. of IEEE International Conference on Automation Science and Engineering, 2014, pp. 1264-1269.
[9] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, 1998, pp. 2278-2324.
[10] Image source: https://papers.readthedocs.io/en/latest/imagedetection/rcnn/, June 2022.
[11] L. Pinto and A. Gupta, "Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours," 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 3406-3413.
[12] H. Wang, H. Situ and C. Zhuang, "6D Pose Estimation for Bin-Picking based on Improved Mask R-CNN and DenseFusion," 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), 2021, pp. 1-7.
[13] KUKA Roboter GmbH, "Operating and Programming Instructions for System Integrators," 2016, pp. 22-23.
[14] Image source: http://www.toyorobot.com/Product, June 2022.
[15] Image source: https://kheresy.wordpress.com/2014/12/29/kinect-for-windows-sdk-v2-basic/, June 2022.
[16] P. Fankhauser, M. Bloesch, D. Rodriguez, R. Kaestner, M. Hutter and R. Siegwart, "Kinect v2 for mobile robot navigation: Evaluation and modeling," 2015 International Conference on Advanced Robotics (ICAR), 2015, pp. 388-394.
[17] KUKA Roboter GmbH, "WorkVisual," 2016, pp. 11-12.
[18] Image source: https://www.vision-doctor.com/en/optical-errors/distortion.html, June 2022.
[19] J. Jiao, L. Yuan, W. Tang, Z. Deng and Q. Wu, "A Post-Rectification Approach of Depth Images of Kinect v2 for 3D Reconstruction of Indoor Scenes," ISPRS International Journal of Geo-Information, vol. 6, no. 11, 2017, p. 349.
[20] Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, pp. 666-673.
[21] R. Y. Tsai and R. K. Lenz, "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345-358.
[22] F. C. Park and B. J. Martin, "Robot sensor calibration: solving AX=XB on the Euclidean group," IEEE Transactions on Robotics and Automation, vol. 10, no. 5, pp. 717-721, Oct. 1994.
[23] R. Horaud and F. Dornaika, "Hand-eye Calibration," The International Journal of Robotics Research, SAGE Publications, vol. 14, no. 3, 1995, pp. 195-210.
[24] N. Andreff, R. Horaud and B. Espiau, "On-line hand-eye calibration," Second International Conference on 3-D Digital Imaging and Modeling (Cat. No. PR00062), 1999, pp. 430-436.
[25] K. H. Strobl and G. Hirzinger, "Optimal Hand-Eye Calibration," 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 4647-4653.
[26] R. S. Hartenberg and J. Denavit, "A kinematic notation for lower pair mechanisms based on matrices," Journal of Applied Mechanics, vol. 77, June 1955, pp. 215-221.
[27] M. W. Spong, S. Hutchinson and M. Vidyasagar, "Robot Modeling and Control," First Edition, 2006.
[28] S. Doliwa, "Inverse kinematics of the KUKA LBR iiwa R800 (7 DOF)," Zenodo, 2020. https://doi.org/10.5281/zenodo.4063575
[29] Image source: http://www.yi-fon.com/pd47560086.html, June 2022.
[30] M. Kuo, H.-T. Chan and C.-H. Hsia, "Study on Mask R-CNN with Data Augmentation for Retail Product Detection," 2021 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), 2021, pp. 1-2.
[31] Image source: https://www.wpgdadatong.com/tw/blog/detail?BID=B1319, July 2022.
[32] R. Girshick, J. Donahue, T. Darrell and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.
[33] 王仁蔚, "Instance segmentation and representation learning applied to robotic arm grasping of stacked objects," Master's thesis, Graduate Institute of Mechanical Engineering, National Taiwan University, Taipei, 2019.
[34] K. He, G. Gkioxari, P. Dollár and R. Girshick, "Mask R-CNN," 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980-2988.
[35] J. Long, E. Shelhamer and T. Darrell, "Fully Convolutional Networks for Semantic Segmentation." https://doi.org/10.48550/arXiv.1411.4038
[36] COCO: Common Objects in Context dataset. https://cocodataset.org/#home
[37] K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778. doi: 10.1109/CVPR.2016.90.
[38] C. Szegedy et al., "Going deeper with convolutions," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1-9.
[39] Image source: https://chih-sheng-huang821.medium.com/深度學習系列-什麼是ap-map-aaf089920848, July 2022.
[40] D. Morrison, P. Corke and J. Leitner, "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach," Proc. of Robotics: Science and Systems (RSS), 2018.
[41] Image source: https://www.cnblogs.com/xwgli/p/7045562.html, July 2022.

Electronic Full Text (publicly available online from 2024-08-31)