National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 蔡勝峰 (TSAI, SHENG-FENG)
Title: 結合RGB-D感測器與深度學習方法於機械手臂虛實整合系統之研究
Title (English): Application of RGB-D Sensors and Deep Learning Methods to a Robotic Cyber-Physical Integration System
Advisor: 陳秋宏 (CHEN, CHIU-HUNG)
Committee Members: 嚴礽麒 (YAN, JENG-CHI); 陳鏡崑 (CHEN, CHING-KUN)
Oral Defense Date: 2024-07-16
Degree: Master's
University: Feng Chia University (逢甲大學)
Department: Department of Mechanical and Computer-Aided Engineering
Discipline: Engineering
Academic Field: Mechanical Engineering
Document Type: Academic thesis
Year of Publication: 2024
Graduation Academic Year: 112 (2023-24)
Language: Chinese
Number of Pages: 62
Keywords (Chinese): 機械手臂; 影像成像; 視覺處理; 深度學習; 虛實整合
Keywords (English): robot; image acquisition; image processing; deep learning; Cyber-Physical System (CPS)
Metrics:
  • Cited by: 0
  • Views: 7
  • Downloads: 0
  • Bookmarked: 0
With the rapid development of artificial intelligence and robotics, deep learning and cyber-physical integration technologies have shown great potential for improving the autonomous operation and intelligent control of robots. In the field of arm motion control in particular, applying these techniques not only improves the precision and flexibility of robotic arms, but also allows simulation and training in virtual environments, reducing experimental cost and risk. Current research nevertheless faces several challenges, including the limited interpretability of deep learning models, the large amounts of data required for training, and the synchronization and coordination problems of cyber-physical systems.
This thesis focuses on using deep learning methods and RGB-D sensing to identify the physical position of a cutting tool and compare it with the virtual setup in Roboguide, in order to track the error arising between the virtual and physical systems. In the experiments, image processing and deep learning techniques were applied to RGB-D images to extract accurate tool position information. These data were compared with the preset positions in the Roboguide simulation environment to analyze the virtual-physical positioning error under different conditions. The results show that accurate image processing and deep learning models can significantly improve tool position recognition accuracy and effectively track the error between the virtual and physical systems. This work is significant for improving the precision and reliability of industrial robotic arms and provides strong support for future intelligent manufacturing systems.
With the rapid development of artificial intelligence and robotics, deep learning and cyber-physical integration technologies show great potential in enhancing the autonomous operation and intelligent control of robots. Especially in the field of arm motion control, the application of these techniques not only improves the accuracy and flexibility of robot arms, but also allows simulation and training in virtual environments to reduce experimental cost and risk. However, current research faces several challenges, including the limited interpretability of deep learning models, the high data requirements during training, and the synchronisation and collaboration problems of cyber-physical systems.
In this thesis, a deep learning method and RGB-D sensing are used to identify the physical position of the tool and compare it with the virtual setup in Roboguide, in order to track the error generated between the virtual and the real systems. In the experiments, image processing and deep learning techniques were applied to the RGB-D images to extract accurate tool position information. These data were compared with the preset positions in the Roboguide simulation environment to analyse the positioning errors between the virtual and the real under different conditions. The results show that, with accurate image processing and deep learning models, the accuracy of tool position recognition can be significantly improved and the error between the virtual and real systems can be effectively tracked. This research is significant for improving the accuracy and reliability of industrial robotic arms and provides strong support for future intelligent manufacturing systems.
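The virtual-physical error tracking described in the abstract can be sketched in a few lines. This is a minimal illustration under assumed conventions, not the thesis's implementation: the hand-eye transform (R, t), the function names, and the example coordinates are all hypothetical, and in the actual system the detected position would come from the RGB-D/JL-DCF pipeline while the preset position would come from Roboguide.

```python
import numpy as np

def camera_to_base(p_cam, R, t):
    """Map a 3D point from the camera frame to the robot-base frame
    using a hand-eye calibration result (rotation R, translation t)."""
    return R @ p_cam + t

def virtual_real_error(p_detected_cam, p_virtual_base, R, t):
    """Euclidean gap between the tool position detected in the RGB-D
    image (camera frame) and the preset pose in the simulation
    (robot-base frame)."""
    p_real_base = camera_to_base(p_detected_cam, R, t)
    return np.linalg.norm(p_real_base - p_virtual_base)

# Illustrative values only: identity hand-eye transform and a 2 mm
# offset along x between the detected and preset positions.
R = np.eye(3)
t = np.zeros(3)
err = virtual_real_error(np.array([100.0, 50.0, 300.0]),
                         np.array([102.0, 50.0, 300.0]), R, t)
print(round(err, 3))  # prints 2.0
```

With both points expressed in the robot-base frame and millimetre units, the tracked error is simply their Euclidean distance; logging this value per frame is one straightforward way to monitor virtual-physical drift under different conditions.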
Acknowledgements i
Chinese Abstract ii
Abstract iii
Table of Contents iv
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation and Objectives 2
1.3 Thesis Organization 3
Chapter 2 Literature Review 4
2.1 Image Processing 4
2.1.1 Morphology 4
2.1.2 Erosion 4
2.1.3 Dilation 4
2.1.4 Closing 4
2.1.5 Opening 4
2.2 Camera Coordinate Systems 7
2.2.1 Zhang's Calibration Method 7
2.2.2 World Coordinate System 8
2.2.3 Camera Coordinate System 8
2.2.4 Image Coordinate System 8
2.2.5 Pixel Coordinate System 8
2.2.6 Coordinate System Transformations 9
2.3 RGB-D Camera Imaging Principles 10
2.3.1 Time of Flight (ToF) 10
2.3.2 Structured Light 11
2.4 Color Space Conversion 15
2.4.1 RGB Color Space 15
2.4.2 HSV Color Space 16
2.4.3 YCbCr Color Space 17
2.5 Deep Learning 18
2.5.1 RGB-D Multi-Image Fusion 18
2.5.2 Convolutional Neural Networks (CNN) 19
2.5.3 Deep Neural Networks (DNN) 20
2.5.4 Convolutional Layers 20
2.5.5 Pooling Layers 21
2.5.6 Fully Connected Layers 22
2.6 JL-DCF 23
2.6.1 Joint Learning (JL) 23
2.6.2 Densely-Cooperative Fusion (DCF) 24
2.7 Cyber-Physical Integration 26
Chapter 3 Research Equipment and Procedure 30
3.1 Research Method 30
3.2 Virtual Environment Setup 32
3.3 Research Equipment 33
3.3.1 Robotic Arm 33
3.3.2 Tool Design 37
3.3.3 Camera 39
3.4 Image Preprocessing 40
Chapter 4 Experimental Results and Discussion 43
4.1 Camera Selection 43
4.2 JL-DCF Performance with Different Models 44
4.3 Training Results for Different Tools 45
4.4 Comparison with Other Models 46
Chapter 5 Conclusions and Future Work 48
5.1 Conclusions 48
5.2 Future Work 49
References 50


Electronic Full Text (online public release date: 2026-07-31)