National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Researcher: 馮威
Researcher (romanized): FENG, WEI
Thesis Title: 智能機器人服務系統的設計與實現
Thesis Title (English): The Design and Implementation of an Intelligent Robot Service System
Advisor: 張榮貴
Advisor (romanized): ZHANG, RONG-GUI
Committee Members: 陳璽煌, 薛幼苓, 蔡榮婷
Committee Members (romanized): CHEN, XI-HUANG; XUE, YOU-LING; CAI, RONG-TING
Oral Defense Date: 2020-07-21
Degree: Master's
University: 國立中正大學 (National Chung Cheng University)
Department: 資訊工程研究所 (Graduate Institute of Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2020
Graduation Academic Year: 108
Language: Chinese
Pages: 57
Keywords (Chinese): 機器手臂 (robotic arm), 自走車/移動機器人 (mobile robot), 視覺系統 (visual system), 標籤 (tag), Robot Operating System (ROS)
Keywords (English): 6-DoF robotic arm, AGV (Automated Guided Vehicle), visual system, tag, Robot Operating System (ROS)
Metrics:
  • Cited: 0
  • Views: 650
  • Downloads: 59
  • Bookmarked: 1
Abstract (translated from Chinese):

The rapid pace of industrial development has brought us to the era of Industry 4.0. Many systems now combine artificial intelligence and big data, producing intelligent machines: the robotic arms on smart-factory production lines and the warehouse robots that move goods in storage depots both use computing and digitization to achieve more convenient, higher-efficiency production.

Beyond the factory floor, these robotic arms and warehouse robots can also make daily life more convenient. Given the yearly growth in demand for service robots and the continuing growth of the large food-service market, labor-saving devices such as coffee-brewing robotic arms and meal-delivery robots have appeared. Combining an arm with a mobile robot would be more convenient still and could handle more complex tasks. In practice, however, the positioning error of the mobile base can make the arm miss its grasp, causing task failure or danger, so integrated systems of the two are rarely seen in ordinary environments. We therefore propose a service-robot system that combines the two and uses a vision system to mitigate the unstable grasping caused by that error; this motivates this thesis.

In this thesis, we propose a system that, in our designed environment, uses a mobile robot and a robotic arm to deliver and retrieve meals, with the aim of reducing labor costs. Accurately grasping a cup in three-dimensional space is the key problem we must solve. We describe how a vision system that recognizes tags corrects the arm's pose information, enabling more accurate grasping and a higher task completion rate.

Abstract (English):

The rapid pace of industrial development has brought us to the era of Industry 4.0. Many systems now combine artificial intelligence and big data to produce intelligent machines, such as the robotic arms on smart-factory production lines and the automated guided vehicles (AGVs) in automated warehouses. Both use computing and digitization to achieve more convenient and efficient production.

Beyond the factory production line, robotic arms and warehouse robots can also make everyday life more convenient. Judging from the annual growth of service robots and the continued growth of the catering industry, labor-saving devices such as coffee-making robotic arms and food-delivery robots have appeared on the market. Combining an arm with a mobile robot would be more convenient still and could handle more complicated tasks, but in reality AGV positioning errors may cause the robotic arm's work to fail or even become dangerous. As a result, equipment integrating the two is rarely seen in ordinary environments. We therefore propose a service-robot system that combines the two and uses a vision system to mitigate the gripping error, which motivates this paper.

In this paper, we propose a system that uses an automated guided vehicle and a robotic arm to deliver and recover meals in the environment we designed, with the aim of reducing labor costs. Accurately grasping a cup in three-dimensional space is the problem we must solve. This paper describes how the vision system corrects the robotic arm's pose information by recognizing tags, enabling more accurate grasping and improving the task completion rate.
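The correction idea described above can be sketched in a few lines: if the camera sees a known tag at a position that differs from where the tag would appear had the AGV parked exactly on its goal pose, that difference estimates the base's positioning error, and shifting the nominal grasp point by it re-centers the gripper on the cup. This is a minimal illustrative sketch, not the thesis's actual implementation; the function and variable names are hypothetical, and real systems would work with full 6-DoF tag poses (e.g. from ArUco detection) rather than bare 3-D points.

```python
def correct_grasp_target(nominal_cup_pos, expected_tag_pos, observed_tag_pos):
    """Shift the nominal grasp point by the AGV's estimated positioning error.

    The error is approximated as (observed - expected) tag position in the
    arm's base frame; all arguments are (x, y, z) tuples in metres.
    Hypothetical helper for illustration only.
    """
    error = tuple(o - e for o, e in zip(observed_tag_pos, expected_tag_pos))
    return tuple(n + d for n, d in zip(nominal_cup_pos, error))

# Example: the AGV stopped 2 cm long and 1 cm to the right of its goal,
# so the grasp target is shifted by the same offset before planning.
corrected = correct_grasp_target(
    nominal_cup_pos=(0.40, 0.10, 0.25),
    expected_tag_pos=(0.50, 0.00, 0.20),
    observed_tag_pos=(0.52, -0.01, 0.20),
)
print(corrected)
```

A pure translational offset like this only compensates position; when the base also stops at a slightly wrong heading, the observed tag orientation would additionally be used to rotate the target into the corrected frame.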

Chapter 1. Introduction 1
1. Market Trends 1
2. Motivation and Objectives 3
Chapter 2. Background 4
1. Intelligent Robots 4
1.1. Industrial Robots 5
1.2. Combat Robots 5
1.3. Research Robots 6
1.4. Service Robots 6
2. Robot Operating System (ROS) 7
2.1. ROS Master 7
2.2. Node 9
2.3. Message 10
2.4. Topic 10
2.5. Service 11
2.6. Actionlib 11
2.7. CvBridge 12
3. Experimental Tools 13
3.1. EAI N1 Mobile Platform 13
3.2. Neocobot OMNI6 15
3.3. Intel D435 Camera 18
3.4. EDIMAX Bluetooth Transmitter 19
Chapter 3. Related Work 20
1. Service Robots 20
2. SLAM 20
3. Path Planning 21
3.1. A* Search Algorithm 21
4. Robotic Arm Motion 22
Chapter 4. Method 24
1. Experimental Environment 24
1.1. Hardware 24
1.2. Software 24
2. System Architecture 24
2.1. Hardware Architecture 24
2.2. Software Architecture 26
3. Method 27
3.1. Startup 27
3.2. Preprocessing 27
3.3. Waiting for Commands 30
3.4. Meal Delivery and Recovery 31
Chapter 5. Experimental Results 37
1. Experimental Procedure 37
2. Inserting Waypoints to Improve Path Planning 39
3. Tuning the Correction Frequency 40
4. Implementation Results 41
4.1. Experimental Environment 41
4.2. Scenario 1 (Meal Delivery) 42
4.3. Scenario 2 (Meal Recovery) 43
Chapter 6. Conclusion 45
Chapter 7. References 46
