Graduate Student: 王子維
Graduate Student (English): WANG, ZI-WEI
Thesis Title: 人機即時三維動態追蹤與智能化防碰撞監控系統
Thesis Title (English): Dynamic Real-time 3D Tracking and Intelligent Collision Avoidance for Human-Machine Collaboration
Advisor: 張文陽
Advisor (English): CHANG, WEN-YANG
Oral Committee Members: 蕭俊卿、張哲華
Oral Committee Members (English): HSIAO, CHUN-CHING; CHANG, CHE-HUA
Oral Defense Date: 2020-07-13
Degree: Master's
Institution: 國立虎尾科技大學 (National Formosa University)
Department: 機械與電腦輔助工程系碩士班 (Master's Program, Department of Mechanical and Computer-Aided Engineering)
Discipline: Engineering
Field of Study: Mechanical Engineering
Thesis Type: Academic thesis
Year of Publication: 2020
Graduation Academic Year: 108
Language: Chinese
Number of Pages: 76
Keywords (Chinese): 點雲模型、點雲處理、動態3D追蹤、智能防碰撞、機械手臂
Keywords (English): Point cloud model, Point cloud processing, Dynamic 3D tracking, Intelligent collision avoidance, Robot
Usage statistics:
  • Cited by: 0
  • Views: 67
  • Downloads: 0
  • Bookmarked: 1
In recent years, with the rise of automation and intelligent technologies, robots and automated guided vehicles (AGVs) have been widely adopted across industries and have become indispensable machines on production lines. Because robots and AGVs execute relatively complex paths over a wide working range, the probability of collision is comparatively high on production lines where many machines and people are moving, and even on-site real-time monitoring by personnel can fail through human error. The operational safety of robots and AGVs has therefore become a critical concern. In view of this, this study develops an intelligent collision-avoidance system for robots and AGVs: a depth camera captures the depth information of the scene, a point cloud model of the scene is constructed from it, and point cloud algorithms then track the robot in three dimensions, monitor the environment around it, and prevent collisions through distance detection.

The study consists of four main parts. First, the depth camera captures the depth information of the site, which is converted into point cloud data to build a point cloud model of the whole environment. A statistical outlier removal filter and a voxel grid filter pre-process the point cloud, Euclidean cluster extraction then segments the model, and separate point cloud models of the robot and of the surrounding environment are built. An octree radius search is applied to the two point cloud models to update their poses, achieving dynamic tracking of the robot's 3D pose.

Second, for the intelligent collision-avoidance function, octree spatial change detection is applied to the scene point cloud to identify foreign objects such as people and machines entering the scene and to build their point cloud models. An octree radius search against the robot's point cloud model then computes the minimum distance between the foreign object and the robot, and a user-defined threshold serves as the safety distance for judging collision risk. Experiments quantify the influence of the point cloud algorithms on the overall system: the computation time per frame is about 0.067 s, and the system accuracy is about 0.76 cm when the robot moves at a linear velocity of 30 mm/s.

Third, the collision-avoidance system is integrated with the robot: a self-designed monitoring interface communicates with the robot controller, the PC exchanges information with the controller over TCP/IP, and commands are issued to the robot according to the decisions of the collision-avoidance system, so that even without human supervision the robot can assess its surroundings and take preventive action.

Fourth, a cloud networking system is built: the PC collects the robot's status information, MQTT manages the messages in the cloud, and a cloud web dashboard presents the robot's status. Integrating all of the above achieves the goals of unmanned and intelligent operation.
In recent years, with the rise of intelligent technology, automation has become a trend in industry. Robots and automated guided vehicles (AGVs) are now widely deployed and have become indispensable machines on production lines. However, because robots and AGVs perform complex motions over a wide working range, collisions between robots and people occur with relatively high probability, so collision detection for robots is very important. For these reasons, this study develops an intelligent collision-avoidance system for robots and AGVs. The system captures the depth information of the scene with a depth camera, constructs point cloud models of the scene, tracks the 3D pose of the robot, and monitors the robot's surroundings through point cloud model algorithms. First, the point cloud models of the robot and the environment are built by point cloud pre-processing and separated by Euclidean cluster extraction; the robot's 3D pose is then tracked with an octree radius search. Second, the system detects people and machines entering the scene with an octree point cloud change detector, and the minimum distance between the point cloud models of the foreign object and the robot is calculated with an octree point cloud search. Experiments show that the computation time per frame is about 0.067 s and the collision-distance error is about 0.76 cm when the robot's linear velocity is 30 mm/s. Third, the intelligent collision-avoidance system is connected to the robot controller: the collision-avoidance results are sent to the robot over the TCP/IP protocol. Fourth, a cloud platform is built: status information is collected from the robot, the robot controller uploads it to the cloud platform for management and storage, and a cloud dashboard displays the robot's status.
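The pre-processing and segmentation steps named in the abstracts (statistical outlier removal, voxel grid downsampling, Euclidean cluster extraction) map directly onto the Point Cloud Library (PCL). The thesis does not publish its source code, so the following is only a minimal sketch; the filter parameters (neighbour count, deviation threshold, voxel size, cluster tolerance) are assumed placeholder values, not the values used in the experiments.

#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Denoise and downsample one depth-camera frame before segmentation.
Cloud::Ptr preprocess(const Cloud::Ptr& raw) {
  Cloud::Ptr denoised(new Cloud), downsampled(new Cloud);

  // Statistical outlier removal: drop points whose mean neighbour distance
  // deviates too far from the global distribution.
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(raw);
  sor.setMeanK(50);             // neighbours per point (assumed value)
  sor.setStddevMulThresh(1.0);  // deviation threshold (assumed value)
  sor.filter(*denoised);

  // Voxel grid filter: replace each occupied voxel by its centroid.
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(denoised);
  voxel.setLeafSize(0.01f, 0.01f, 0.01f);  // 1 cm voxels (assumed value)
  voxel.filter(*downsampled);
  return downsampled;
}

// Euclidean cluster extraction: split the scene into clusters so the robot
// and the surrounding environment can be handled as separate models.
std::vector<pcl::PointIndices> segment(const Cloud::Ptr& cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.02);   // 2 cm link distance (assumed value)
  ec.setMinClusterSize(100);      // assumed value
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);
  ec.extract(clusters);
  return clusters;
}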
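The abstracts state that an octree radius search over the robot and environment point cloud models is used to update the model poses each frame. One plausible reading, shown below purely as an illustrative sketch, is to keep the points of the current frame that fall within a search radius of the previous robot model as the updated robot model; the octree resolution and search radius are assumptions, and the function name is hypothetical.

#include <set>
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/octree/octree_search.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Points of the current frame lying within `radius` of any point of the
// previous robot model are kept as the updated robot model for this frame.
Cloud::Ptr updateRobotModel(const Cloud::Ptr& frame, const Cloud::Ptr& prevRobot,
                            float resolution, float radius) {
  pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(resolution);
  octree.setInputCloud(frame);
  octree.addPointsFromInputCloud();

  std::set<int> matched;          // de-duplicate indices hit by several queries
  std::vector<int> idx;
  std::vector<float> sqrDist;
  for (const auto& p : prevRobot->points) {
    if (octree.radiusSearch(p, radius, idx, sqrDist) > 0)
      matched.insert(idx.begin(), idx.end());
  }

  Cloud::Ptr updated(new Cloud);
  for (int i : matched) updated->push_back(frame->points[i]);
  return updated;
}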
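For the collision-avoidance step, PCL's octree change detector can flag points of the current frame that occupy voxels absent from the reference scene (i.e. people or machines entering the workspace), and an octree nearest-neighbour query against the robot model gives the minimum distance to those new points, which is then compared with the user-defined safety threshold. Again, a minimal sketch with assumed parameter values and a hypothetical function name:

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/octree/octree_pointcloud_changedetector.h>
#include <pcl/octree/octree_search.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Return the minimum distance from foreign-object points in `current`
// (points in voxels not present in `reference`) to the robot model.
float minForeignObjectDistance(const Cloud::Ptr& reference, const Cloud::Ptr& current,
                               const Cloud::Ptr& robot, float resolution) {
  pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> detector(resolution);
  detector.setInputCloud(reference);
  detector.addPointsFromInputCloud();
  detector.switchBuffers();                 // keep reference, start a new buffer
  detector.setInputCloud(current);
  detector.addPointsFromInputCloud();

  std::vector<int> newIdx;
  detector.getPointIndicesFromNewVoxels(newIdx);   // indices of "foreign" points

  pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> robotTree(resolution);
  robotTree.setInputCloud(robot);
  robotTree.addPointsFromInputCloud();

  float minSqr = std::numeric_limits<float>::max();
  std::vector<int> idx(1);
  std::vector<float> sqrDist(1);
  for (int i : newIdx)
    if (robotTree.nearestKSearch(current->points[i], 1, idx, sqrDist) > 0)
      minSqr = std::min(minSqr, sqrDist[0]);

  // Returns a very large value when no foreign object is present.
  return std::sqrt(minSqr);
}

The caller would treat `minForeignObjectDistance(...) < safetyThreshold` as a collision risk and trigger the preventive command to the robot.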
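Part three sends the collision-avoidance decision from the PC to the robot controller over TCP/IP. The thesis does not specify the controller's message format, so the snippet below only illustrates a generic TCP client; the IP address, port number, and the "STOP" command string are hypothetical placeholders, not the Fanuc controller's actual protocol.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Open a TCP connection to the controller, send one command string, and close.
bool sendRobotCommand(const std::string& host, int port, const std::string& cmd) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return false;

  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);
  inet_pton(AF_INET, host.c_str(), &addr.sin_addr);

  if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
    close(fd);
    return false;
  }
  bool ok = send(fd, cmd.c_str(), cmd.size(), 0) == static_cast<ssize_t>(cmd.size());
  close(fd);
  return ok;
}

// Example use when the minimum distance drops below the safety threshold
// (address, port, and command text are assumptions):
// sendRobotCommand("192.168.0.10", 5000, "STOP\n");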
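Part four publishes the robot's status to the cloud dashboard via MQTT. A minimal publish sketch using the Eclipse Mosquitto C client (callable from C++) is shown below; the broker address, topic name, and JSON payload layout are assumptions for illustration only.

#include <mosquitto.h>
#include <cstring>

int main() {
  mosquitto_lib_init();
  mosquitto* mosq = mosquitto_new("robot-status-publisher", true, nullptr);
  if (!mosq) return 1;

  // Broker host and port are placeholders for the actual cloud broker.
  if (mosquitto_connect(mosq, "broker.example.com", 1883, 60) != MOSQ_ERR_SUCCESS)
    return 1;

  // Example status payload; the real field layout is not given in the abstract.
  const char* payload = R"({"robot":"fanuc-01","state":"running","min_dist_cm":12.3})";
  mosquitto_publish(mosq, nullptr, "factory/robot/status",
                    static_cast<int>(std::strlen(payload)), payload,
                    /*qos=*/1, /*retain=*/false);

  mosquitto_disconnect(mosq);
  mosquitto_destroy(mosq);
  mosquitto_lib_cleanup();
  return 0;
}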
Chinese Abstract............i
Abstract............ii
Acknowledgments............iii
Table of Contents............iv
List of Tables............vii
List of Figures............viii
Nomenclature............ix
Chapter 1 Introduction............1
1.1 Research Background and Motivation............1
1.2 Research Objectives............2
1.3 Thesis Structure............2
1.4 Contributions............3
Chapter 2 Literature Review............4
2.1 Related Work on 3D Vision-Assisted Applications............4
2.2 Related Work on Robot Collision Avoidance............8
Chapter 3 Research Framework and Methods............14
3.1 Research Framework and Workflow............14
3.2 Hardware and System Architecture............15
3.2.1 Hardware for 3D Point Cloud Model Construction............15
3.2.2 Measurement Hardware............16
3.2.3 Experimental Machine Equipment............17
3.3 Depth Camera Imaging Principles and Point Cloud Pre-processing............18
3.3.1 Depth Camera: Time-of-Flight (ToF) Principle............18
3.3.2 Pass-Through Filter............21
3.3.3 Statistical Outlier Removal Filter............22
3.3.4 Voxel Grid Filter............23
3.4 3D Dynamic Tracking and Intelligent Collision Avoidance System............26
3.4.1 Euclidean Cluster Extraction............26
3.4.2 Octree Radius Search Algorithm............28
3.4.3 Octree Spatial Change Detection Algorithm............30
3.5 Robot Communication and Control............32
3.5.2 Communication between the Monitoring Interface and the Collision Avoidance System............32
3.5.3 Fanuc Robot Communication and Control System............34
3.6 Cloud Platform Implementation............37
3.6.1 Cloud System Architecture............37
3.6.2 MQTT Protocol............38
3.6.3 Cloud Web System............38
Chapter 4 Experimental Results............40
4.1 Robot Dynamic Tracking System Implementation............41
4.1.1 Environment Point Cloud Data System............41
4.1.2 Point Cloud Model Data Processing............42
4.1.3 Robot Point Cloud Model Construction............44
4.1.4 Robot Dynamic Tracking............45
4.2 Intelligent Collision Avoidance System Implementation............46
4.2.1 Foreign Object Detection in the Environment Point Cloud............46
4.2.2 Collision Distance Detection and Prevention Experiments............47
4.2.3 Intelligent Collision Avoidance System for the Robot............50
4.3 Intelligent Collision Avoidance System Experiments............52
4.3.1 Effect of Filters on System Performance............52
4.3.2 Effect of Octree Resolution on System Performance............54
4.3.3 Effect of Collision Detection Distance on System Accuracy............55
4.3.4 Effect of Robot Linear Velocity on System Accuracy............56
4.4 Cloud Platform System Implementation............57
4.4.1 MQTT Cloud Message Management System............57
4.4.2 Cloud Web Page Implementation............59
Chapter 5 Conclusions and Future Work............60
References............61
Extended Abstract............63



Electronic Full Text (publicly available online from 2025-08-26)