National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 李佾
Author (English): YI LEE
Title: 基於眼在手影像回授之機械臂系統設計與應用
Title (English): Design of eye-in-hand vision feedback for a robot manipulator system and its applications
Advisor: 施慶隆
Advisor (English): Ching-Long Shih
Oral defense committee: 李文猶, 陳雅淑
Oral defense committee (English): Wen-Yo Lee, Ya-Shu Chen
Oral defense date: 2016-06-22
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2017
Graduation academic year: 105 (ROC calendar)
Language: Chinese
Pages: 58
Keywords (Chinese): 色彩分割, 電腦視覺, 機械臂, 眼在手視覺系統, 影像回授控制
Keywords (English): Color Segmentation, Computer Vision, Robotic Arm, Eye-in-Hand Vision System, Image Feedback Control
Usage statistics:
  • Cited by: 1
  • Views: 184
  • Downloads: 9
  • Bookmarked: 0
This thesis uses an eye-in-hand vision system to identify and locate objects in a three-dimensional workspace, and applies image feedback control of a robot manipulator to perform visual alignment and object pick-and-place tasks. The developed eye-in-hand visual alignment system provides the following functions: (1) color segmentation of color images to detect an object's contour in 3D space together with the distance and orientation between the camera and the object; (2) integration of the eye-in-hand vision system with the manipulator's inverse kinematics to achieve image feedback control; and (3) computation of the object's image features, estimation of the object's pose in 3D space, and alignment with the target object. Finally, the robot manipulator, flat gripper, and eye-in-hand camera system are integrated, and computer-vision feedback is used to carry out 3D object identification, measurement, and pick-and-place tasks. Experimental results verify the correctness and effectiveness of the proposed robot and vision integrated system: the overall alignment error is less than 1.2 mm in position and less than 1.5 degrees in orientation.
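As a rough illustration of the image-feedback loop outlined in the abstract, the following Python/OpenCV sketch performs color segmentation in HSV space, extracts the target's image centroid, and applies a proportional correction toward the image center. It is a minimal sketch, not the thesis's actual implementation: the HSV thresholds, the gain, and the move_camera_xy robot interface are hypothetical placeholders.

import cv2
import numpy as np

# Assumed (hand-tuned) HSV bounds for the target color and an arbitrary servo gain.
HSV_LOW = np.array([20, 80, 80])
HSV_HIGH = np.array([35, 255, 255])
GAIN = 0.002

def segment_target(bgr):
    """Color-segment the target and return its image centroid (u, v), or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def visual_servo_step(bgr, move_camera_xy):
    """One proportional image-feedback step that nudges the camera toward the target."""
    h, w = bgr.shape[:2]
    centroid = segment_target(bgr)
    if centroid is None:
        return False
    err_u = centroid[0] - w / 2.0  # horizontal pixel error
    err_v = centroid[1] - h / 2.0  # vertical pixel error
    # Map the pixel error to a small camera-frame translation (hypothetical robot API).
    move_camera_xy(-GAIN * err_u, -GAIN * err_v)
    return True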
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures and Tables
Chapter 1 Introduction
1.1 Research motivation
1.2 Literature review
1.3 Thesis outline
1.4 System architecture
Chapter 2 Object Image Recognition
2.1 Color spaces
2.1.1 RGB color space
2.1.2 HSV color space
2.1.3 Color space
2.1.4 CIE Lab color space
2.2 Color segmentation
2.2.1 Color-space thresholding
2.2.2 Nearest-neighbor classification
2.3 Object shape recognition
Chapter 3 Eye-in-Hand System
3.1 Robot manipulator system
3.2 Eye-in-hand system
3.3 Camera imaging
3.3.1 Camera matrix
3.3.2 Camera calibration
3.4 Object coordinate computation in 3D space
3.4.1 Epipolar geometry with the eye-in-hand system
3.4.2 Object coordinate computation from known object feature points
Chapter 4 Visual Alignment
4.1 Image features
4.1.1 Image features for x- and y-axis translation
4.1.2 Image feature for z-axis translation
4.1.3 Image features for x- and y-axis rotation
4.1.4 Image feature for z-axis rotation
4.2 Image Jacobian matrix
4.3 Control system stability analysis
4.3.1 Target translation
4.3.2 Target rotation
4.4 System motion decision-making
Chapter 5 Experimental Results
5.1 Object image recognition experiments
5.2 Sphere center position estimation experiments
5.3 Visual alignment experiments
5.4 Monocular vision depth measurement experiments
5.5 Object pick-and-place experiments
5.6 Comparison with related work
Chapter 6 Conclusions and Suggestions
6.1 Conclusions
6.2 Suggestions
References
[1] Teerawat Tongloy and Simon X. Yang, “An Image-Based Visual Servo Control System Based on an Eye-in-Hand Monocular Camera for Autonomous Robotic Grasping,” Computational Intelligence in Robotics and Automation, pp. 132-136, 2016.
[2] Christopher G. Healey, “A Perceptual Colour Segmentation Algorithm,” University of British Columbia Vancouver, Technical Report, 1996.
[3] Lina Li, De Xu, Xingang Wang, “A Survey on Path Planning Algorithms in Robotic Fibre Placement,” IEEE Control and Decision Conference, pp. 4704-4709, 2015.
[4] S. Y. Chen, “Kalman Filter for Robot Vision: A Survey,” IEEE Transactions on Industrial Electronics, pp. 4409-4420, 2012.
[5] Corneliu Lazar and Adrian Burlacu, “Image-Based Visual Servoing for Manipulation via Predictive Control – A Survey of Some Results,” Memoirs of the Scientific Sections of the Romanian Academy, Technical Report, 2016.
[6] Gabriel J. Garcia, Juan A. Corrales, Jorge Pomares and Fernando Torres, “Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain,” Sensors (MDPI), pp. 9689-9733, 2009.
[7] Ling-Yi Xu, Zhi-Qiang Cao, Peng Zhao, Chao Zhou, “A New Monocular Vision Measurement Method to Estimate 3D Positions of Objects on Floor,” International Journal of Automation and Computing, pp. 159-168, 2017.
[8] Lee Elliot Weiss, “Dynamic Visual Servo Control of Robots : an Adaptive Image-Based Approach,” Carnegie-Mellon University The Robotics Institute, 1984.
[9] Biao Zhang, Emilio J. Gonzalez-Galvan, Jesse Batsche, Steven B. Skaar, Luis A. Raygozab and Ambrocio Loredo, “Precise and Robust Large-Shape Formation using Uncalibrated Vision for a Virtual Mold,” INTECH Computer vision, pp. 111-124, 2008.
[10] Steven B. Skaar, William H. Brockman and R. Hanson, “Camera-Space Manipulation,” The International Journal of Robotics Research, pp. 20-32, 1987.
[11] Hanqi Zhuang, “Simultaneous Calibration of a Robot and a Hand-Mounted Camera,” IEEE Transactions on Robotics and Automation, pp. 649-660, 1995.
[12] Billibon H. Yoshimi and Peter K. Allen, “Active, Uncalibrated Visual Servoing,” IEEE International Conference on Robotics and Automation, pp. 156-161, 1994.
[13] Biao Zhang, Jianjun Wang, Gregory Rossano and Carlos Martinez, “Vision-Guided Robotic Assembly Using Uncalibrated Vision,” IEEE International Conference on Mechatronics and Automation, pp. 1384-1389, 2001.
[14] Chih-Hung Wu, I-Sheng Lin, Ming-Liang Wei, and Tain-Yu Cheng, “Target Position Estimation by Genetic Expression Programming for Mobile Robots with Vision Sensors,” IEEE Transactions on Instrumentation and Measurement, pp. 3218-3230, 2013.
[15] Omar Tahri, Youcef Mezouar, François Chaumette and Peter Corke, “Generic Decoupled Image-Based Visual Servoing for Cameras Obeying the Unified Projection Model,” IEEE Transactions on Robotics, pp. 684-697, 2010.
[16] Pierluigi Cigliano, Vincenzo Lippiello, Fabio Ruggiero and Bruno Siciliano, “Robotic Ball Catching with an Eye-in-Hand Single-Camera System,” IEEE Transactions on Control Systems Technology, pp. 1657-1671, 2015.
[17] Shouren Huang, Yuji Yamakawa, Taku Senoo and Masatoshi Ishikawa, “Realizing Peg-and-Hole Alignment with One Eye-in-Hand High-Speed Camera,” International Conference on Advanced Intelligent Mechatronics, pp. 1127-1132, 2013.
[18] Joss Knight and Ian Reid, “Automated Alignment of Robotic Pan-Tilt Camera Units Using Vision,” International Journal of Computer Vision, pp. 219-237, 2006.
[19] Zhengke Qin, Peng Wang, Jia Sun, Jinyan Lu, and Hong Qiao, “Precise Robotic Assembly for Large-Scale Objects Based on Automatic Guidance and Alignment,” IEEE Transactions on Instrumentation and Measurement, pp. 1398-1411, 2016.
[20] Te Tang, Hsien-Chung Lin, Yu Zhao, Wenjie Chen and Masayoshi Tomizuka, “Autonomous Alignment of Peg and Hole by Force/Torque Measurement for Robotic Assembly,” IEEE International Conference on Automation Science and Engineering, pp. 162-167, 2016.
[21] Shubham Jain, Prashant Gupta and Vikash Kumar, “A Force-Controlled Portrait Drawing Robot,” IEEE International Conference on Industrial Technology, pp. 3160-3165, 2015.
[22] Zhengyou Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1330-1334, 2000.
[23] David A. Forsyth and Jean Ponce, Computer Vision: A Modern Approach, Pearson Education, 2003.
[24] J. Y. Bouguet, “Camera Calibration Toolbox for MATLAB,” see http://www.vision.caltech.edu/bouguetj, 2003.
[25] 施慶隆 (Ching-Long Shih) and 李文猶 (Wen-Yo Lee), 機電整合控制--多軸運動設計與應用 (Mechatronics Integrated Control: Multi-Axis Motion Design and Applications), 3rd ed., Chuan Hwa Book Co., Ltd., 2015.