Author: 江啟帆
Author (English): Chi-Fan Chiang
Title: 運用飛行及監視攝影機之同步定位與校準系統
Title (English): Simultaneous Localization and Calibration System Employing Flying and Surveillance Cameras
Advisor: 張文中
Advisor (English): Wen-Chung Chang
Committee members: 張文中, 陳詩豐, 熊甘霖, 姚立德
Committee members (English): Wen-Chung Chang
Oral defense date: 2016-07-27
Degree: Master's
University: 國立臺北科技大學 (National Taipei University of Technology)
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Academic field: Electrical and Computer Engineering
Document type: Academic thesis
Graduation academic year: 104 (AY 2015/16)
Language: Chinese
Keywords (Chinese): 擴展型卡曼濾波器、反深度、單眼視覺、同步定位與建圖、機械手臂、飛行攝影機、視覺伺服
Keywords (English): Extended Kalman Filtering, Manipulator, Visual Servo Control, Quadcopters, Inverse Depth, Monocular Vision
Abstract (translated from Chinese): This thesis proposes a system that employs a flying camera and a surveillance camera to calibrate, online, the transformation between a 3D camera and the base of a manipulator; the system can be applied to let the manipulator autonomously follow an object. Before the following task is executed, the 3D camera and the manipulator can be calibrated either by observing a specific pattern with the surveillance camera, or by using the flying camera with extended-Kalman-filter simultaneous localization and mapping (EKF-SLAM) to recursively estimate and update the positions of feature points in the world frame and thereby derive the transformation between the manipulator coordinate system and the camera coordinate system. Once calibration is complete, the 3D camera captures images of the target; image processing yields the 3D feature points and poses of the manipulator end-effector and the target object, and a visual servo controller computes control commands from an appropriately defined encoded error to drive the manipulator to follow the target autonomously. Experiments verify that the system calibrates correctly and accomplishes the autonomous object-following task.
Abstract (English): A simultaneous localization and calibration system is proposed to determine the transformation between a 3D camera and a manipulator online. With a flying camera and a surveillance camera, the system enables a manipulator to follow a moving object. For calibration, the transformation between the 3D camera and the manipulator can be determined either by observing specific patterns with the surveillance camera, or by recursively estimating and updating the positions of feature points in the world frame with the flying camera; the latter involves simultaneous localization and mapping using extended Kalman filtering. With the 3D camera as the vision sensor, a visual servo controller can then be designed with an appropriately encoded error to drive the manipulator to follow a moving object autonomously. The proposed approach has been successfully validated by experiments.
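The EKF-SLAM procedure summarized in the abstracts is, at its core, a recursive predict/update loop over a state containing the camera pose and feature-point positions. The following is only an illustrative sketch, not the thesis implementation: the motion model `F`, measurement function `h`, Jacobian `H`, and noise covariances `Q` and `R` are assumed inputs supplied by the caller, and all names are hypothetical.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state mean x and covariance P through a linearized motion model F."""
    x_pred = F @ x                    # predicted state (camera pose + feature points)
    P_pred = F @ P @ F.T + Q          # predicted covariance, inflated by process noise Q
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with a feature observation z and measurement model h, H."""
    y = z - h(x)                      # innovation: observed minus predicted measurement
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y                 # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # reduced uncertainty after the update
    return x_new, P_new
```

In a SLAM setting, `ekf_predict` would be driven by the flying camera's motion model and `ekf_update` by each matched image feature, so the camera pose and the map of feature points are refined jointly.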
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
Chapter 1  Introduction
  1.1  Motivation and Objectives
  1.2  Literature Review
  1.3  Contributions
  1.4  Thesis Organization
Chapter 2  System Overview
  2.1  System Architecture
  2.2  System Flow
Chapter 3  SURF Feature Extraction, Description, and Matching
  3.1  Feature Extraction
  3.2  Feature Description
  3.3  Feature Matching
Chapter 4  Simultaneous Localization and Mapping with the Flying Camera
  4.1  Flying-Camera Motion Model
  4.2  Measurement Model
  4.3  Adding New Features
  4.4  Determining Feature Convergence and Removing Similar Features
  4.5  Deleting Features
  4.6  EKF Simultaneous Localization and Feature-Point Update
Chapter 5  Coordinate-System Calibration between the Manipulator Base and the 3D Camera
  5.1  Calibrating the Transformation between the Surveillance Camera and the 3D Camera
  5.2  Calibrating the Transformation between the Surveillance Camera and the Manipulator Base
  5.3  Manipulator-Base-to-3D-Camera Calibration Based on the Surveillance Camera
  5.4  Establishing a Coordinate System from 3D Feature Points Based on EKF-SLAM
    5.4.1  Fitting a Plane Equation to Coplanar 3D Feature Points
    5.4.2  Establishing the Manipulator-Base Coordinate System
  5.5  Manipulator-Base-to-3D-Camera Calibration Based on the Flying Camera
Chapter 6  Object Tracking with Visual Servoing
  6.1  Establishing the Gripper End-Effector Frame and the Desired Object Frame
  6.2  Cartesian-Based Visual Servo Controller
Chapter 7  Experimental Results
  7.1  Experimental Equipment
  7.2  Experimental Results
    7.2.1  Object Tracking with the 3D-Camera-to-Manipulator-Base Transformation Obtained from the Surveillance Camera
    7.2.2  Object Tracking with the 3D-Camera-to-Manipulator-Base Transformation Obtained from the Flying Camera
Chapter 8  Conclusions
  8.1  Conclusions
  8.2  Future Work
References
Appendix
  A  Author Biography
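The Cartesian-based visual servo controller outlined in Chapter 6 is not detailed in this record, but position-based visual servoing of this kind typically encodes the error between the observed gripper pose and the desired pose on the target and converts it into a Cartesian velocity command. The sketch below is a generic hedged illustration of that idea only; the error encoding, the small-angle rotation approximation, and the gain are assumptions, not the thesis controller.

```python
import numpy as np

def pose_error(p_gripper, p_target, R_gripper, R_target):
    """Encoded error: translation gap plus a small-angle rotation gap (6-vector)."""
    e_trans = p_target - p_gripper
    # Small-angle approximation of the rotation error R_target @ R_gripper^T,
    # extracted from the skew-symmetric part of the error rotation matrix.
    R_err = R_target @ R_gripper.T
    e_rot = 0.5 * np.array([R_err[2, 1] - R_err[1, 2],
                            R_err[0, 2] - R_err[2, 0],
                            R_err[1, 0] - R_err[0, 1]])
    return np.concatenate([e_trans, e_rot])

def servo_command(error, gain=0.5):
    """Proportional law: Cartesian velocity command driving the encoded error to zero."""
    return gain * error
```

In a closed loop, the 3D camera would re-measure both poses at each cycle, so the command shrinks as the gripper converges on the moving target.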