National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: Chi-Shian Lin (林啟賢)
Title: Study on Map-Based Indoor Mobile Robot Vision Navigation (基於地圖之室內自走車視覺導航研究)
Advisor: Ming-Yang Cheng (鄭銘揚)
Degree: Master
Institution: National Cheng Kung University
Department: Department of Electrical Engineering
Discipline: Engineering
Field of study: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2009
Graduation academic year: 97 (2008–2009)
Language: Chinese
Pages: 86
Keywords (Chinese): 漸進式定位; 自走車; 地圖式視覺導航
Keywords (English): incremental localization; autonomous mobile robot; map-based vision navigation
Usage statistics:
  • Cited by: 1
  • Views: 423
  • Downloads: 64
  • Bookmarked: 0
Chinese abstract (translated):
This thesis aims to implement a fully autonomous mobile robot that performs map-based vision navigation along the corridors of a building, using a camera as the system's only sensor for gathering information about the environment. In general, a map-based navigation system involves three steps: map building, localization, and path planning. In this work, a topological map of the test environment is constructed manually, and the robot's route is set so that it continuously circles the corridors like a patrol vehicle. Because the navigation system is designed around images of the corridor ceiling, ceiling features such as distinctive lines, light fixtures, and corners are used for image-based heading estimation and localization, and a motion control strategy is designed so that the robot can travel along the corridor. In addition, since changes in illumination often cause the ceiling images to yield erroneous heading estimates, this thesis proposes an adaptive threshold to improve the system's adaptability and robustness to lighting variation. Experimental results show that the navigation system designed in this thesis indeed enables the robot to follow the planned route successfully.
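The adaptive-threshold idea above can be sketched as follows. This is an illustrative stand-in, not the update rule actually used in the thesis: the cutoff used to segment bright ceiling features is tied to the image's own brightness statistics (here, mean plus one standard deviation — an assumed rule), so a global illumination change shifts the threshold along with the image.

```python
import numpy as np

def adaptive_threshold(gray, k=1.0):
    """Binarize a grayscale ceiling image with a per-image threshold.

    Instead of a fixed cutoff, the threshold follows the image's own
    brightness statistics (mean + k * std), so a dimmer or brighter
    corridor shifts the cutoff automatically.  The mean-plus-k-sigma
    rule is an assumption for illustration, not the thesis's formula.
    """
    t = gray.mean() + k * gray.std()
    return (gray >= t).astype(np.uint8)

# A bright "ceiling light" patch on a dark background should yield the
# same segmentation even if the whole image is uniformly dimmed.
img = np.full((40, 40), 40.0)
img[10:20, 10:20] = 220.0           # simulated light fixture
dim = img * 0.5                     # global illumination drop

mask_a = adaptive_threshold(img)
mask_b = adaptive_threshold(dim)
assert np.array_equal(mask_a, mask_b)   # same mask under both lightings
```

A fixed threshold (say, 100) would lose the fixture entirely in the dimmed image (its pixels drop to 110 and 20), which is exactly the failure mode the adaptive rule avoids.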
English abstract:
This thesis focuses on the implementation of an autonomous mobile robot that moves along the corridors of a building using map-based vision navigation, where a camera is the only sensor in the navigation system for gathering environment information. Generally speaking, map-based navigation consists of three steps: map building, localization, and path planning. In our system, the map and the planned path are closely related to the ceiling images, and both are defined by the user. This thesis constructs a topological map in advance to represent the environment. The constructed map helps the autonomous mobile robot move along the corridor like a patrol robot. A webcam mounted on the mobile robot captures ceiling images, from which features such as distinctive lines, ceiling lights, and corners are extracted. Based on the extracted features and the pre-constructed topological map, the mobile robot can perform localization. A moving strategy is designed to guide the mobile robot along the corridor. Since the ceiling images are sensitive to fluctuations in illumination, variation in lighting may cause localization failures. To cope with this problem, an adaptive threshold technique is employed to adjust the parameter values used in the navigation system. Experimental results show that the autonomous mobile robot successfully moves along the desired path.
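The heading-estimation step described above (finding the dominant straight lines in a ceiling image via a Hough transform) can be illustrated with a minimal voting scheme. This is a simplified re-implementation of the general idea, not the thesis's code; the function name, accumulator size, and binning choices are all assumptions for illustration.

```python
import numpy as np

def dominant_angle(edge_points, n_theta=180):
    """Vote edge points into a Hough parameter space and return the
    orientation (in degrees) of the strongest straight line.

    For each candidate angle theta, every point contributes
    rho = x*cos(theta) + y*sin(theta); collinear points pile up at a
    single rho, so the sharpest rho-histogram peak marks the dominant
    line direction.  A robot's heading offset can then be read from
    how far this angle deviates from the expected corridor direction.
    """
    thetas = np.deg2rad(np.arange(n_theta))
    xs, ys = edge_points[:, 0], edge_points[:, 1]
    # rho for every (point, angle) pair, shape (n_points, n_theta)
    rhos = np.outer(xs, np.cos(thetas)) + np.outer(ys, np.sin(thetas))
    acc = np.zeros(n_theta)
    for j in range(n_theta):
        # the sharper the histogram peak, the more collinear the points
        hist, _ = np.histogram(rhos[:, j], bins=64)
        acc[j] = hist.max()
    return int(np.argmax(acc))

# Synthetic "ceiling line": edge points along the vertical line x = 5.
pts = np.array([[5.0, float(y)] for y in range(50)])
print(dominant_angle(pts))  # → 0 (line normal along the x-axis)
```

In the thesis's pipeline this step would run after preprocessing (histogram equalization, median filtering) and edge detection, with cluster and parallel/orthogonality checks filtering the Hough peaks, per the chapter outline below.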
Chinese Abstract I
English Abstract II
Acknowledgments III
Table of Contents IV
List of Tables VI
List of Figures VII
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Literature Review 3
1.3 Thesis Organization 6
Chapter 2 Scene Description and Analysis 7
Chapter 3 Heading Estimation System 12
3.1 Image Preprocessing 13
3.1.1 Histogram Equalization 13
3.1.2 Median Filter 16
3.1.3 Effect of Preprocessing on Edge Detection 17
3.2 Line Parameter Search 18
3.2.1 Edge Detection 19
3.2.2 Hough Transform 20
3.2.3 Cluster Detection 23
3.2.4 Parallelism and Orthogonality Check 25
3.3 Vehicle Heading Estimation 26
3.4 Adaptive Threshold Update 29
Chapter 4 Localization System 33
4.1 Lateral Position Estimation System 34
4.1.1 Geometric Relation between Guide Lines and Lateral Position 34
4.1.2 Guide-Line Parameter Search 35
4.2 Ceiling-Light Localization System 39
4.2.1 Ceiling-Light Recognition 40
4.2.2 Histogram Matching 43
4.2.3 Reliability Measurement and State Transition 46
4.2.4 Position Update 47
4.3 Corner-Feature Localization System 50
4.3.1 Corner-Feature Recognition 51
4.3.2 Reliability Measurement and State Transition 53
Chapter 5 Design and Implementation of the Mobile Robot Vision Navigation System 56
5.1 Motion Control Strategy 57
5.2 Hardware 60
5.3 Human-Machine Interface 62
5.3.1 HMI for Ceiling-Light Localization 63
5.3.2 HMI for Corner-Feature Localization 64
5.4 Experiments and Results 65
5.4.1 L-Shaped Turn Experiment 66
5.4.2 Straight-Line Motion Experiment 67
5.4.3 Lateral Offset Experiment 68
5.4.4 Navigation Experiment Using Ceiling-Light Localization 69
5.4.5 Navigation Experiment Using Corner-Feature Localization 73
Chapter 6 Conclusions and Suggestions 80
6.1 Conclusions 80
6.2 Suggestions for Future Work 81
References 82
About the Author 86