
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 李忠霖
Author (English): Jung-Lin Li
Title (Chinese): 基於局部化尺度不變特徵轉換比對方法的立體視覺導航技術及其Nao嵌入式系統實作
Title (English): Stereo Visual Navigation Based on Local Scale-Invariant Feature Transform and Its Nao Embedded System Implementation
Advisor: 何前程
Degree: Master's
Institution: National Yunlin University of Science and Technology
Department: Master's Program, Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2010
Academic year of graduation: 98 (2009–2010)
Language: Chinese
Number of pages: 57
Keywords (Chinese): stereo vision navigation; Scale-Invariant Feature Transform matching method
Keywords (English): stereo vision navigation; Scale-Invariant Feature Transform (SIFT)
Record statistics:
  • Cited by: 0
  • Views: 282
  • Rating: (none)
  • Downloads: 14
  • Bookmarked: 1
Stereo visual navigation is a fundamental capability of intelligent robots: it enables obstacle avoidance, path planning, map building, and localization within the environment. Conventional feature-extraction methods, however, cannot supply a large enough number of evenly distributed feature points in a single frame to support the subsequent stereo navigation stages, so extra ultrasonic or infrared sensors are often needed as aids.
In this thesis, we propose a localized Scale-Invariant Feature Transform (SIFT) matching method that greatly increases the number of feature points and spreads them evenly across the frame, so that accurate three-dimensional information can be obtained and a detailed stereo map can be built for the robot's visual navigation. Experimental results confirm that the proposed localized SIFT matching method detects more numerous and more reliable feature points. In addition, the thesis implements a simplified visual-navigation technique based on grayscale-histogram segmentation on the Nao embedded robot; the implementation results show that this simplified stereo visual-navigation technique is both simple and efficient.
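The localization idea in the abstract, detecting features independently in subregions so that keypoints cover the whole frame evenly, can be sketched as follows. This is a hypothetical illustration with a toy detector standing in for SIFT, not the thesis's actual implementation:

```python
import numpy as np

def detect_local_features(image, detect, grid=(4, 4)):
    """Run a feature detector independently on each grid cell so that
    keypoints come out evenly distributed over the whole frame.

    `detect` is any function mapping a 2-D array to a list of
    (row, col) keypoint coordinates within that array.
    """
    h, w = image.shape
    gy, gx = grid
    keypoints = []
    for i in range(gy):
        for j in range(gx):
            y0, y1 = i * h // gy, (i + 1) * h // gy
            x0, x1 = j * w // gx, (j + 1) * w // gx
            for (r, c) in detect(image[y0:y1, x0:x1]):
                keypoints.append((r + y0, c + x0))  # back to global coords
    return np.array(keypoints)

# Toy detector: the brightest pixel of the patch stands in for a real
# SIFT keypoint detector, which may find nothing in low-texture regions.
def brightest(patch):
    r, c = np.unravel_index(np.argmax(patch), patch.shape)
    return [(r, c)]

img = np.random.default_rng(0).random((64, 64))
kps = detect_local_features(img, brightest, grid=(4, 4))
print(len(kps))  # one keypoint per cell -> 16
```

A global detector concentrates keypoints wherever texture is strongest; forcing at least some detections per cell is one simple way to get the even spatial coverage the abstract calls for.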
Stereo vision navigation is a fundamental capability of an intelligent robot, allowing it to perform obstacle avoidance, path planning, map building, and environmental localization. However, conventional feature-detection methods cannot provide enough evenly distributed feature points in a single frame to accomplish stereo vision navigation, so the robot often requires extra ultrasonic or infrared sensors for assistance.
In this thesis, a Local Scale-Invariant Feature Transform (SIFT) method is proposed to obtain more numerous, evenly distributed feature points, so that accurate 3-D environment modeling and an elaborate stereo map can be accomplished. Experimental results verify that the proposed Local SIFT detects more, and more reliable, feature points. This thesis also implements a simplified stereo vision navigation scheme based on grayscale-histogram segmentation on the Nao embedded robot; the implementation results show that this simplified scheme is simple and efficient.
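The simplified navigation implemented on the Nao relies on grayscale-histogram segmentation. The thesis does not spell out its exact rule here, so the following is only a minimal sketch of one standard histogram-thresholding variant (iterative intersection of class means) that splits a frame into bright and dark regions, e.g. free floor versus obstacle:

```python
import numpy as np

def histogram_threshold(gray, bins=256):
    """Split a grayscale image (values in [0, 1]) into two classes using
    a histogram-derived threshold. Iterative intersection-of-means is a
    standard variant; the thesis's exact segmentation rule may differ."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    centers = (edges[:-1] + edges[1:]) / 2
    t = float(gray.mean())                      # initial guess
    for _ in range(100):
        wlo, whi = hist[centers <= t], hist[centers > t]
        # mean gray level of each class (fall back to t if a class is empty)
        m_lo = np.average(centers[centers <= t], weights=wlo) if wlo.sum() else t
        m_hi = np.average(centers[centers > t], weights=whi) if whi.sum() else t
        t_new = (m_lo + m_hi) / 2
        if abs(t_new - t) < 1e-6:
            break
        t = t_new
    return gray > t  # True = bright class

# Synthetic frame: a dark obstacle (0.2) on a bright floor (0.8).
frame = np.full((40, 40), 0.8)
frame[10:20, 10:20] = 0.2
mask = histogram_threshold(frame)
print(int(mask.sum()))  # 1600 - 100 = 1500 bright pixels
```

Because it needs only one histogram pass and a few scalar updates per frame, this kind of segmentation is cheap enough for the Nao's embedded CPU, which is consistent with the abstract's "simple and efficient" claim.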
Abstract (Chinese)
Abstract (English)
Acknowledgments
Chapter 1: Introduction
1.1 Research Background
1.2 Motivation and Objectives
1.3 Robot Visual Navigation Pipeline
1.4 Thesis Organization
Chapter 2: Related Work on Conventional Feature Extraction Methods
2.1 SUSAN Corner Detector
2.2 Harris Corner Detector
2.3 Speeded-Up Robust Features
2.4 Scale-Invariant Feature Transform
2.4.1 Scale-Space Extrema Detection
2.4.2 Keypoint Localization
2.4.3 Orientation Assignment
2.4.4 Keypoint Descriptor
Chapter 3: Local Scale-Invariant Feature Transform Matching Method
3.1 System Architecture
3.2 Perspective Projection
3.3 Local Scale-Invariant Feature Transform Method
3.4 Feature Point Matching
3.5 Stereo Vision Algorithm
Chapter 4: Experimental Analysis
4.1 Experimental Setup
4.2 Camera Calibration
4.3 Experimental Results
4.4 Program Optimization
Chapter 5: Nao Embedded System Implementation
5.1 Hardware Architecture of the Nao Embedded Platform
5.2 Software Architecture of the Nao Embedded Platform
5.3 Implementation on the Nao Embedded Robot Operating System
5.4 Implementation Results
Chapter 6: Conclusions and Future Work