Author: 黃文采
Author (English): Wen-Tsai Huang
Title: 利用影像及深度資訊之移動平台自主定位技術 (Self-Localization Techniques for Mobile Platforms Using Image and Depth Information)
Title (English): Vision Based Techniques for Mobile Robot Self-Localization
Advisor: 林惠勇
Advisor (English): Huei-Yung Lin
Committee Members: 張勤振、吳俊霖、林維暘
Committee Members (English): Chin-Chen Chang, Jiunn-Lin Wu, Wei-Yang Lin
Date of Oral Defense: July 26, 2011
Degree: Master
University: National Chung Cheng University
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2011
Graduation Academic Year: 99 (2010-2011)
Language: Chinese
Pages: 54
Keywords (Chinese): 標示物、全景相機、位移估測、垂直線偵測、彩色深度攝影機、自主定位、機器人視覺
Keywords (English): Marker, Omnidirectional camera, Motion estimation, Vertical line detection, Sparse bundle adjustment, RGB-D camera, Self-localization, Robot vision
Abstract:
In this thesis, we propose three techniques for robot self-localization, which allow a robot to know how it moves in an unknown environment. First, we place markers in the environment, recognize these specific markers in the images captured by the camera, and compute the relative positions of the detected markers to localize the robot. Second, we combine depth and color images: features extracted from the color image are mapped to the corresponding depth data, and a point cloud registration algorithm computes the displacement between the two point clouds captured at consecutive time instants; knowing the displacement between these two point sets gives the robot's own displacement. Third, we use an omnidirectional vision system, composed of a conventional camera and a catadioptric mirror. Exploiting its geometric properties, we search the captured images for vertical lines in the scene, find their intersections with the ground, project these points onto the real ground plane through the camera model, and compute the displacement of these intersection points between two different positions for self-localization. Finally, we compare the three techniques, discuss their advantages and disadvantages, and expect that combining the strengths of all three will yield a better result.
Abstract (English):
In this thesis, we propose three different techniques for robot self-localization that let a robot know how it moves in an unknown environment. First, we set up markers in the environment, recognize the markers in the images, and then calculate their positions relative to the robot for localization. Second, we use an omnidirectional camera for self-localization: by extracting the vertical lines in the images and calculating the transformation between the ground intersection points of these vertical lines, we can determine how the robot moves. Third, we combine the depth and color images, extract feature points in the color image, and compute the registration between two point clouds to recover the robot's motion. Finally, we compare and discuss the advantages and disadvantages of the three techniques, and expect that combining the advantages of all three will give a better result.
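
The point cloud registration step described in both abstracts can be made concrete with a short sketch. The following is a minimal illustration, not the thesis's actual implementation: it assumes that feature correspondences between two consecutive RGB-D frames have already been back-projected into matched 3D points, and it recovers the rigid motion (R, t) with the closed-form SVD solution that forms the core of ICP-style registration. The function name estimate_rigid_transform and the synthetic test data are illustrative only.

import numpy as np

def estimate_rigid_transform(src, dst):
    """Closed-form least-squares rigid motion: dst_i ~= R @ src_i + t.

    src, dst: (N, 3) arrays of matched 3D points from two frames.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean              # center both point sets
    dst_c = dst - dst_mean
    H = src_c.T @ dst_c                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                      # optimal rotation (Kabsch solution)
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean         # translation from the centroids
    return R, t

# Synthetic check: recover a known 10-degree yaw and a small translation.
rng = np.random.default_rng(0)
P = rng.random((100, 3))                          # "previous frame" points
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.3, 0.0, 0.1])
Q = P @ R_true.T + t_true                         # "current frame" points
R_est, t_est = estimate_rigid_transform(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))

In a full ICP loop (Section 4.2), this closed-form step alternates with re-computing point correspondences until the alignment converges; the recovered (R, t) between consecutive frames is the robot's frame-to-frame motion.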
Table of Contents
Abstract
List of Figures
List of Tables
Chinese-English Terminology
1 Introduction
1.1 Motivation
1.2 Related Work
1.3 Thesis Organization
2 Vision-Based Localization System Using Specific Marker Detection
2.1 System Architecture
2.1.1 Marker Patterns
2.2 Marker Detection
2.3 DSP Embedded System Hardware Implementation
3 Omnidirectional Vision Localization System
3.1 Camera Model
3.1.1 Projection Model of the Omnidirectional Imaging System
3.1.2 Calibration Procedure
3.2 Image Feature Extraction and Computation
3.2.1 Ground Information Extraction
3.2.2 Extraction of Vertical Lines in Space
3.2.3 Ground Point Matching
3.3 Motion Estimation and Analysis
3.3.1 Sparse Bundle Adjustment (SBA) Refinement
4 RGB-D Camera Localization System
4.1 System Architecture
4.2 Iterative Closest Point (ICP) Algorithm
4.3 Feature Extraction
5 Experimental Results
5.1 Vision-Based Localization System Using Specific Marker Detection
5.2 Omnidirectional Vision Localization System
5.3 RGB-D Camera Localization System
5.4 Comparison and Discussion of Results
6 Conclusion
References
[1] G. Bradski and A. Kaehler, Learning OpenCV. O'Reilly Media, Inc., 1st ed., 2008.
[2] C. Mei and P. Rives, "Single view point omnidirectional camera calibration from planar grids," in 2007 IEEE International Conference on Robotics and Automation, pp. 3945–3950, Apr. 2007.
[3] M. S. Grewal and A. P. Andrews, Kalman Filtering: Theory and Practice. Wiley-Interscience, 2nd ed., Jan. 2001.
[4] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Journal of Basic Engineering, vol. 82, Series D, pp. 35–45, 1960.
[5] R. Negenborn, "Robot localization and Kalman filters," 2003.
[6] Z. Zhang, "Iterative point matching for registration of free-form curves and surfaces," Int. J. Comput. Vision, vol. 13, pp. 119–152, Oct. 1994.
[7] J. Salvi, C. Matabosch, D. Fofi, and J. Forest, "A review of recent range image registration methods with accuracy evaluation," Image Vision Comput., vol. 25, pp. 578–596, May 2007.
[8] A. Rituerto, L. Puig, and J. J. Guerrero, "Visual SLAM with an omnidirectional camera," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 348–351, IEEE Computer Society, 2010.
[9] S. J. Julier and J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, 2004.
[10] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments," in RGB-D: Advanced Reasoning with Depth Cameras Workshop in conjunction with RSS, pp. 9–10, 2010.
[11] V. Castaneda, D. Mateus, and N. Navab, "SLAM combining ToF and high-resolution cameras," in Proceedings of the 2011 IEEE Workshop on Applications of Computer Vision (WACV '11), pp. 672–678, IEEE Computer Society, 2011.
[12] F. Fraundorfer, C. Wu, and M. Pollefeys, "Combining monocular and stereo cues for mobile robot localization using visual words," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 3927–3930, IEEE Computer Society, 2010.
[13] D. Chen and G. Zhang, "A new sub-pixel detector for X-corners in camera calibration targets," in International Conference in Central Europe on Computer Graphics and Visualization, pp. 97–100, 2005.
[14] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical applications," in Proceedings of the 6th European Conference on Computer Vision, Part II (ECCV '00), pp. 445–461, Springer-Verlag, 2000.
[15] J. Barreto and H. Araujo, "Issues on the geometry of central catadioptric image formation," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 2, pp. 422–427, 2001.
[16] M. Wang, Y. Chung, and H. Lin, "A self-localization technique for mobile robots using image-based ground plane detection," in Proceedings of the 2009 International Conference on Service and Interactive Robotics (SIRCon '09), 2009.
[17] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE Computer Society, 1999.
[18] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Comput. Vis. Image Underst., vol. 110, pp. 346–359, June 2008.
[19] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in 2007 IEEE International Conference on Robotics and Automation, pp. 3901–3907, Apr. 2007.
[20] H. Bay, B. Fasel, and L. Van Gool, "Interactive museum guide: Fast and robust recognition of museum objects," in Proceedings of the First International Workshop on Mobile Vision, May 2006.
[21] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, 1988.
[22] L. Juan and O. Gwon, "A comparison of SIFT, PCA-SIFT and SURF," International Journal of Image Processing (IJIP), vol. 3, no. 4, pp. 143–152, 2009.
[23] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, pp. 381–395, June 1981.
[24] A. Pronobis and B. Caputo, "COLD: COsy Localization Database," The International Journal of Robotics Research (IJRR), vol. 28, May 2009.