
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 陳瑞屏
Author (English): Ruei-Ping Chen
Title: 以即時影像辨識為基礎之輪型機器人間之距離估測之研究
Title (English): The Research of Distance Estimation for Moving Car-Like Robots Based on Real-Time Image Recognition
Advisor: 游文雄
Advisor (English): Wen-Shyong Yu
Degree: Master's
Institution: Tatung University
Department: Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis type: Academic thesis
Year of publication: 2010
Academic year of graduation: 98 (2009-2010)
Language: English
Number of pages: 112
Keywords (Chinese): 影像辨識、輪型機器人、距離估測
Keywords (English): Image Recognition; Car-Like Robot; Distance Estimation
Usage statistics:
  • Cited: 1
  • Views: 804
  • Rating:
  • Downloads: 239
  • Saved to bookshelves: 3
In this thesis, we propose an algorithm that analyzes images of two car-like robots so that one robot can follow the trajectory of the other. Two car-like robots are used: an X80 and an i90. Taking its own position as the coordinate origin, the i90 captures real-time images of the X80 with a monocular camera. Each image is binarized and then dilated and eroded to remove background noise, so that only the white-pixel blob of the X80 robot remains in the binary image; from this blob we aim to accurately estimate the actual coordinates of the X80 and achieve trajectory following between the two robots. First, according to the experimental data, the back and side images of the X80 in a real-time image differ slightly in area ratio, so we draw and number several dividing lines across the width of the X80's image; by comparing the difference in line numbers across two or three consecutive images, we can determine the X80's current motion state (e.g., moving forward, moving left or right, stationary) and whether the image shows the back or the side of the X80. Next, taking the inter-robot distance measured from back-view images as the baseline, the width of the X80's image is computed from the white-pixel blob area at different distances, and a curve relating the inter-robot distance to the white-pixel image width of the X80 is approximated by least squares. When the image is judged to be a side view, the width is first scaled by the side-to-back area ratio; the approximation curve then yields a preliminary distance estimate, which is corrected using the average over sixty images to obtain a more accurate corrected distance. On the other hand, since the i90's camera has a fixed viewing angle, once the viewing angle and the corrected inter-robot distance are known, the estimated x-coordinate of the X80 (perpendicular to the i90's direction of travel) can be computed by the triangle theorem. Because the initial position of the i90's camera differs each time it is restarted, the camera's initial angular position introduces an error into the estimated x-coordinate of the X80; therefore, each time the i90 starts to move, the x-coordinate corresponding to the camera's initial angle must be estimated and used as a correction term for every subsequent x-coordinate estimate. In this way, the corrected inter-robot distance and the x-coordinate of the X80 are obtained. Finally, the accuracy and effectiveness of the proposed method are verified by experiments.
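The least-squares step described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the calibration numbers, the inverse-proportional (pinhole-style) model of width versus distance, the function name `estimate_distance`, and the side-to-back ratio of 1.2 are all assumptions made for the example.

```python
import numpy as np

# Hypothetical calibration data: measured inter-robot distances (cm) and
# the corresponding white-pixel widths of the X80 blob. Illustrative only.
distances = np.array([60.0, 80.0, 100.0, 120.0, 140.0, 160.0])
widths = np.array([180.0, 135.0, 108.0, 90.0, 10800.0 / 140.0, 67.5])

# Under a pinhole-style model the blob width is roughly inversely
# proportional to distance, so fit distance as a linear function of
# 1/width by least squares.
coeffs = np.polyfit(1.0 / widths, distances, deg=1)

def estimate_distance(pixel_width, side_view=False, side_to_back_ratio=1.2):
    """Estimate the inter-robot distance from the blob width; for a side
    view, first rescale the width by the assumed side-to-back ratio."""
    if side_view:
        pixel_width /= side_to_back_ratio
    return np.polyval(coeffs, 1.0 / pixel_width)

print(estimate_distance(108.0))  # ≈ 100 cm for a back view
```

In the thesis the preliminary estimate is further corrected by averaging over sixty images; that smoothing step is omitted here for brevity.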
In this thesis, an algorithm is proposed for analyzing the images of two car-like robots so that one of them can track the other. An X80 robot and an i90 robot are used in the experiments. The image of the X80 is captured by a monocular camera on the i90 robot and is then binarized, dilated, and eroded to filter out environmental noise and obtain a clean binary image containing only the white-pixel region of the X80 robot. Because the back and side images of the X80 robot have slightly different area ratios at the same distance, a dividing-line method is proposed to determine whether the captured image shows the back or the side of the X80 robot. After experiments at several different inter-robot distances, an approximation curve relating distance to the image width of the X80 robot is obtained via least squares. On the other hand, when the estimated distance and the fixed viewing angle of the i90 camera are known, the x-coordinate of the X80 robot can be obtained by the triangle theorem. Finally, the accuracy and validity of the proposed algorithm are verified by experiments.
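The triangle-theorem step can be sketched as follows. The 60° horizontal field of view, the 320-pixel image width, and the linear angle-per-pixel approximation are illustrative assumptions, not values from the thesis; the `pan_offset_deg` parameter mirrors the per-run initial-angle correction described in the abstract.

```python
import math

def x_coordinate(distance_cm, blob_center_px, image_width_px=320,
                 horizontal_fov_deg=60.0, pan_offset_deg=0.0):
    """Estimate the X80's x-coordinate (perpendicular to the i90's
    heading) from the corrected distance and the blob's horizontal
    position in the image. All numeric defaults are assumptions."""
    # Angle subtended per pixel under a simple linear approximation.
    deg_per_px = horizontal_fov_deg / image_width_px
    # Signed offset of the blob centre from the image centre.
    offset_px = blob_center_px - image_width_px / 2.0
    # Subtract the camera's initial pan angle, estimated once per run.
    angle_deg = offset_px * deg_per_px - pan_offset_deg
    return distance_cm * math.tan(math.radians(angle_deg))

# A target centred in the image lies straight ahead of the i90.
print(x_coordinate(100.0, 160.0))  # → 0.0
```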
ACKNOWLEDGEMENTS i
ABSTRACT (IN ENGLISH) ii
ABSTRACT (IN CHINESE) iii
TABLE OF CONTENTS v
LIST OF FIGURES x
1 INTRODUCTION 1
2 HARDWARE OF X80 AND i90 ROBOTS 5
2.1 X80 Robot 5
2.2 i90 Robot 6
3 IMAGE COLOR PROCESSING 9
3.1 Color Description 9
3.2 Image Binarization 11
3.3 Filtering the Noisy Pixels 16
3.3.1 Dilation 16
3.3.2 Erosion 18
4 PREPROCESSING OF THE VISUAL RECOGNITION 20
4.1 Conditions for Useful Image 20
4.1.1 Situation 1 21
4.1.2 Situation 2 23
4.1.3 Situation 3 24
4.2 Measurement of the X-coordinate Value of the Image Center 26
4.3 The X80 Robot’s Dynamic Condition 27
4.3.1 Methods for Finding Dynamic Conditions 27
4.3.2 Going Straight or Moving Right 32
4.3.3 Static or Moving Left 36
4.3.4 Ratios of the Back to the Side Views of the X80 Robot 52
5 METHODOLOGY FOR DISTANCE AND WIDTH MEASUREMENTS
BETWEEN THE i90 AND THE X80 ROBOTS 56
5.1 Distance Estimation Measurement 56
5.2 Measurement of the Actual Width for the X80 Robot Image 81
5.2.1 Experiment 1 91
5.2.2 Experiment 2 96
6 CONCLUSIONS 108
REFERENCES 109