
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: Yu-Chi Chen (陳禹旗)
Title (Chinese): 使用3D視覺資訊偵測道路和障礙物應用於人工智慧策略之室外自動車導航
Title (English): Road and Obstacle Detection Using 3D Vision Information Applied to Outdoor Guidance of Autonomous Land Vehicle by Artificial Intelligent Policy
Advisor: Rong-Chin Lo (駱榮欽)
Committee: Ching-Yu Yang (楊靖宇), Yao-Jung Shiao (蕭耀榮), Jenn-Shing Wang (王振興)
Oral defense date: 2006-07-24
Degree: Master's
Institution: National Taipei University of Technology (國立臺北科技大學)
Department: Institute of Computer and Communication (電腦與通訊研究所)
Discipline: Engineering; Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2006
Graduation academic year: 94 (2005-2006)
Language: English
Pages: 89
Keywords (Chinese): 電腦視覺、自動車、攝影機校正、特徵點萃取、影像點對應、3D重建
Keywords (English): Autonomous Land Vehicle, Computer Vision, Camera Calibration, Feature Point Extraction, Stereo Correspondence, 3D Reconstruction
Statistics:
  • Cited by: 3
  • Views: 286
  • Downloads: 0
  • In bookshelves: 1
Abstract (Chinese, translated): In this thesis, we propose a system built on feature point extraction, stereo correspondence, and stereo computer vision to obtain the 3D information of the scene ahead. The system computes the 3D information of the current scene from images captured by two CCD cameras; from this 3D information, it infers whether any obstacle lies in front of the vehicle and estimates the obstacle's distance and bearing. Combined with an artificial-intelligence navigation policy, the autonomous land vehicle can then use the 3D information to understand the environment ahead for navigation and collision avoidance.
Before stereo computer vision can be used, camera calibration is required. We use eight specific 3D points and their projections onto the left and right camera images to obtain the calibration parameters of both cameras by the least-squares method; with these parameters and corresponding left and right image points, we can reconstruct the 3D structure of the scene. Because we rely on stereo vision for 3D information, stereo correspondence is essential: matched image points yield the 3D coordinates of scene points. Correspondence is an important and still-open problem, and its accuracy strongly affects the reconstructed 3D information, so we use the Harris corner detector to find distinctive points in the images and then combine geometric constraints with the fundamental matrix to select the best set of matches. For subgoal feature searching, we use a shape-matching method that extracts specific scene features, such as rail lines and cross lines on a wall, based on the length, angle, area, and spacing of objects; these features serve as a map for deciding whether the vehicle has reached a subgoal or the goal, and whether it should proceed to the next subgoal or stop.
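The Harris detector mentioned above scores each pixel with R = det(M) - k·trace(M)^2, where M accumulates products of image gradients over a window; corners give large positive R, edges give negative R, and flat regions give R near zero. A minimal pure-Python sketch on a synthetic image (the image, window size, and k = 0.04 are illustrative choices, not values from the thesis):

```python
# Harris corner response R = det(M) - k * trace(M)^2, where M sums
# products of the x/y image gradients over a small window.

def gradients(img):
    """Central-difference gradients; borders are left at zero."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if 0 < x < w - 1:
                Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            if 0 < y < h - 1:
                Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return Ix, Iy

def harris_response(img, x, y, win=1, k=0.04):
    Ix, Iy = gradients(img)
    sxx = sxy = syy = 0.0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
            sxx += gx * gx
            sxy += gx * gy
            syy += gy * gy
    det = sxx * syy - sxy * sxy
    return det - k * (sxx + syy) ** 2

# 6x6 image with a bright square in the lower-right quadrant.
img = [[1.0] * 6 for _ in range(6)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 9.0

corner = harris_response(img, 3, 3)   # at the square's corner
edge = harris_response(img, 1, 3)     # along a straight edge
flat = harris_response(img, 1, 1)     # uniform region
print(corner > edge and corner > flat)  # prints True
```

The response ordering (corner > flat > edge) is what lets a threshold on R keep only the distinctive points used as correspondence candidates.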
Once the 3D reconstruction gives us the spatial layout of the scene ahead, we know the positions of obstacles and the road surface in space, and we can plan an optimal navigation path using the heading angle between the vehicle and the subgoal or goal, so that the vehicle safely bypasses obstacles while advancing toward the goal along the best path.
Abstract (English): In this thesis, we develop a system that obtains a 3D reconstruction of the front scene using feature point extraction, stereo correspondence, and a binocular stereo vision system. The system uses two cameras to reconstruct the 3D structure of a scene. We use the 3D information of the scene to understand the environment and determine the positions and orientations of obstacles. Hence, the ALV can use the 3D information, together with an AI policy, to navigate and avoid obstacles in an outdoor environment.
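For a rectified stereo pair, the two-camera reconstruction described above reduces to depth from disparity, Z = fB/d. A minimal sketch (the focal length, baseline, and pixel coordinates below are illustrative values, not the thesis's actual calibration):

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# f: focal length in pixels, B: baseline in meters,
# d: disparity (x_left - x_right) in pixels. All values illustrative.

def depth_from_disparity(f_px, baseline_m, x_left, x_right):
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return f_px * baseline_m / d

def reconstruct_point(f_px, baseline_m, cx, cy, x_left, y, x_right):
    """Back-project a matched pixel pair into camera coordinates (X, Y, Z)."""
    Z = depth_from_disparity(f_px, baseline_m, x_left, x_right)
    X = (x_left - cx) * Z / f_px
    Y = (y - cy) * Z / f_px
    return X, Y, Z

if __name__ == "__main__":
    # 700 px focal length, 0.30 m baseline, principal point (320, 240).
    X, Y, Z = reconstruct_point(700.0, 0.30, 320.0, 240.0, 350.0, 240.0, 330.0)
    print(round(Z, 3))  # prints 10.5  (700 * 0.30 / 20)
```

Nearby obstacles produce large disparities, so depth precision is best exactly where collision avoidance needs it most.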
Before the binocular stereo vision system can be used, camera calibration is necessary. We employ the linear least-squares method to obtain the calibration parameters of the left and right cameras from eight known 3D points and their image projections in the two cameras. We then reconstruct 3D information using these calibration parameters and the image points of the two cameras. Because we rely on stereo vision to obtain the 3D information of the scene, the correspondence problem is the most important and difficult part of 3D reconstruction: the accuracy of the stereo correspondence greatly affects the reconstructed 3D information. We use the Harris corner detector to extract feature points from the images; these feature points are the candidates from which the fundamental matrix and geometric constraints select the best correspondences. For subgoal and goal searching, a shape-based string-match approach is employed. For navigation and path planning, we define features such as the length, angle, area, and spacing of objects as a map; following the map, we find the subgoal and the goal. Here the subgoal is a pair of rail lines, and the goal is the cross lines on the wall beside the road. When these are found, the ALV has arrived at the subgoal or goal position and either runs toward the goal or stops.
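The fundamental-matrix filtering step can be sketched as scoring each right-image candidate by its distance to the epipolar line of the left-image point. The F below is the idealized matrix of a purely horizontally translated rig (epipolar lines are image rows), an assumption chosen for illustration rather than a matrix estimated from the thesis's cameras:

```python
# Filter candidate matches with the epipolar constraint x2^T F x1 = 0:
# a true match must lie near the epipolar line l = F @ x1.
import math

def epipolar_distance(F, p1, p2):
    """Distance from p2 to the epipolar line of p1 (pixels, homogeneous)."""
    x1 = (p1[0], p1[1], 1.0)
    a = sum(F[0][k] * x1[k] for k in range(3))
    b = sum(F[1][k] * x1[k] for k in range(3))
    c = sum(F[2][k] * x1[k] for k in range(3))
    return abs(a * p2[0] + b * p2[1] + c) / math.hypot(a, b)

def best_match(F, p1, candidates, max_dist=1.0):
    """Pick the candidate closest to p1's epipolar line, if close enough."""
    scored = [(epipolar_distance(F, p1, c), c) for c in candidates]
    d, c = min(scored)
    return c if d <= max_dist else None

# Pure horizontal translation: F = [t]_x with t = (1, 0, 0), so the
# epipolar line of (x, y) in the left image is the row y in the right image.
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]
print(best_match(F, (120.0, 80.0), [(100.0, 80.3), (95.0, 95.0)]))
# prints (100.0, 80.3): the candidate 0.3 px from the epipolar row wins
```

In practice further geometric constraints (ordering, disparity limits) prune the survivors, as the thesis combines them with the fundamental matrix.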
After deriving the 3D structure of a scene, we can understand the environment and determine the positions and orientations of obstacles. The direction between the ALV and the subgoal or goal is obtained from an e-compass. We employ an AI-based navigation method to compute the angle the ALV has to turn, so that the ALV avoids obstacles safely and runs toward the subgoal or goal along an appropriate path.
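The abstract does not spell out the AI-based turn-angle computation; one illustrative sketch, under the assumption that the vehicle scores a set of candidate headings by goal alignment while rejecting headings that pass too close to a reconstructed obstacle:

```python
# Candidate-heading steering sketch: reject headings whose ray passes
# within `safe_radius` of an obstacle, then prefer the heading closest
# to the goal bearing. The 5-degree grid, 1 m clearance radius, and
# scoring rule are illustrative assumptions, not the thesis's policy.
import math

def clearance(heading, obstacles):
    """Smallest distance from the ray along `heading` to any obstacle."""
    best = float("inf")
    ux, uy = math.cos(heading), math.sin(heading)
    for ox, oy in obstacles:
        t = max(0.0, ox * ux + oy * uy)          # projection onto the ray
        best = min(best, math.hypot(ox - t * ux, oy - t * uy))
    return best

def choose_heading(goal, obstacles, safe_radius=1.0):
    goal_dir = math.atan2(goal[1], goal[0])
    candidates = [math.radians(a) for a in range(-90, 91, 5)]
    def score(h):
        if clearance(h, obstacles) < safe_radius:
            return -float("inf")                 # would pass too close
        return -abs(h - goal_dir)                # prefer goal alignment
    return max(candidates, key=score)

# Goal straight ahead at (10, 0); an obstacle at (5, 0) forces a turn.
h = choose_heading((10.0, 0.0), [(5.0, 0.0)])
print(round(math.degrees(h), 1))
# prints -15.0: the smallest turn whose ray clears the obstacle by 1 m
```

With no obstacles the same rule returns the goal bearing itself, so obstacle avoidance and goal seeking fall out of one scoring function.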
TABLE OF CONTENTS
摘 要 i
ABSTRACT iii
ACKNOWLEDGMENTS v
TABLE OF CONTENTS vi
LIST OF TABLES viii
LIST OF FIGURES ix
Chapter 1 INTRODUCTION 1
1.1 Research Motivation 1
1.2 Survey of Related Research 2
1.2.1 Camera Calibration Methods 2
1.2.2 Stereo Correspondence Methods 2
1.2.3 Autonomous Vehicle Guidance Methods 3
1.3 Overview of Proposed Approaches 3
1.4 Thesis Organization 4
Chapter 2 HARDWARE ARCHITECTURE 7
2.1 Hardware Architecture Overview 7
2.2 Description of Motor Control System 9
2.3 Navigation Environment 10
Chapter 3 CAMERA CALIBRATION AND 3D RECONSTRUCTION 12
3.1 Camera Model 12
3.1.1 Transformation of World Coordinates into Camera Coordinates 12
3.1.2 Projection of CCS into Image plane 15
3.1.3 Transformation of Image Plane into Image Buffer 16
3.2 A Linear Camera Calibration Method 17
3.3 3D Reconstruction Using Linear Square Method 19
3.4 Experimental Results 19
Chapter 4 REGION OF INTEREST DETECTION 24
4.1 HSI Color Space Conversion 24
4.2 A Proposed Approach to Searching Region of Interest 26
4.2.1 ROI Extracting using SI Information 27
4.2.2 Point Voting 29
4.2.3 8-Neighbors Block Voting 30
4.3 A Proposed Approach to Subgoal Searching 31
4.3.1 Preprocessing 32
4.3.2 Shape-based String Match Approach 33
4.4 Sensor-like Points Approach 34
4.4.1 Sector Segmentation 35
4.4.2 Boundary Blocks Extraction 36
4.5 Feature-based Block Correspondence Analysis 38
4.5.1 Square Error 39
4.5.2 Feature-based Block Matching 39
4.6 Experimental Results 40
Chapter 5 STEREO CORRESPONDENCE 45
5.1 Basic Concepts of Static Stereo 45
5.1.1 Epipolar Geometry 45
5.1.2 Constraints 46
5.1.4 Cover Problem 49
5.1.5 Problems in Avoiding Collision 50
5.2 Harris Corner Detector 51
5.3 The Fundamental Matrix 55
5.3.1 8-Point Algorithm 58
5.4 Experimental Results 59
Chapter 6 ALV NAVIGATION 64
6.1 Traditional A* Search 64
6.2 The Local Map Constructing 65
6.3 AI-based navigation method 67
6.4 Experimental Results 73
Chapter 7 CONCLUSION AND FURTHER RESEARCH 78
7.1 Conclusion 78
7.1.1 ROI searching 78
7.1.2 The Feature Points Extraction 78
7.1.3 Stereo Correspondence Analysis 79
7.1.4 3D reconstruction 79
7.2 Further Research 79
REFERENCES 81
APPENDIX 84
Appendix A. Proof of the Camera Calibration Formula 85
A.1. Transformation of Eq. (3.7) into Eq. (3.8) 85
A.2. Calculate the Camera Parameters 86
VITA 89
REFERENCES

[1] 鄭惟元, "A Study of Indoor Autonomous Land Vehicle Navigation Using Stereo Vision Based on Grey Theory," Master's thesis, Institute of Mechatronic Engineering, National Taipei University of Technology, 2001.
[2] 林俊佑, "A Study of Camera Parameter Calibration for Autonomous Land Vehicles Using Genetic Algorithms," Master's thesis, Institute of Mechatronic Engineering, National Taipei University of Technology, 2000.
[3] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, Nov. 2000, pp. 1330-1334.
[4] C. Sun, "Fast Stereo Matching Using Rectangular Subregioning and 3D Maximum-Surface Techniques," International Journal of Computer Vision, vol. 47, no. 1/2/3, May 2002.
[5] H. Hirschmüller, "Improvements in Real-Time Correlation-Based Stereo Vision," Proceedings of the IEEE Workshop on Stereo and Multi-Baseline Vision, Kauai, Hawaii, Dec. 2001, pp. 141-148.
[6] 劉明豐, "Outdoor Autonomous Land Vehicle Navigation Using Stereo Vision Based on Sensor-like Points," Master's thesis, Institute of Mechatronic Engineering, National Taipei University of Technology, 2002.
[7] 張煜青, "A Study of Outdoor Autonomous Land Vehicle Navigation Using Binocular Stereo Computer Vision with an Artificial Intelligence Policy," Master's thesis, Institute of Automation Technology, National Taipei University of Technology, 2003.
[8] Y. Fang, I. Masaki and B. Horn, "Depth-Based Target Segmentation for Intelligent Vehicles: Fusion of Radar and Binocular Stereo," IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 3, Sep. 2002.
[9] 謝銘倫, "Feature Point Extraction and Tracking in Indoor Scenes," Master's thesis, Institute of Computer and Information Science, National Chiao Tung University, 2002.
[10] 李昆霖, "Optimization of the Fundamental Matrix and Euclidean Reconstruction," Master's thesis, Institute of Computer and Information Science, National Chiao Tung University, 2001.
[11] F. M. Porikli and R. V. Kollarits, "Stereo Image Acquisition Display Specifications Accurate Depth Perception,"
[12] C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, 1988, pp. 147-151.
[13] R. Hartley, "In Defense of the 8-Point Algorithm," Proceedings of the International Conference on Computer Vision (ICCV), 1995, pp. 1064-1070.
[14] T. Kato, Y. Ninomiya and I. Masaki, "An Obstacle Detection Method by Fusion of Radar and Motion Stereo," IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 3, Sep. 2002.
[15] X. Lin and S. Chen, "Color Image Segmentation Using Modified HSI System for Road Following," Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento, California, Apr. 1991.
[16] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison Wesley, 1993.
[17] D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2003.
[18] M. Sonka, V. Hlavac and R. Boyle, Image Processing, Analysis, and Machine Vision, PWS, 1998.
[19] B. N. Nelson, "Automatic Vehicle Detection in Infrared Imagery Using a Fuzzy Inference-Based Classification System," IEEE Transactions on Fuzzy Systems, vol. 9, no. 1, Feb. 2001.
[20] M. Bertozzi, A. Broggi and A. Fascioli, "Stereo Inverse Perspective Mapping: Theory and Applications," Image and Vision Computing, vol. 16, 1998, pp. 585-590.
[21] A. Broggi, M. Bertozzi, A. Fascioli, C. Guarino Lo Bianco and A. Piazzi, "Visual Perception of Obstacles and Vehicles for Platooning," IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 3, Sep. 2000.
[22] J. Shi and C. Tomasi, "Good Features to Track," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '94), Seattle, June 1994.
[23] U. Franke and S. Heinrich, "Fast Obstacle Detection for Urban Traffic Situations," IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 3, Sep. 2002.
[24] C. Curio, J. Edelbrunner, T. Kalinke, C. Tzomakas and W. von Seelen, "Walking Pedestrian Recognition," IEEE Transactions on Intelligent Transportation Systems (Special Issue), vol. 1, Sep. 2000, pp. 155-163.
[25] T. Bucher, C. Curio, J. Edelbrunner, C. Igel, D. Kastrup, I. Leefken, G. Lorenz, A. Steinhage and W. von Seelen, "Image Processing and Behavior Planning for Intelligent Vehicles," IEEE Transactions on Industrial Electronics, vol. 50, no. 1, Feb. 2003.