National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Graduate Student: 黃贊宇
Graduate Student (English): Chan-Yu Huang
Thesis Title: 整合車道幾何與車流方向資訊之電腦視覺駕駛輔助系統
Thesis Title (English): Vision-Based Driver Assistance System using Integration Information from Lane Geometry and Traffic Direction
Advisor: 傅立成 (Li-Chen Fu)
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2007
Graduating Academic Year: 95 (2006-2007)
Language: English
Number of Pages: 67
Keywords (Chinese): 多車輛偵測, 多車道線偵測, 資訊整合平台, 曲線車道線偵測
Keywords (English): multiple vehicle detection, multiple lane detection, integration framework, curve lane detection
Statistics:
  • Cited by: 0
  • Views: 282
  • Rating:
  • Downloads: 0
  • Bookmarked: 1
This thesis proposes a method for detecting multiple lane boundaries and multiple vehicles in front of the host vehicle. Rather than letting lane detection and vehicle detection each search the image independently for lanes and vehicles, we integrate the information from the two detection results to obtain more accurate detections.
In lane boundary detection, the analysis of lane-marking features is often disturbed by the edges or colors of vehicles ahead, which leads to incorrect feature analysis. Likewise, in vehicle detection, background features that resemble vehicles interfere with the analysis of vehicle features and make the detection unstable.
Therefore, this thesis uses the distance between a hypothesized vehicle position and the lane center to filter out objects that cannot be vehicles, and uses the similarity between the moving directions of the vehicles and the direction of the lane boundaries to select the best lane detection result. To integrate this information, we apply an iterative optimization algorithm that fuses the data and obtains the best results, where the lane detection and vehicle detection methods provide approximate solutions supplying the information needed for integration. Finally, the experimental results compare the proposed method with conventional detection methods and verify its effectiveness.
This thesis presents an approach to detect multiple lanes and vehicles. Instead of assuming that the processes of lane and vehicle detection should be carried out independently, we integrate these two processes in a mutually supporting way to achieve more accurate results.
In lane boundary detection, the process of identifying possible features of a lane boundary is often affected by the edges and colors of vehicles on the road. Likewise, the results of vehicle detection can be non-robust when background features confuse the process of identifying possible vehicle features.
Thus, in this thesis, we use the distance between the center of a lane and the position of a hypothesized vehicle to filter out non-vehicle objects, and we use the similarity between the lane boundary direction and the moving directions of the hypothesized vehicles to obtain the optimal lane solution. By applying an iterative optimization algorithm, we obtain near-optimal solutions for both lane and vehicle detection. Finally, experimental results are provided to validate the effectiveness of the proposed approach.
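As a rough illustration of the integration idea described in the abstract, the sketch below shows one possible form of the iterative reweighting loop: vehicle hypotheses far from the current best lane center are down-weighted, lane hypotheses whose direction disagrees with the tracked vehicle motion receive less support, and the loop repeats until the confidences settle. This is a minimal sketch under assumed data structures; the class names (LaneHypothesis, VehicleHypothesis), the Gaussian weighting terms, and the parameters sigma_dist and sigma_dir are illustrative assumptions, not the formulation actually derived in Chapter 5 of the thesis.

```python
# Illustrative sketch only: not the thesis's implementation.
import math
from dataclasses import dataclass
from typing import List

@dataclass
class LaneHypothesis:           # hypothetical structure
    center_x: float             # lateral position of the lane center
    direction: float            # lane boundary direction (radians)
    confidence: float = 1.0

@dataclass
class VehicleHypothesis:        # hypothetical structure
    x: float                    # lateral position of the hypothesized vehicle
    heading: float              # moving direction from tracking (radians)
    confidence: float = 1.0

def integrate(lanes: List[LaneHypothesis],
              vehicles: List[VehicleHypothesis],
              iterations: int = 10,
              sigma_dist: float = 2.0,
              sigma_dir: float = 0.2):
    """Iteratively reweight lane and vehicle confidences with each other's cues."""
    for _ in range(iterations):
        best_lane = max(lanes, key=lambda l: l.confidence)

        # Vehicle cue: down-weight hypotheses far from the current best lane center.
        for v in vehicles:
            dist = abs(v.x - best_lane.center_x)
            v.confidence *= math.exp(-(dist ** 2) / (2 * sigma_dist ** 2))

        # Lane cue: up-weight lanes whose direction agrees with vehicle motion,
        # weighted by each vehicle's own confidence.
        for l in lanes:
            support = sum(
                v.confidence * math.exp(-((l.direction - v.heading) ** 2)
                                        / (2 * sigma_dir ** 2))
                for v in vehicles) or 1e-6
            l.confidence *= support

        # Normalize so confidences stay comparable across iterations.
        for group in (lanes, vehicles):
            total = sum(h.confidence for h in group) or 1e-6
            for h in group:
                h.confidence /= total

    # Return the most confident lane and the vehicles that kept enough support.
    kept = [v for v in vehicles if v.confidence > 1.0 / (2 * len(vehicles))]
    return max(lanes, key=lambda l: l.confidence), kept
```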
Acknowledgements i
Chinese Abstract ii
Abstract iii
Contents iv
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Related work 3
1.3 Objective 4
1.4 System Overview 5
1.5 Organization 7
Chapter 2 Preliminary 8
2.1 Camera Calibration 8
2.1.1 Camera Configuration 8
2.1.2 Transformation Formulation 10
2.1.3 Vanishing Point 11
2.2 Lane Detection Procedures 12
2.2.1 Feature Extraction 13
2.2.2 Search Methods 15
2.2.3 Lane Structure 16
2.3 Vehicle Detection Procedures 16
2.3.1 Hypothesis Generation 16
2.3.2 Hypothesis Verification 17
Chapter 3 Lane Hypothesis Generation 19
3.1 Overview 19
3.2 Formulation 20
3.2.1 Hybrid Lane Boundary Model 20
3.2.2 Configuration of a Lane Hypothesis 22
3.3 Line Marking Feature Extraction 23
3.3.1 Feature of Lane Marking 23
3.3.2 Line Segment Construction 24
3.4 Hypothesis Generation 26
3.4.1 Line Segment Pairing 26
3.4.2 Solution Graph Construction 27
3.4.3 Lane Boundary Model Fitting 30
3.4.4 Multiple Lane Extension 31
Chapter 4 Vehicle Hypothesis Generation 33
4.1 Overview 33
4.2 Vehicle Generation with Particle Filter 34
4.2.1 Initial Sampling 34
4.2.2 Propagation 36
4.2.3 Observation 37
4.3 Feature Cues of Vehicles 37
4.3.1 Bounding Box of a Sample 38
4.3.2 Underneath Cue 38
4.3.3 Vertical Edge Cue 39
4.3.4 Symmetry Cue 39
4.3.5 Taillight Cue 40
4.3.6 Cue Fusion 42
4.4 Hypothesis Generation 42
4.4.1 Sample Clustering for Mean-Shift 42
4.4.2 Hypothesis Tracking 43
Chapter 5 Integration framework 45
5.1 Overview 45
5.2 Integration Model 46
5.3 Confidence Initialization 47
5.3.1 Lane Confidence Initialization 48
5.3.2 Vehicle Confidence Initialization 49
5.4 Likelihood of Hypotheses 49
5.4.1 Vehicle Likelihood Estimation 49
5.4.2 Lane Likelihood Estimation 51
5.5 Hypotheses Integration 52
5.5.1 Reweighting of Confidence 53
5.5.2 Iterative Algorithm of Integration 53
5.5.3 Finding the Optimal Solution 54
Chapter 6 Experiments 56
6.1 Environment Description 56
6.2 Experiment Results 56
6.3 Performance Analysis 58
Chapter 7 Conclusion 61
Reference 63
[1]"Deaths and Injuries of traffic accidents by County and City," National Police Agency: Ministry of the Interior of R.O.C.
[2]Y.-M. Chan, J.-F. Tsai, C.-Y. Huang, L.-C. Fu, P.-Y. Hsiao, E.-L. Jian, Y.-H. Chen, and H.-P. Lin, "Lane Detection Using a Piecewise-Linear Model," in Chinese Automatic Control Society Automatic Control Conference, Taipei,Taiwan, 2006.
[3]Y. Wang, E. K. Teoh, and D. Shen, "Lane detection and tracking using B-Snake," Image and Vision Computing, vol. 22, pp. 269-280, 2004.
[4]S.-S. Huang, C.-J. Chen, P.-Y. Hsiao, and L.-C. Fu, "On-Board Vision System for lane Recognition and Front-Vehicle Detection to Enhance Driver''s Awareness," in IEEE International Conference on Robotics and Automation, New Orleans, U.S.A., 2004, pp. 2456-2461.
[5]M. Bertozzi and A. Broggi, "GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Transactions on Image Processing, vol. 7, January 1998.
[6]K.-Y. Chiu and S.-F. Lin, "Lane Detection using Color-Based Segmentation," in IEEE Intelligent Vehicles Symposium, 2005.
[7]Y. He, H. Wang, and B. Zhang, "Color-Based Road Detection in Urban Traffic Scenes," IEEE Transactions on Intelligent Transportation Systems, vol. 5, pp. 309-318, December 2004.
[8]R. Labayrade, J. Douret, J. Laneurit, and R. Chapuis, "A reliable and robust lane detection system based on the parallel use of three algorithms for driving safety assistance," Ieice Transactions on Information and Systems, vol. E89D, pp. 2092-2100, Jul 2006.
[9]W. Enkelmann, "Video-Based Driver Assistance--From Basic Functions to Applications," International Journal of Computer Vision, vol. 45, pp. 201-221, 2001.
[10]H. Y. Cheng, B. S. Jeng, P. T. Tseng, and K. C. Fan, "Lane Detection With Moving Vehicles in the Traffic Scenes," IEEE Transactions on Intelligent Transportation Systems, vol. 7, pp. 571-582, 2006.
[11]C. Hoffman, T. Dang, and C. Stiller, "Vehicle detection fusing 2D visual features," in IEEE Intelligent Vehicles Symposium, 2004, pp. 280-285.
[12]B. Margrit, H. Esin, and S. D. Larry, "Real-time multiple vehicle detection and tracking from a moving vehicle," Machine Vision and Applications, vol. V12, pp. 69-83, 2000.
[13]S. M. Smith and J. M. Brady, "ASSET-2: real-time motion segmentation and shape tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, pp. 814-820, 1995.
[14]B. Heisele and W. Ritter, "Obstacle detection based on color blob flow," in Proceedings of The Intelligent Vehicles Symposium 1995, pp. 282-286.
[15]J. M. Ferryman, A. D. Worrall, G. D. Sullivan, and K. D. Baker, "A generic deformable model for vehicle recognition," in Proceedings of the 1995 British conference on Machine vision (Vol. 1) Birmingham, United Kingdom: BMVA Press, 1995.
[16]A. Khammari, F. Nashashibi, Y. Abramson, and C. Laurgeau, "Vehicle detection combining gradient analysis and AdaBoost classification," in IEEE Conference on Intelligent Transportation Systems, Vienna, Austria, 2005.
[17]T. Kato, Y. Ninomiya, and I. Masaki, "Preceding Vehicle Recognition Based on Learning From Sample Images," IEEE Transactions on Intelligent Transportation Systems, vol. 3, pp. 252-260, 2002.
[18]H. Schneiderman and T. Kanade, "A Statistical Method for 3D Object Detection Applied to Faces and Cars," in Computer Vision and Pattern Recognition, 2000.
[19]J. Marie-Pierre Dubuisson, L. Sridhar, and K. J. Anil, "Vehicle Segmentation and Classification Using Deformable Templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, pp. 293-308, 1996.
[20]A. Bensrhair, A. Bertozzi, A. Broggi, A. Fascioli, S. Mousset, and G. Toulminet, "Stereo vision-based feature extraction for vehicle detection," in IEEE Intelligent Vehicle Symposium, 2002, pp. 465-470 vol.2.
[21]M. Suwa, Y. Wu, M. Kobayashi, M. Kimachi, and S. Ogata, "A stereo-based vehicle detection method under windy conditions," in IEEE Proceedings on Intelligent Vehicles Symposium, 2000, pp. 246-248.
[22]O. Ramström and H. Christensen, "A Method for Following Unmarked Roads," in IEEE Intelligent Vehicle Symposium, 2005.
[23]M. A. Sotelo, F. J. Rodriguez, and L. Magdalena, "VIRTUOUS: Vision-Based Road Transportation for Unmanned Operation on Urban-Like Scenarios," IEEE Transactions on Intelligent Transportation Systems, vol. 5, pp. 69-83, June 2004.
[24]J. D. Crisman and C. E. Thorpe, "UNSCARF: A Color Vision System for the Detection of Unstructured Roads," in IEEE International Conference on Robotics and Automation, 1991.
[25]P. Jeong and S. Nedevschi, "Efficient and Robust Classification Method Using Combined Feature Vector for Lane Detection," IEEE Transactions on Circuits and System for Video Technology, vol. 15, pp. 528-537, 2005.
[26]C. Rasmussen, "Road Shape Classification for Detecting and Negotiating Intersections," in IEEE Intelligent Vehicles Symposium, 2003.
[27]C. Rasmussen, "Combining Laser Range, Color, and Texture Cues for Autonomous Road Following," in International Conference on Robotics and Automation, 2002.
[28]S. Beucher and M. Bilodeau, "Road segmentation and obstacle detection by a fast watershed transformation," in IEEE Intelligent Vehicles Symposium 1994, pp. 296-301.
[29]A. Gern, R. Moebus, and U. Franke, "Vision-based lane recognition under adverse weather conditions using optical flow," in IEEE Intelligent Vehicle Symposium, 2002, pp. 652-657.
[30]A. Watanabe and M. Nishida, "Lane detection for a Steering Assistance System," in IEEE Intelligent Vehicles Symposium, 2005.
[31]D. J. Kang, J. W. Choi, and I. S. Kweon, "Finding and Tracking Road Lanes using "Line-Snakes"," in IEEE Intelligent Vehicles Symposium, 1996.
[32]C. R. Jung and C. R. Kelber, "Lane following and lane departure using a linear-parabolic model," Image and Vision Computing, vol. 23, pp. 1192-1202, Nov 2005.
[33]Y. Wang, D. G. Shen, and E. K. Teoh, "Lane detection using spline model," Pattern Recognition Letters, vol. 21, pp. 677-689, Jul 2000.
[34]S. Zehang, G. Bebis, and R. Miller, "On-road vehicle detection: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 694-711, 2006.
[35]M. Betke, E. Haritaoglu, and L. S. Davis, "Real-time multiple vehicle detection and tracking from a moving vehicle," Machine Vision and Applications, vol. 12, pp. 69-83, 2000.
[36]I. Michael and B. Andrew, "CONDENSATION—Conditional Density Propagation for Visual Tracking," International Journal of Computer Vision, vol. 29, pp. 5-28, 1998.
[37]C. C. Wang, S. S. Huang, and L. C. Fu, "Driver assistance system for lane detection and vehicle recognition with night vision," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, 2005, pp. 3530-3535.
[38]D. Comaniciu, V. Ramesh, and P. Meer, "Real-time tracking of non-rigid objects using mean shift," in IEEE Conference on Computer Vision and Pattern Recognition, 2000, pp. 142-149 vol.2.
[39]S. Theodoridis and K. Koutroumbas, "Sequential Clustering Algorithm," in Pattern Recognition, second ed, 2003, p. 433.
[40]Y. Barshalom and E. Tse, "Tracking in a Cluttered Environment with Probabilistic Data Association," Automatica, vol. 11, pp. 451-460, 1975.