
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 呂國豪
Author (English): Kuo-Hao Lu
Thesis title: 交通號誌辨識
Thesis title (English): Traffic Light Recognition
Advisors: 陳淑媛、王照明
Advisors (English): Shu-Yuan Chen, Chao-Ming Wang
Degree: Master's
Institution: 元智大學 (Yuan Ze University)
Department: 資訊工程學系 (Computer Science and Engineering)
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Year of publication: 2005
Graduation academic year: 93 (2004-2005)
Language: English
Number of pages: 60
Keywords (Chinese): 智慧型運輸系統、交通號誌偵測、交通號誌擷取、交通號誌辨識、顏色辨識
Keywords (English): Intelligent transportation system; traffic light detection; traffic light extraction; traffic light recognition; color identification
Usage counts:
  • Cited by: 5
  • Views: 1601
  • Rating: (none)
  • Downloads: 233
  • Saved to bibliography lists: 1
Abstract (Chinese, translated):
With the continual advance of modern vehicle technology, it is no longer an unreachable dream to strengthen sensor-based perception of the surrounding environment, provide drivers with sufficient information, and thereby build a fast, safe, and convenient driving environment.
This thesis proposes an automatic traffic light recognition system that supplies enough information for drivers to judge the road environment correctly, and thereby supports the construction of intelligent transportation systems. The proposed method can be used with either fixed or moving cameras, which makes it convenient and practical.
The method consists of three stages: traffic light detection, extraction, and classification, designed respectively around color information, region information, and geometric properties. In the detection stage, the image is converted from the RGB color space to HSI to locate regions with the specific colors of traffic lights, and morphological operations then repair defects in those regions and remove noise. In the extraction stage, region labeling locates candidate light positions, and edge detection extracts the region borders, so that the subsequent classification stage can use the geometric shape of the lights to distinguish circular lamps from arrow lamps.
Finally, extensive experiments confirm that the proposed method is effective and feasible.
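The detection stage described above (RGB-to-HSI conversion, color thresholding, then morphological clean-up) can be sketched roughly as follows. This is a minimal illustration assuming OpenCV and NumPy; the hue, saturation, and intensity thresholds and the 5x5 structuring element are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch of the colour-based detection stage (HSI thresholding + morphology).
# Thresholds and kernel size are assumptions for illustration, not thesis values.
import cv2
import numpy as np

def rgb_to_hsi(bgr):
    """Convert a BGR image (OpenCV channel order) to H (degrees), S, I channels."""
    b, g, r = [c.astype(np.float64) / 255.0 for c in cv2.split(bgr)]
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-6)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h[b > g] = 360.0 - h[b > g]   # hue is reflected when blue exceeds green
    return h, s, i

def detect_light_regions(bgr, hue_lo, hue_hi, s_min=0.4, i_min=0.3):
    """Binary mask of pixels whose hue lies in [hue_lo, hue_hi] and that are
    saturated and bright enough to be a lit signal lamp (thresholds assumed)."""
    h, s, i = rgb_to_hsi(bgr)
    mask = ((h >= hue_lo) & (h <= hue_hi) & (s >= s_min) & (i >= i_min))
    mask = mask.astype(np.uint8) * 255
    # Closing fills small holes inside a lamp region; opening removes isolated noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask

# Example (hue range for a green lamp is a rough assumption):
# frame = cv2.imread("intersection.jpg")
# green_mask = detect_light_regions(frame, hue_lo=90, hue_hi=180)
```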
Abstract (English):
Advanced technology improves the capabilities of modern vehicles. Innovations in sensor-based systems help a vehicle survey its surroundings and display relevant information to the driver, so that a safe, convenient, and efficient driving environment can be achieved.
This thesis proposes an automatic traffic light recognition system that gives drivers sufficient information to make correct decisions, which in turn facilitates the construction of intelligent transportation systems (ITS). The proposed method can be applied not only to fixed cameras but also to moving cameras.
Our method consists of three phases: traffic light detection, extraction, and classification, based respectively on color information, region information, and geometric and appearance constraints. In the detection phase, the RGB color space is first converted into the HSI color space to detect regions with the specific colors of traffic lights; morphological operations are then employed to fill holes and remove noise. In the extraction phase, region labeling detects candidate traffic light regions, and border detection then extracts each region's boundary. In the classification phase, geometric and appearance constraints derived from traffic light shape and color are used for classification; both circular and arrow-shaped traffic lights are handled.
Various experiments have been conducted to demonstrate the effectiveness and practicality of the proposed method.
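The extraction and classification phases can likewise be sketched. In this illustration, connected-component labeling plays the role of the region-labeling step, the component contour serves as the detected border, and a simple circularity measure (4*pi*area / perimeter^2) stands in for the geometric and appearance constraints; the 0.8 cut-off and the minimum-area filter are illustrative assumptions, and the actual constraints used in the thesis may differ. OpenCV 4 and NumPy are assumed.

```python
# Minimal sketch of the extraction (region labeling + border detection) and
# classification (shape constraint) stages; thresholds are illustrative guesses.
import cv2
import numpy as np

def classify_candidate_regions(mask, min_area=30):
    """Label candidate regions in a binary mask and tag each as 'circle' or 'arrow'."""
    results = []
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    for lbl in range(1, num_labels):                 # label 0 is the background
        area = stats[lbl, cv2.CC_STAT_AREA]
        if area < min_area:                          # discard tiny noise blobs
            continue
        region = (labels == lbl).astype(np.uint8) * 255
        contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        border = max(contours, key=cv2.contourArea)  # the extracted region border
        perimeter = cv2.arcLength(border, closed=True)
        circularity = 4.0 * np.pi * area / (perimeter ** 2 + 1e-6)
        shape = "circle" if circularity > 0.8 else "arrow"
        x, y, w, h = cv2.boundingRect(border)
        results.append({"bbox": (x, y, w, h), "shape": shape, "circularity": circularity})
    return results

# Example, chained after the detection sketch above:
# for r in classify_candidate_regions(green_mask):
#     print(r["shape"], r["bbox"])
```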
Chapter 1 Introduction
1.1 Motivation
1.2 Problem analysis
1.3 Survey of related studies
1.4 Organization of the thesis
Chapter 2 System Overview
2.1 System flowchart
2.2 System structure
2.3 Image capture
Chapter 3 Traffic Light Detection Using Color Information
3.1 HSI color space
3.2 Morphology
Chapter 4 Traffic Light Extraction Using Region Information
4.1 Region labeling
4.2 Border detection
Chapter 5 Traffic Light Classification Using Geometric and Appearance Constraints
5.1 Circle characteristics
5.1.1 Shape characteristics of circle
5.1.2 Color characteristics of circle
5.2 Arrow characteristics
5.3 Algorithm of traffic light classification
Chapter 6 Environment Adaptability
6.1 Variation in numbers of lights
6.2 Variation in types of lights
6.3 Variation in time duration
Chapter 7 Experimental Results
7.1 Environment and equipment
7.2 Test images
7.3 Results for circular traffic lights
7.4 Results for arrow traffic lights
7.5 Results of adaptation to environment variations
7.6 Performance evaluation
Chapter 8 Conclusions and Future Work
8.1 Conclusions
8.2 Future work
References