Author: 徐震濤 (Chen-Tao Hsu)
Title: Light- and distance-adaptive vehicle detection in blind-spot areas (可適應亮度與距離變化的盲點區域車輛偵測技術)
Advisor: 曾定章 (Din-Chang Tseng)
Degree: Doctoral
Institution: National Central University
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical engineering and computer science
Thesis type: Academic thesis
Publication year: 2016
Graduation academic year: 104
Language: English
Pages: 122
Keywords (Chinese): 先進駕駛輔助系統、盲點偵測、光流、車底陰影、亮度適應性、距離適應性
Keywords (English): advanced driver assistance system; blind spot detection; optical flow; underneath shadow; light adaptive; distance adaptive
Usage statistics:
  • Cited by: 0
  • Views: 179
  • Downloads: 25

Before changing lanes, a driver must glance at the rearview and outside mirrors and turn his or her head to scan for approaching vehicles in the adjacent lanes. The field of view covered by these actions is limited, however, leaving an invisible blind-spot area. To avoid the traffic accidents that can occur during lane changes, we propose a lane-change assistance system that alerts the driver to approaching vehicles. In this system, two cameras mounted under the outside mirrors of the host vehicle capture rear-side-view images for detecting approaching side vehicles. Two main problems affect the results of this visual detection task: varying light conditions and the perspective-projection effect. In this study, we present two adaptive methods to overcome these problems. Drivers face a variety of weather and environmental conditions, which produce varying light conditions in the captured images and make vehicle detection more difficult. The proposed method uses a 2-D intensity/gradient histogram to adaptively distinguish edge points, shadows, lane markings, and other scene components under varying light conditions. On the other hand, when a side vehicle approaches the host vehicle at a constant speed, its size and apparent speed in the image increase as the distance shrinks, owing to the perspective-projection effect. We therefore propose a distance-adaptive method that compensates the horizontal optical-flow vectors in the image; after compensation, the horizontal optical-flow vectors are invariant to distance. Beyond these light- and distance-variation problems, blind-spot vehicle detection must also cope with false detections on lane markings and tree shadows on the road, and with missed detections of side vehicles travelling at a speed similar to the host vehicle's. Both static and motion features are therefore adopted and carefully combined to detect side vehicles in this study.
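The distance-adaptive idea described above can be illustrated with a minimal pinhole-camera sketch. This is not the author's exact algorithm; the function names, the focal length, and the reference depth below are all illustrative assumptions. Under a pinhole model, a point at depth Z moving laterally at speed v produces a horizontal image flow proportional to v/Z, so rescaling each measured flow by Z relative to a fixed reference depth removes the dependence on distance:

```python
# Hypothetical sketch of distance-adaptive optical-flow compensation
# (assumed pinhole model; not the thesis's exact formulation).

def projected_flow(v_lateral, depth, focal=800.0):
    """Horizontal flow (pixels/frame) of a point at `depth` metres
    moving laterally at `v_lateral` metres/frame."""
    return focal * v_lateral / depth

def compensate(flow, depth, ref_depth=10.0):
    """Rescale a measured flow to what it would be at the reference depth."""
    return flow * depth / ref_depth

# The same lateral speed seen at 5 m and at 20 m yields different raw
# flows, but identical compensated flows:
raw_near = projected_flow(0.5, 5.0)    # large flow when the vehicle is close
raw_far = projected_flow(0.5, 20.0)    # small flow when the vehicle is far
assert abs(compensate(raw_near, 5.0) - compensate(raw_far, 20.0)) < 1e-9
```

After this rescaling, a single speed threshold can be applied to side vehicles regardless of how far away they are, which is the invariance property the abstract refers to.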
The proposed system consists of four stages: (i) estimation of light-adaptive threshold values with a 2-D intensity/gradient histogram, (ii) multiresolution optical-flow estimation with distance-adaptive compensation, (iii) static-feature extraction, and (iv) a detection decision based on the static and motion features. In the experiments, 6842 images from 14 side-vehicle videos, captured under six kinds of light conditions, were used to evaluate the performance of the proposed system. To assess the light-adaptive function, we first assigned 27 combinations of fixed values to the three kinds of thresholds (underneath shadow, lane marking, and edge points) and applied each combination to five detection strategies: static only (S), motion only (M), static and motion (S&M), static or motion (SorM), and a combination of static and motion features without the light-adaptive method (SM). Comparing the results of the five strategies shows that a light-adaptive method is needed to adjust the three thresholds dynamically. With the light-adaptive method, the strategy combining static and motion features achieves a 91.84% detection rate, a 7.12% false-alarm rate, and a 1.04% missing rate; its detection rate is 16.68% higher, and its missing rate 19.82% lower, than without the light-adaptive method. For the distance-adaptive function, we first prove that the horizontal optical-flow vectors are invariant to distance after compensation, and then develop a distance-adaptive method to detect side vehicles; it achieves a 93.88% detection rate, a 5.36% false-alarm rate, and a 0.76% missing rate.
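The three reported rates sum to 100%, which suggests every evaluated frame is counted as exactly one of correct detection, false alarm, or miss. The sketch below makes that relationship explicit; the counting protocol and the raw counts are assumptions chosen only so that the percentages over 6842 frames match the figures reported above, not values taken from the thesis:

```python
# Hypothetical per-frame rate computation (counting protocol assumed;
# counts are illustrative, chosen to reproduce the reported percentages).

def frame_rates(detected, false_alarms, missed):
    """Each frame falls into exactly one category, so the rates sum to 100%."""
    total = detected + false_alarms + missed
    return (100.0 * detected / total,       # detection rate (%)
            100.0 * false_alarms / total,   # false-alarm rate (%)
            100.0 * missed / total)         # missing rate (%)

d, f, m = frame_rates(6284, 487, 71)   # 6842 frames in total
assert round(d, 2) == 91.84 and round(f, 2) == 7.12 and round(m, 2) == 1.04
```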

Contents
Abstract i
Chinese Abstract iii
Acknowledgments v
List of Tables xv
Chapter 1 Introduction 1
1.1. Motivation 1
1.2. Overview of this study 2
1.3. Organization of this dissertation 10
Chapter 2 Related Works 11
2.1. Stereo-vision detection methods 11
2.2. Static-feature detection methods 12
2.3. Motion-feature detection methods 16
Chapter 3 Pre-adjustments 20
3.1. The setting of detection area 20
3.2. Distance compensation factors 22
3.3. Intensity-balance function 24
Chapter 4 The Light-adaptive Method 26
4.1. The light-adaptive principle 26
4.2. The 2-D histogram generation 28
4.3. 2-D histogram analysis 31
Chapter 5 The Distance-adaptive Method 36
5.1. The derivation of distance-adaptive algorithm 36
5.2. The error analysis of distance-adaptive algorithm 39
5.2.1. The error analysis of relative velocity between the host vehicle and a side vehicle along moving tracking in real world 39
5.2.2. The error analysis of the distance between the host vehicle and a side vehicle in real world 41
5.3. The procedure of optical-flow compensation 44
Chapter 6 Feature Extraction and Vehicle Detection Strategy 45
6.1. Static feature extraction 45
6.2. Motion feature extraction 47
6.3. The robust decision for vehicle detection 50
6.4. The light-adaptive vehicle detection strategy 51
6.5. The distance- and light-adaptive vehicle detection strategy 53
Chapter 7 Experiments 57
7.1. Experiments environment and images 57
7.2. Evaluation criteria 58
7.3. Comparison among different detection strategies 60
7.4. Selection of adaptive parameters in the proposed methods 70
7.5. Detection results of the light-adaptive vehicle detection strategy 73
7.6. The detection results of distance- and light-adaptive vehicle detection strategy 81
7.7. Average execution time of each detection strategy 87
Chapter 8 Conclusions 88
References 90

