
National Digital Library of Theses and Dissertations in Taiwan

Detailed Record
Student: 周達華 (Tat-wa Chao)
Thesis title: 廣域全周俯瞰監視與影像式倒車導引 (Wide-scoped Top-view Monitoring and Image-based Parking Guiding)
Advisor: 曾定章 (Din-chang Tseng)
Degree: Master's
University: National Central University
Department: Graduate Institute of Biomedical Engineering
Discipline: Engineering
Field: Biomedical Engineering
Thesis type: Academic thesis
Year of publication: 2010
Graduation academic year: 98
Language: English
Pages: 123
Keywords (Chinese): feature matching; camera calibration; wide-angle photography; parking assistance; reversing guidance; wide-scoped surrounding monitoring
Keywords (English): parking assistance; parking guiding; wide-scoped top-view monitoring; feature matching; camera calibration
Statistics: cited: 1 · views: 467 · downloads: 0 · bookmarked: 1
To realize the concept of blind-spot-free vision around the vehicle, car manufacturers mount wide-angle cameras on the front, rear, and both sides of a sedan; the four captured images are transformed and stitched into a single top-view image of the vehicle's surroundings for the driver, providing safe parking assistance. The visual assistance offered by existing surround-view parking assistance systems, however, still has room for improvement. First, regarding the black seam lines between the four images in current systems: in the overlapping regions, appropriate color blending and brightness uniformization can make the stitched image smooth and natural as a whole. Second, regarding the monitored range: current surround-view parking assistance systems cover only the ground from about 1.5 to 3 meters outside the vehicle, whereas the raw camera images also capture objects above the ground plane, such as oncoming vehicles, pedestrians, and obstacles farther away. After the bird's-eye transformation, these above-ground objects are severely stretched outward from the image center by the transform function, producing distorted images that current bird's-eye monitoring systems cannot use.
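The bird's-eye (top-view) transformation underlying such systems (listed as Section 3.4.2, "Top-view transformation with homography," in the contents) maps ground-plane points through a 3x3 homography. The sketch below is a minimal illustration assuming four ground-plane point correspondences are already known; the function names are illustrative, not from the thesis:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H (dst ~ H @ src) from four point
    correspondences via the direct linear transform (DLT): stack two
    linear constraints per correspondence and take the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2,2] == 1

def project(H, pt):
    """Map one image point through the homography (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

In a real system the four correspondences come from a calibration pattern on the ground, and the homography is applied per pixel (a warp) rather than per point; above-ground objects violate the ground-plane assumption, which is exactly the stretching distortion described above.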
To address these problems, this study constructs a set of radially non-uniform scaling functions that turn the distorted images into usable effective images, giving the driver a wider surround-view monitoring range, and builds a 3-D model that upgrades the existing 2-D visual assistance to a 3-D surround view whose monitoring viewpoint can be changed arbitrarily. Through a series of image transformations and compositions, three different monitoring modes are provided to the driver.
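The idea of a radially non-uniform scaling can be sketched as below. The thesis's actual scaling function is not reproduced here; the arctangent profile is only an illustrative choice of a mapping that leaves points near the image center almost unchanged while pulling over-stretched distant points back inward:

```python
import numpy as np

def radial_compress(points, center, k):
    """Radially non-uniform scaling: points near `center` are nearly
    unchanged (the map has unit slope at radius 0), while points far
    from it are pulled inward sublinearly, toward a limit of k*pi/2."""
    p = np.asarray(points, dtype=float) - center
    r = np.hypot(p[:, 0], p[:, 1])            # radius of each point
    r_new = k * np.arctan(r / k)              # r_new ~ r for r << k
    scale = np.where(r > 0, r_new / np.maximum(r, 1e-12), 1.0)
    return p * scale[:, None] + center
```

Applied as an inverse remap over the warped top-view image, such a function keeps the near-ground region metrically faithful while compressing the stretched periphery into a usable wide-scoped view.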
In addition, we propose an image-based reversing-guidance technique. Ground feature points are found in the reversing images, and by tracking the displacement of these points between consecutive frames, the vehicle's trajectory is inferred. This technique can replace the function of the steering sensor mounted under the steering wheel in existing reversing-assistance systems: the motion of ground feature points in the images substitutes for the sensor's measurement of the steering angle from which the trajectory is computed, replacing hardware with software. With only four wide-angle cameras installed, both wide-scoped surround top-view monitoring and reversing-trajectory estimation are achieved. This not only saves equipment cost, but also lets drivers purchase a reversing-guidance system after the vehicle leaves the factory, without driving the car back to the dealer to have a steering sensor installed, popularizing reversing-guidance technology and making it adaptable to different vehicle models.
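The trajectory-from-features step (Sections 5.4 and 5.5 in the contents mention a geometric transformation computed with the centroid property) can be sketched with the standard centroid/SVD rigid-registration method. This is a sketch under the assumption that matched ground feature points in two consecutive top-view frames are given; the function name is illustrative:

```python
import numpy as np

def rigid_motion(prev_pts, curr_pts):
    """Recover the 2-D rigid transform (rotation R, translation t) with
    curr ~ R @ prev + t from matched ground feature points, using the
    centroid/SVD (Kabsch) method.  The vehicle's own motion between the
    two frames is the inverse of this point transform."""
    P = np.asarray(prev_pts, dtype=float)
    Q = np.asarray(curr_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)    # centroids
    H = (P - cp).T @ (Q - cq)                  # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

The heading change per frame is `atan2(R[1,0], R[0,0])`; accumulating (R, t) over successive frames yields the reversing trajectory without any steering sensor.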
To improve current surround-view monitoring systems, we enhance 2-D visual assistance into wide-scoped 3-D visual assistance and provide a practical method to obtain the vehicle trajectory without using any steering sensor on the steering column.
In the wide-scoped surround top-view monitoring system, four wide-angle cameras mounted on the front, rear, and both sides of the vehicle capture sequential images; each set of four images is then composed into a single surround top-view image. The visual assistance offered by current parking assistance systems can still be improved: their monitoring range extends only 1.5 to 3 meters outward from the vehicle, which is too limited for the driver to understand the whole surrounding traffic.
Unlike current parking assistance systems, the proposed approach provides not only conventional surround top-view monitoring but also an effective wide-scoped surround top-view, a surround tilt-view, and image-based parking guiding, all with the same equipment. The guiding technique requires no steering sensor on the steering column; it computes the vehicle trajectory from images alone and presents it to the driver for reference, reducing cost and increasing product efficiency.
Abstract ii
Contents iii
List of Figures vi
List of Tables xi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 System overview 3
1.3 The thesis organization 7
Chapter 2 Related Works 8
2.1 Vehicle surrounding monitor systems 8
2.1.1 Nissan around view monitor 9
2.1.2 Honda multi-view camera system 10
2.1.3 Bird’s-eye vision system for vehicle surrounding monitoring 12
2.1.4 Omnidirectional cameras for backing-up aid 13
2.1.5 Monitoring surrounding areas of truck-trailer combinations 15
2.1.6 Omni video based approach 16
2.2 Image fusion 17
2.2.1 Recognising panoramas 18
2.2.2 Interactive digital photomontage 19
2.2.3 Poisson image editing 19
2.3 Parking guiding 20
2.3.1 Development of advanced parking assistance system 21
2.3.2 Odometry calibration of a Car-Like Mobile Robot 22
2.3.3 Light stripe projection based parking space detection for intelligent parking assist system 23
Chapter 3 Seamless Top-view Monitoring 24
3.1 Camera’s parameter calibration 24
3.1.1 Estimating internal parameters 29
3.1.2 Estimating external parameters 30
3.1.3 Calculate the maximum likelihood estimation 30
3.2 Wide-angle lens distortion correction 31
3.2.1 Distortion model 32
3.2.2 Estimation of distortion parameters 33
3.2.3 Estimation of optimal solution 34
3.3 Elimination for vignetting effect 35
3.3.1 Vignetting model 35
3.3.2 Parameter estimation for vignetting effect 36
3.3.3 Adjust the brightness 37
3.4 Top-view transformation 38
3.4.1 Top-view transformation with camera internal and external parameters 39
3.4.2 Top-view transformation with homography 40
3.5 Image registration 42
3.5.1 Geometric transformation 43
3.5.2 Calculation of rigid transformation 44
3.5.3 Interpolation and tabulation 45
3.6 Brightness uniformity 47
3.7 Color blending 48
Chapter 4 Wide-scoped Top-view Monitoring 50
4.1 Expand the useful images 50
4.1.1 The weakness of the traditional top-view monitoring 51
4.1.2 Improvement of top-view monitoring 52
4.2 Dual-camber modeling 55
4.2.1 The transformation from plane to camber surface 55
4.2.2 Smoothing the dual-camber model 56
Chapter 5 Image-based Parking Guiding 59
5.1 Trajectory model 59
5.2 Detecting feature points on the top-view image 60
5.3 Feature matching 63
5.3.1 Apply scale-invariant feature transform for feature matching 63
5.3.2 Apply sum of squared difference for feature matching 65
5.4 Calculation of geometric transformation 67
5.5 Calculate vehicular trajectory with property of centroid 69
Chapter 6 Experiments 73
6.1 Developmental environment 73
6.2 Camera calibration 76
6.3 Top-view transformation 78
6.4 Three selectable monitoring views 80
6.5 Image-based parking guiding 86
Chapter 7 Conclusion and Future works 89
7.1 Conclusion 89
7.2 Future works 91
References 92
[1]Agarwala, A., M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, “Interactive digital photomontage,” in Proc. ACM SIGGRAPH, Los Angeles, CA, Aug.8-12, 2004, pp.294-302.
[2]Brown, D. C., “Close-range camera calibration,” Photogrammetric Engineering, vol.37, no.8, pp.855-866, 1971.
[3]Brown, M. and D. G. Lowe, “Recognising panoramas,” in Proc. 9th IEEE Int. Conf. on Computer Vision, Nice, France, Oct.13-16, 2003, pp.1218-1225.
[4]Catmull, E. and R. Rom, “A class of local interpolating splines,” in Proc. Computer Aided Geometric Design, New York, Mar.18-21, 1974, pp.317-326.
[5]Claus, D. and A. W. Fitzgibbon, “A rational function lens distortion model for general cameras,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, San Diego, CA, Jun.20-25, 2005, vol.1, pp.213-219.
[6]Devernay, F. and O. Faugeras, “Straight lines have to be straight,” Machine Vision and Application, vol.13, no.1, pp.14-24, 2001.
[7]Ehlgen, T. and T. Pajdla, “Monitoring surrounding areas of truck-trailer combinations,” in Proc. of 5th Int. Conf. on Computer Vision Systems, Bielefeld, Germany, Mar.21-24, 2007, CD-ROM.
[8]Ehlgen, T., M. Thorn, and M. Glaser, “Omnidirectional cameras as backing-up aid,” in Proc. of IEEE Int. Conf. on Computer Vision., Rio de Janeiro, Brazil, Oct.14-21, 2007, pp.1-5.
[9]Faig, W., “Calibration of close-range photogrammetry systems: Mathematical formulation,” Photogrammetric Engineering and Remote Sensing, vol.41, no.12, pp.1479-1486, 1975.
[10]Faugeras, O. and G. Toscani, “The calibration problem for stereo,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, Miami Beach, FL, Jun. 1986, pp.15-20.
[11]Faugeras, O., T. Luong, and S. Maybank, “Camera self-calibration: Theory and experiments,” in Proc. of 2nd European Conf. on Computer Vision, Santa Margherita Ligure, Italy, May.19-22, 1992, vol.588, pp.321-334.
[12]Fischler, M. A. and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of The ACM, vol.24, pp.381-395, Jun. 1981.
[13]Fitzgibbon, A. W., “Simultaneous linear estimation of multiple view geometry and lens distortion,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, Kauai, Hawaii, Dec.11-13, 2001, vol.1, pp.125-132.
[14]Fleck, M. M., Perspective Projection: The Wrong Imaging Model, Technical Report TR 95-01, Computer Science, University of Iowa, 1995.
[15]Ganapathy, S., “Decomposition of transformation matrices for robot vision,” Pattern Recognition Letters, vol.2, pp.401-412, 1984.
[16]Gandhi, T. and M. M. Trivedi, “Dynamic panoramic surround map: motivation and omni video based approach,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, Washington DC, Jun.20-26, 2005, pp.61-69.
[17]Gennery, D., “Stereo-camera calibration,” in Proc. of 10th Image Understanding Workshop, Los Angeles, CA, Nov.7-8, 1979, pp.101-108.
[18]Geyer, C. and K. Daniilidis, “Catadioptric projective geometry,” International Journal of Computer Vision, vol.45, no.3, pp.223-243, 2001.
[19]Harris, C. and M. Stephens, “A combined corner and edge detector,” in Proc. of The Fourth Alvey Vision Conference, Manchester, Aug.31-Sep.2, 1988, pp.147-151.
[20]Jung, H. G., D. S. Kim, P. J. Yoon, and J. Kim, “Light stripe projection based parking space detection for intelligent parking assist system,” in Proc. IEEE Intelligent Vehicles Symp., Istanbul, Turkey, Jun.13-15, 2007, pp.962-968.
[21]Kang, S. B. and R. Weiss, “Can we calibrate a camera using an image of a flat, textureless lambertian surface?,” in Proc. European Conf. on Computer Vision, Dublin, Ireland, June. 26- July.1, 2000, pp.640-653.
[22]Lee, K., W. Chung, H. Chang, and P. Yoon, “Odometry calibration of a car-like mobile robot,” in Proc. Int. Conf. on Control, Automation and Systems, Seoul, Korea, Oct.17-20, 2007, pp.684-689.
[23]Liu, Y. C., K. Y. Lin, and Y. S. Chen, “Bird’s-eye view vision system for vehicle surrounding monitoring,” in Proc. Conf. Robot Vision, Berlin, Germany, Feb. 18-20, 2008, pp.207-218.
[24]Lowe, D. G., "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol.60, pp.91-110, Nov. 2004.
[25]Lucas, B. D. and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. 7th Int. Joint Conf. on Artificial Intelligence, Vancouver, 1981, pp.674-679.
[26]Marquardt, D. “An algorithm for least-squares estimation of nonlinear parameters,” SIAM Journal on Applied Mathematics, vol.11, pp.431-441, 1963.
[27]Maybank, S. J. and O. D. Faugeras, “A theory of self-calibration of a moving camera,” International Journal of Computer Vision, vol.8, no.2, pp.123-152, 1992.
[28]Moravec, H. P., “Towards automatic visual obstacle avoidance,” in Proc. 5th Int. Joint Conf. on Artificial Intelligence, Tokyo, 1977, pp.584.
[29]Perez, P., M. Gangnet, and A. Blake, “Poisson image editing,” ACM Trans. on Graphics, vol.22, no.3, pp.313-318, 2003.
[30]Slama, C. C., editor., Manual of Photogrammetry, 4th edition, American Society of Photogrammetry and Remote Sensing, Falls Church, Virginia, 1980.
[31]Tsai, R. Y., “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf tv cameras and lenses,” IEEE Journal of Robotics and Automation, vol.3, no.4, pp.323-344, 1987.
[32]Wada, M., K. S. Yoon, and H. Hashimoto, “Development of advanced parking assistance system,” IEEE Trans. on Industrial Electronics, vol.50, no.1, pp.4-17, 2003.
[33]Wei, G. and S. Ma, “A complete two-plane camera calibration method and experimental comparisons,” in Proc. of 4th Int. Conf. on Computer Vision, Berlin, Germany, May 11-14, 1993, pp.439-446.
[34]Weng, J., P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.14, no.10, pp.965-980, 1992.
[35]Zhang, Z., “A flexible new technique for camera calibration,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.22, no.11, pp.1330-1334, 2000.