臺灣博碩士論文加值系統

Detail Display

Author: 蔡嘉宏
Author (English): Jia-Hong Tsai
Thesis Title: 應用影像切割技巧建立室內通道的導航系統
Thesis Title (English): A vision-based navigation system for corridor environment
Advisor: 李祖添
Advisor (English): Tsu-Tian Lee
Degree: Master's
Institution: 國立交通大學 (National Chiao Tung University)
Department: 電機與控制工程系 (Department of Electrical and Control Engineering)
Discipline: Engineering
Academic Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Publication Year: 2002
Graduation Academic Year: 90 (ROC calendar; 2001–2002)
Language: English
Number of Pages: 57
Keywords (Chinese): 自走車 (autonomous land vehicle), 區域成長法 (region growing method)
Usage statistics:
  • Cited by: 4
  • Views: 217
  • Rating:
  • Downloads: 46
  • Bookmarks: 0
Abstract (Chinese): An autonomous land vehicle navigation system must be able to extract useful information from an unknown environment. In this thesis we propose a vision-based navigation system that requires no prior knowledge of the environment; it not only steers the vehicle in the correct direction but also recognizes corridor junctions and doors. Exploiting the color difference between the floor and its surroundings, we extract the floor region with an improved region growing method that uses the growth behavior of each layer to decide whether over-growing has occurred, and this successfully solves the over-growing problem. We then apply edge enhancement to extract the edges of interest, identify doors from the length and position of these edges and the color between them, and finally locate corridor junctions by template matching. Together, these steps provide the information needed for indoor navigation.
Abstract (English): Autonomous land vehicle (ALV) navigation requires extracting meaningful information from unknown environments. In this thesis, we present a vision-based navigation system for indoor environments that needs no prior information. Our approach enables the ALV to move toward the vanishing point and to perform door recognition and junction detection. Because the floor and walls of indoor environments differ in color, we extract the floor region with a region-based color image segmentation technique, and we propose a converge-guarantee region growing method that improves on the conventional region-growing algorithm. This algorithm avoids over-growing by monitoring the number of pixels added in each growing layer. Doors are recognized from the positions and geometric relationships of vertical edges and from the color values between pairs of vertical edges, while corridor junctions are identified by template matching. Finally, we extract junctions, doors, and the vanishing point for navigation in corridors.
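The abstract describes the converge-guarantee region growing only at a high level. The following Python sketch illustrates the general idea of growing the floor region layer by layer and watching the number of newly accepted pixels per layer; the color tolerance, jump ratio, warm-up layer count, and the simple seed-color homogeneity test are illustrative assumptions, not the thesis's actual criteria.

```python
import numpy as np

def grow_floor_region(image, seed, color_tol=30.0, jump_ratio=2.0, warmup_layers=5):
    """Layer-by-layer seeded region growing with an over-growth check.

    `image` is an H x W x 3 color array and `seed` a (row, col) tuple inside
    the floor.  A pixel joins the region if its color lies within `color_tol`
    of the seed color.  After a few warm-up layers, a layer whose count of
    newly accepted pixels jumps to more than `jump_ratio` times the previous
    layer is taken as a sign that growth has leaked past the floor boundary,
    and growing stops.  All thresholds here are assumptions for illustration.
    """
    h, w = image.shape[:2]
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    seed_color = image[seed].astype(float)

    frontier = [seed]
    prev_layer = 1
    layer = 0
    while frontier:
        layer += 1
        next_frontier = []
        for y, x in frontier:
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                    # homogeneity test against the seed color
                    if np.linalg.norm(image[ny, nx].astype(float) - seed_color) < color_tol:
                        region[ny, nx] = True
                        next_frontier.append((ny, nx))
        grown = len(next_frontier)
        # over-growth check: a sudden jump in layer size suggests a leak
        if layer > warmup_layers and grown > jump_ratio * max(prev_layer, 1):
            break
        prev_layer = grown
        frontier = next_frontier
    return region
```

In practice one would seed the growth near the bottom-center of the camera frame, where the floor is almost certainly visible; the thesis's own homogeneity function and stopping rule are more elaborate than this sketch.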
Table of Contents
致謝 (Acknowledgments)  i
Contents  ii
Abstract in Chinese  iv
Abstract in English  v
List of Figures  vi
List of Tables  viii
1 Introduction  1
  1.1 Motivation  1
  1.2 Background of our approach  1
    1.2.1 Color space  1
    1.2.2 Image segmentation technique  2
  1.3 Literature survey of related works  2
    1.3.1 Indoor environment navigation systems  2
    1.3.2 Outdoor environment navigation systems  3
    1.3.3 Image segmentation techniques  5
  1.4 Brief sketch of the contents  6
2 Image Processing Techniques  7
  2.1 Color space  7
    2.1.1 R-G-B color space  7
    2.1.2 Y-I-Q color space  8
    2.1.3 H-S-I color space  8
  2.2 Edge detection  9
    2.2.1 Sobel operator  10
    2.2.2 Prewitt operator  10
  2.3 Size filter  11
  2.4 Component labeling for clustering  11
    2.4.1 Recursive algorithm  12
    2.4.2 Sequential algorithm  12
  2.5 Image segmentation techniques  13
    2.5.1 Thresholding method  13
    2.5.2 Region-growing method  14
    2.5.3 Pixel clustering classifier  15
  2.6 Pixel clustering classifier  16
    2.6.1 Chain codes  16
    2.6.2 Template matching  17
  2.7 Distance estimation  17
    2.7.1 Stereo imaging method  17
    2.7.2 Single image method  19
3 Vision System Structure  20
  3.1 Image data and data compression  20
  3.2 Threshold value generator  22
  3.3 Road region extractor  24
    3.3.1 Homogeneous function  26
    3.3.2 Outline of the self-converge algorithm  26
    3.3.3 Road region segmentation process  28
  3.4 Vanishing point extractor  31
    3.4.1 Roadside extraction  31
    3.4.2 Vanishing point  32
  3.5 Corridor junction detection system  33
    3.5.1 Pre-processing for junction recognition  33
    3.5.2 Template matching  35
  3.6 Door recognition system  36
    3.6.1 Vertical edge extractor  37
    3.6.2 Noise filter  38
    3.6.3 Feature matching system  39
4 Experimental Results and Discussion  42
  4.1 Comparison of different color spaces with the conventional region growing method  42
  4.2 Performance of different segmentation techniques  44
  4.3 Experimental results of our navigation system  47
    4.3.1 Junction recognition  47
    4.3.2 Discussion of door recognition  49
    4.3.3 Experimental results in corridors  50
5 Conclusions  52
References  54