Graduate Student: 黃政揆
Graduate Student (English): Huang, Jheng-Kuei
Thesis Title (Chinese): 利用雙鏡面環場影像攝影和超音波感測技術作戶外自動車學習與導航之研究
Thesis Title (English): A Study on Learning and Guidance for Outdoor Autonomous Vehicle Navigation by Two-mirror Omni-directional Imaging and Ultrasonic Sensing Techniques
Advisor: 蔡文祥 (Tsai, Wen-Hsiang)
Degree: Master's
University: National Chiao Tung University
Department: Institute of Multimedia Engineering
Discipline: Computer Science
Field: Software Development
Thesis Type: Academic thesis
Year of Publication: 2010
Graduation Academic Year: 98 (2009-2010)
Language: English
Number of Pages: 113
Chinese Keywords: omni-image, navigation, path learning, stereo camera, collision avoidance
English Keywords: stereo omni-camera, avoidance of obstacles, guidance, navigation, path learning, omni-image
Usage statistics:
  • Times cited: 2
  • Views: 341
  • Downloads: 85
With the development of computer vision technologies, stereo cameras have become increasingly popular. This study uses a new type of stereo camera and new guidance techniques to construct a robot guide dog that leads its user along sidewalk environments.
In this thesis, we propose a design method and formulas for a new type of stereo camera, with which a user can easily design such a camera. We then propose a camera calibration method based on a space-mapping approach to calibrate this stereo camera. Based on the rotational-invariance property about the common axis, we propose a method for computing 3D data with this new stereo camera; unlike other methods, the system computes corresponding image points and 3D data without first transforming the omni-images into panoramic images. In addition, because the mechanical error accumulated by the vehicle while moving degrades the navigation computations, we propose an error-correction model to solve this problem. We also develop a dynamic camera-exposure adjustment method and a dynamic threshold adjustment method to cope with uneven lighting in the environment.
The system uses the curbstones of the sidewalk as navigation features, and two methods are proposed to extract the corresponding feature points. In the learning mode, the system computes the 3D data of the feature points, determines the direction and distance for travel, drives automatically accordingly, and records and analyzes path nodes to build an environment map. We also propose a human-interaction technique that allows the user to control the vehicle by hand poses at any time; the system then suspends the feature-extraction procedure and performs blind navigation.
In the navigation mode, we propose a method for analyzing sequences of ultrasonic signals so that the vehicle can match the user's walking pace, adjusting its own speed while guiding the user through the environment. We further propose an improved obstacle-avoidance method and a method for computing the vehicle's coordinates, allowing the vehicle to judge the height of an obstacle and avoid it. Finally, experimental results are presented to show the completeness and feasibility of the system.

With the progress of computer vision technologies, 3D stereo cameras have become increasingly popular. In this study, a new imaging device and new guidance techniques are proposed to construct an autonomous vehicle that serves as a robot guide dog, navigating on sidewalks to guide blind people.
A general formula is proposed for designing a new stereo camera consisting of two mirrors and a single conventional projective camera; with this formula, other stereo cameras of the same type can be designed easily. Then, a calibration technique for this type of camera, based on a so-called pano-mapping technique, is proposed. When an autonomous vehicle navigates in the environment, incrementally accumulating mechanical error is a serious problem, so a calibration model based on a curve-fitting technique is proposed to correct such errors. Also, a 3D data acquisition technique using the proposed two-mirror omni-camera, based on the rotational-invariance property of the omni-image, is proposed; the 3D data can be obtained directly, without transforming the acquired omni-images into panoramic images.
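The curve-fitting idea behind the odometer calibration can be illustrated with a minimal sketch. The calibration data, the polynomial order, and the function name below are hypothetical, chosen only to show how a correction curve maps raw odometer readings back to ground-truth values:

```python
import numpy as np

# Hypothetical calibration data: ground-truth turn angles (degrees) versus
# the angles the odometer actually reports as mechanical error accumulates.
true_angle = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
raw_angle  = np.array([0.0, 10.8, 21.5, 32.4, 43.0, 53.9])

# Fit a low-order polynomial mapping raw readings -> true angles.
coeffs = np.polyfit(raw_angle, true_angle, deg=2)

def correct_odometer(raw):
    """Map a raw odometer angle to a corrected angle via the fitted curve."""
    return float(np.polyval(coeffs, raw))
```

At run time, each raw odometer reading would be passed through a function like `correct_odometer` before entering the navigation computations.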
The autonomous vehicle is designed to follow the curb line of the sidewalk using a line-following technique. In the path-learning procedure, two methods are proposed to extract curbstone feature points. If no curbstone features exist, or the features are hard to extract, a new human-interaction technique using hand-pose position detection and encoding is proposed for issuing the user's guidance commands to the vehicle. To adapt the adopted image-processing operations to the varying light conditions of the outdoor environment, two techniques, called dynamic exposure adjustment and dynamic threshold adjustment, are proposed. To create a path map, a path-planning technique is proposed that reduces the number of resulting path nodes, saving time in path corrections during navigation sessions.
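The line-following step can be sketched as a least-squares line fit to the extracted curbstone feature points. The coordinate convention, the point values, and the function name below are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

# Hypothetical curbstone feature points in vehicle coordinates
# (x forward, y lateral offset to the curb, both in cm).
points = np.array([
    [ 50.0, 30.2],
    [100.0, 29.8],
    [150.0, 30.5],
    [200.0, 29.6],
])

# Least-squares fit of a line y = m*x + b through the curb points;
# the vehicle steers to keep a fixed lateral offset from this line.
x, y = points[:, 0], points[:, 1]
m, b = np.polyfit(x, y, deg=1)

def lateral_offset(x_ahead):
    """Predicted lateral distance to the curb at a point x_ahead in front."""
    return m * x_ahead + b
```

A steering controller would compare `lateral_offset` at some look-ahead distance against the desired offset and turn to reduce the difference.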
In the navigation procedure after learning, unexpected obstacles may block the navigation path. A technique using the concept of a virtual node is proposed to design a new path around such obstacles. Finally, to allow the vehicle to guide a blind person walking smoothly on a sidewalk, a sonar signal processing scheme is proposed to synchronize the speed of the vehicle with that of the person, based on computing the location of the vehicle with respect to the person from the sonar signals.
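The speed synchronization between vehicle and person can be sketched as a simple proportional controller driven by the sonar-measured gap. The gain, distances, and function name below are illustrative assumptions rather than the scheme actually derived in the thesis:

```python
# Hypothetical controller: the vehicle measures the person's distance
# behind it with sonar and adjusts its speed so the gap converges to a
# desired following distance.
DESIRED_GAP_CM = 80.0
BASE_SPEED = 20.0   # cm/s, assumed cruising speed
GAIN = 0.1          # proportional gain (illustrative value)
MAX_SPEED = 40.0

def adjust_speed(sonar_gap_cm):
    """Speed up when the person closes in; slow down when they fall behind."""
    speed = BASE_SPEED + GAIN * (DESIRED_GAP_CM - sonar_gap_cm)
    return max(0.0, min(MAX_SPEED, speed))
```

Clamping to `[0, MAX_SPEED]` keeps the vehicle from reversing or outrunning the person when the sonar reading jumps.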
A series of experiments was conducted on a sidewalk on the campus of National Chiao Tung University. The experimental results show the flexibility and feasibility of the proposed methods for the robot guide-dog application in outdoor environments.

ABSTRACT iii
CONTENT vi
LIST OF FIGURES ix
LIST OF TABLES xiii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Survey of Related Works 2
1.2.1 Types of guide dog robot 2
1.2.2 Types of omni-camera 3
1.2.3 Types of stereo omni-camera 4
1.2.4 Different learning and guidance methods for autonomous vehicles 5
1.3 Overview of Proposed System 6
1.4 Contributions of This Study 7
1.5 Thesis Organization 9
Chapter 2 Proposed Ideas and System Configuration 10
2.1 Introduction 10
2.1.1 Idea of proposed learning methods for outdoor navigation 10
2.1.2 Idea of proposed guidance methods for outdoor navigation 12
2.2 System configuration 13
2.2.1 Hardware configuration 13
2.2.2 System configuration 17
Chapter 3 Design of a New Type of Two-mirror Omni-camera 23
3.1 Review of Conventional Omni-cameras 23
3.1.1 Derivation of Equation of Projection on Omni-image 24
3.2 Proposed Design of a Two-mirror Omni-camera 26
3.2.1 Idea of design 26
3.2.2 Details of design 27
3.2.3 3D data acquisition 34
Chapter 4 Calibrations of Vehicle Odometer and Proposed Two-mirror Omni-camera 37
4.1 Introduction 37
4.2 Calibration of Vehicle Odometer 37
4.2.1 Idea of proposed odometer calibration method 37
4.2.2 Problem definition 37
4.2.3 Proposed curve fitting for mechanical error correction 38
4.2.4 Proposed calibration method 41
4.3 Calibration of Designed Two-mirror Omni-camera 42
4.3.1 Problem definition 43
4.3.2 Idea of proposed calibration method 43
4.3.3 Proposed calibration process 44
4.4 Experimental Results 51
Chapter 5 Supervised Learning of Navigation Path by Semi-automatic Navigation and Hand Pose Guidance 56
5.1 Idea of Proposed Supervised Learning Method 56
5.2 Coordinate Systems 57
5.3 Proposed Semi-automatic Vehicle Navigation for Learning 58
5.3.1 Ideas and problems of semi-automatic vehicle navigation 58
5.3.2 Adjustment of the exposure value of the camera 59
5.3.3 Single-class classification of HSI colors for sidewalk detection 62
5.3.4 Proposed method for guide line detection 64
5.3.5 Line fitting technique for sidewalk following 69
5.3.6 Proposed method for semi-automatic navigation 72
5.4 Detection of Hand Poses as Guidance Commands 73
5.4.1 Idea of proposed method for hand pose detection 73
5.4.2 Use of YCbCr colors for hand pose detection 74
5.4.3 Proposed hand shape fitting technique for hand pose detection 77
5.4.4 Proposed dynamically thresholding adjustment 80
5.4.5 Proposed hand pose detection process 81
5.5 Proposed Path Planning Method Using Learned Data 82
5.5.1 Idea of path planning 82
5.5.2 Proposed path planning process 83
Chapter 6 Vehicle Guidance on Sidewalks by Curb Following 85
6.1 Idea of Proposed Guidance Method 85
6.1.1 Proposed synchronization method of vehicle navigation and human walking speeds 85
6.2 Proposed Obstacle Detection and Avoidance Process 88
6.2.1 Proposed method for computation of the vehicle position 88
6.2.2 Detection of obstacles 90
6.2.3 Ideas of proposed method for obstacle avoidance 93
6.2.4 Proposed method for obstacle avoidance 96
Chapter 7 Experimental Results and Discussions 99
7.1 Experimental Results 99
7.2 Discussions 104
Chapter 8 Conclusions and Suggestions for Future Works 107
8.1 Conclusions 107
8.2 Suggestions for Future Works 108
References 110

[1] J. Borenstein and I. Ulrich, “The GuideCane - A Computerized Travel Aid for the Active Guidance of Blind Pedestrians,” Proceedings of the IEEE International Conference on Robotics and Automation, Albuquerque, NM, Apr. 21-27, 1997, pp. 1283-1288.
[2] C. C. Sun and M. C. Su, “A Low-Cost Travel-Aid for the Blind,” M. S. Thesis, Department of Computer Science and Information Engineering, National Central University, Jhongli, Taoyuan, Taiwan, June 2005.
[3] S. Tachi and K. Komoriya, “Guide dog robot,” 2nd International Congress on Robotics Research, pp. 333-340, Kyoto, Japan, 1984.
[4] The Robot World.
http://www.robotworld.org.tw/
[5] National Yunlin University of Science and Technology.
http://www.swcenter.yuntech.edu.tw/
[6] J. Kannala and S. Brandt, “A Generic Camera Calibration Method for Fish-Eye Lenses,” Proceedings of the 17th International Conference on Pattern Recognition, Vol. 1, pp. 10-13, August 2004; Cambridge, U.K.
[7] C. J. Wu, “New Localization and Image Adjustment Techniques Using Omni-Cameras for Autonomous Vehicle Applications,” Ph. D. Dissertation, Institute of Computer Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, July 2009.
[8] S. K. Nayar, “Catadioptric Omni-directional Camera,” Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 482-488, June 1997, San-Juan, Puerto Rico.
[9] S. Baker and S. K. Nayar, “A Theory of Single-Viewpoint Catadioptric Image Formation,” International Journal of Computer Vision, Vol. 35, No. 2, pp. 175-196, November 1999.
[10] H. Ukida, N. Yamato, Y. Tanimoto, T. Sano and H. Yamamoto, “Omni-directional 3D Measurement by Hyperbolic Mirror Cameras and Pattern Projection,” Proceedings of IEEE International Instrumentation and Measurement Technology Conference, Victoria, Vancouver Island, Canada, May 12-15, 2008.
[11] Z. Zhu, “Omnidirectional Stereo Vision,” 10th IEEE ICAR, August 22-25, 2001, Budapest, Hungary.
[12] L. He, C. Luo, F. Zhu, Y. Hao, J. Ou and J. Zhou, “Depth Map Regeneration via Improved Graph Cuts Using a Novel Omnidirectional Stereo Sensor,” Proceedings of 11th IEEE International Conference on Computer Vision (ICCV 2007), Rio de Janeiro, Oct. 14-21, 2007, pp. 1-8.
[13] S. Yi and N. Ahuja, “An Omnidirectional Stereo Vision System Using a Single Camera,” Proceedings of 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, Aug. 20-24, 2006.
[14] G. Jang, S. Kim and I. Kweon, “Single Camera Catadioptric Stereo System,” Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS 2005), 2005.
[15] K. C. Chen and W. H. Tsai, “A study on autonomous vehicle navigation by 3D object image matching and 3D computer vision analysis for indoor security patrolling applications,” Proceedings of 2007 Conference on Computer Vision, Graphics and Image Processing, Miaoli, Taiwan, June 2007.
[16] J. Y. Wang and W. H. Tsai, “A Study on Indoor Security Surveillance by Vision-based Autonomous Vehicles with Omni-cameras on House Ceilings,” M. S. Thesis, Institute of Computer Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2009.
[17] S. Y. Tsai and W. H. Tsai, "Simple automatic path learning for autonomous vehicle navigation by ultrasonic sensing and computer vision techniques," Proceedings of 2008 International Computer Symposium, vol. 2, pp. 207-212, Taipei, Taiwan, Republic of China.
[18] K. T. Chen and W. H. Tsai, "A study on autonomous vehicle guidance for person following by 2D human image analysis and 3D computer vision techniques," Proceedings of 2007 Conference on Computer Vision, Graphics and Image Processing, Miaoli, Taiwan, Republic of China.
[19] M. F. Chen and W. H. Tsai, "Automatic learning and guidance for indoor autonomous vehicle navigation by ultrasonic signal analysis and fuzzy control techniques," Proceedings of 2009 Workshop on Image Processing, Computer Graphics, and Multimedia Technologies, National Computer Symposium, pp. 473-482, Taipei, Taiwan, Republic of China.
[20] Y. T. Wang and W. H. Tsai, “Indoor security patrolling with intruding person detection and following capabilities by vision-based autonomous vehicle navigation,” Proceedings of 2006 International Computer Symposium (ICS 2006) – International Workshop on Image Processing, Computer Graphics, and Multimedia Technologies, Taipei, Taiwan, Republic of China, December 2006.
[21] K. L. Chiang and W. H. Tsai, “Security Patrolling and Danger Condition Monitoring in Indoor Environments by Vision-based Autonomous Vehicle Navigation,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2005.
[22] S. W. Jeng and W. H. Tsai, "Using pano-mapping tables for unwarping of omni-images into panoramic and perspective-view images," IET Image Processing, Vol. 1, No. 2, pp. 149-155, June 2007.
[23] J. Gluckman, S. K. Nayar and K. J. Thoresz, “Real-Time Omnidirectional and Panoramic Stereo,” Proceeding of Image Understanding Workshop, vol. 1, pages 299–303, 1998.
[24] The MathWorks.
http://www.mathworks.com/access/helpdesk/help/toolbox/images/f8-20792.html
[25] The Dimensions of Colour by David Briggs.
http://www.huevaluechroma.com/093.php
[26] M. C. Chen and W. S. Tsai, “Vision-based security patrolling in indoor environments using autonomous vehicles,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2005.
