Author: 邱文欣 (Wen-Hsin Chiu)
Title: Deep-Learning-Based Monocular Distance Estimation and Outdoor Walking Control for a Robot
Advisor: 王文俊
Degree: Master
Institution: 國立中央大學 (National Central University)
Department: Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2019
Graduation academic year: 107
Language: Chinese
Pages: 70
Keywords (Chinese): 機器人控制; 避障控制; 深度學習; 單眼深度估測
Keywords (English): Robot control; Obstacle avoidance control; Deep learning; Monocular depth prediction
Metrics:
  • Cited by: 2
  • Views: 194
  • Downloads: 59
  • Bookmarked: 0
Chinese Abstract (translated):
This thesis designs and improves the walking guidance and obstacle avoidance functions of the outdoor guide robot of [1], making outdoor walking more reliable for visually impaired users. First, the user selects a destination on a smartphone; the phone plans a route through the Google Maps API and, from the robot's current distance and yaw angle relative to the destination, determines and sends go-straight, turn, or stop navigation commands to the main control computer. The main control computer captures images from a webcam, identifies the walkable road area with semantic segmentation, and uses deep learning to estimate a disparity map for obstacles. The estimated disparity is converted to depth through a reciprocal equation, and the obstacle distance is then determined from a depth histogram; this distance estimation achieves roughly 80% accuracy over the range 0.8 m to 4 m. Once the walkable road area is identified, the Hough line method draws the right-hand boundary of the walkable road, the road area is divided into several blocks, each representing a path segment, and suitable trajectory points are found experimentally. Fuzzy control then computes the angular velocities of the robot's left and right wheels so that the robot moves along the trajectory points. Because obstacle avoidance is required while moving, the deep-learning-plus-disparity method above infers obstacle distances between 0.8 m and 4 m from a single webcam image. If an obstacle lies within the central region of the image and is less than 3.5 m from the robot, the robot performs an avoidance maneuver. If an obstacle suddenly appears within 1 m ahead, the robot stops until the area within 1 m is clear, then resumes walking. Outdoor road experiments verify that the obstacle distance estimation is more accurate than the results of [1] and the robot's motion control is more stable than the method of [1], so the guide robot reaches its destination more accurately and safely.
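The abstract's depth pipeline converts a predicted disparity map to metric depth through a reciprocal equation and then reads off a single obstacle distance from a depth histogram. A minimal sketch of that idea follows; the focal length, baseline, bin count, and peak-picking rule are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Reciprocal disparity-to-depth conversion: depth = f * B / disparity,
    with f the focal length in pixels and B the stereo baseline in metres."""
    return (focal_px * baseline_m) / np.maximum(disparity, eps)

def obstacle_distance(depth_map, d_min=0.8, d_max=4.0, bins=32):
    """Estimate one obstacle distance from a depth map by taking the peak
    bin of the depth histogram inside the measurable 0.8-4 m range."""
    valid = depth_map[(depth_map >= d_min) & (depth_map <= d_max)]
    if valid.size == 0:
        return None  # no obstacle within the measurable range
    hist, edges = np.histogram(valid, bins=bins, range=(d_min, d_max))
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])  # bin centre, in metres
```

For a network trained on stereo pairs (the thesis cites Godard et al. [24] and a ZED stereo camera [23] for training data), the reciprocal relation above is the standard way to recover metric depth from predicted disparity.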
English Abstract:
This thesis designs and improves the moving-guidance and obstacle-avoidance functions of the guide robot from reference [1] so that the robot can better help blind users in daily life. First, the user selects the destination on a cell phone; the phone then plans the robot's moving path using Google Maps. According to the robot's current position and attitude and the destination position, the phone sends navigation commands to the computer on the robot. The robot uses a single webcam to capture the image ahead; a semantic segmentation method and a deep learning network find the accessible road area and predict the disparity of obstacles ahead of the robot, respectively. From the disparity and an inverse function, the depth map of the obstacle is obtained, and the distance between the robot and the obstacle is estimated by analyzing the depth histogram. In this study, distances from 0.8 m to 4 m are estimated with about 80% accuracy. Once the accessible area is obtained, a Hough line is created to represent the road border on the robot's right side, and the accessible road area ahead of the robot is divided into several rectangular blocks. Since the robot is constrained to move along the right side of the road, a trajectory point can be found in each block. A fuzzy control technique adjusts the speeds of both wheels so that the robot follows the trajectory points. Based on the above obstacle distance estimation, when an obstacle is at the center of the image and its estimated distance falls below about 3.5 m, the robot starts to avoid it; if an obstacle suddenly appears within 1 m ahead, the robot stops and resumes moving only after the obstacle disappears.
According to outdoor experiments on the NCU campus, the obstacle distance estimation is more accurate and the robot's motion control is much more stable than in [1], so the robot can guide a blind user to the destination safely and accurately.
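The smartphone navigation step computes the distance and bearing between the robot's latitude/longitude and the destination (the thesis cites the haversine formula [26] and spherical trigonometry [27] for this) and then issues a go-straight, turn, or stop command. A rough sketch follows; the stop radius and turn threshold are hypothetical values, not the thesis's actual parameters.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def navigation_command(distance_m, yaw_error_deg,
                       stop_radius_m=2.0, turn_threshold_deg=20.0):
    """Map distance-to-goal and yaw error to a coarse navigation command
    (thresholds are illustrative assumptions)."""
    if distance_m < stop_radius_m:
        return "stop"
    if yaw_error_deg > turn_threshold_deg:
        return "turn_right"
    if yaw_error_deg < -turn_threshold_deg:
        return "turn_left"
    return "straight"
```

Here the yaw error would be the difference between the desired bearing to the destination and the robot's current heading, wrapped into (-180°, 180°].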
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vi
List of Tables ix
Chapter 1 Introduction 1
1.1 Motivation and Background 1
1.2 Literature Review 1
1.3 Objectives 3
1.4 Thesis Organization 4
Chapter 2 System Architecture and Hardware 5
2.1 System Architecture 5
2.2 Hardware Architecture 7
2.2.1 Robot Side 7
2.2.2 Smartphone Side 11
2.2.3 Stereo Camera for Deep-Learning Training Data Collection 11
Chapter 3 Monocular Distance Estimation with Deep Learning 13
3.1 Deep Learning Network Architecture for Single-Image Depth Estimation 13
3.2 Training Data for the Deep Learning Network 15
3.3 Disparity-to-Depth Conversion Equation 16
3.4 Computation of Obstacle Depth Estimation 17
Chapter 4 Robot Path Planning and Obstacle Avoidance Control 20
4.1 Smartphone Navigation 20
4.1.1 Distance Between Two Latitude/Longitude Points 22
4.1.2 Bearing Between Two Latitude/Longitude Points 22
4.1.3 Navigation Control 22
4.2 Robot Path Planning 24
4.2.1 Reference Line for Keeping Right 25
4.2.2 Going Straight 29
4.2.3 Turning 35
4.2.4 Obstacle Avoidance 35
4.3 Fuzzy Control of Robot Movement 38
Chapter 5 Experimental Results 40
5.1 Depth Estimation 40
5.2 Robot Control 44
5.2.1 Smartphone Navigation 44
5.2.2 Obstacle Avoidance 46
5.2.3 Going Straight 48
5.2.4 Left Turn 51
5.2.5 Right Turn 52
Chapter 6 Conclusions and Future Work 53
6.1 Conclusions 53
6.2 Future Work 53
References 55
[1] 賴怡靜, "Outdoor navigation robot with deep-learning-based distance estimation and automatic obstacle avoidance," M.S. thesis, Department of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2018.
[2] K. Karsch, C. Liu, and S. B. Kang, "Depth transfer: Depth extraction from video using non-parametric sampling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 11, pp. 2144-2158, 2014.
[3] D. Eigen, C. Puhrsch, and R. Fergus, "Depth map prediction from a single image using a multi-scale deep network," in Advances in neural information processing systems, 2014, pp. 2366-2374.
[4] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab, "Deeper depth prediction with fully convolutional residual networks," in 2016 Fourth international conference on 3D vision (3DV), 2016: IEEE, pp. 239-248.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
[6] J. Zbontar and Y. LeCun, "Stereo matching by training a convolutional neural network to compare image patches," Journal of Machine Learning Research, vol. 17, pp. 1-32, 2016.
[7] N. Mayer et al., "A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4040-4048.
[8] J. Xie, R. Girshick, and A. Farhadi, "Deep3d: Fully automatic 2d-to-3d video conversion with deep convolutional neural networks," in European Conference on Computer Vision, 2016: Springer, pp. 842-857.
[9] J. Flynn, I. Neulander, J. Philbin, and N. Snavely, "Deepstereo: Learning to predict new views from the world's imagery," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5515-5524.
[10] R. Garg, V. K. BG, G. Carneiro, and I. Reid, "Unsupervised cnn for single view depth estimation: Geometry to the rescue," in European Conference on Computer Vision, 2016: Springer, pp. 740-756.
[11] C. Godard, O. Mac Aodha, M. Firman, and G. Brostow, "Digging into self-supervised monocular depth estimation," arXiv preprint arXiv:1806.01260, 2018.
[12] V. Casser, S. Pirk, R. Mahjourian, and A. Angelova, "Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos," arXiv preprint arXiv:1811.06152, 2018.
[13] L. Doitsidis, A. Nelson, K. Valavanis, M. Long, and R. Murphy, "Experimental validation of a MATLAB based control architecture for multiple robot outdoor navigation," in Proceedings of the 2005 IEEE International Symposium on Intelligent Control and Mediterranean Conference on Control and Automation, 2005: IEEE, pp. 1499-1505.
[14] L. Doitsidis, K. P. Valavanis, and N. Tsourveloudis, "Fuzzy logic based autonomous skid steering vehicle navigation," in Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), 2002, vol. 2: IEEE, pp. 2171-2177.
[15] G. Oriolo, G. Ulivi, and M. Vendittelli, "Real-time map building and navigation for autonomous robots in unknown environments," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 28, no. 3, pp. 316-333, 1998.
[16] C. Rusu, I. Birou, and E. Szöke, "Fuzzy based obstacle avoidance system for autonomous mobile robot," in 2010 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), 2010, vol. 1: IEEE, pp. 1-6.
[17] J. Levinson et al., "Towards fully autonomous driving: Systems and algorithms," in 2011 IEEE Intelligent Vehicles Symposium (IV), 2011: IEEE, pp. 163-168.
[18] B. Huval et al., "An empirical evaluation of deep learning on highway driving," arXiv preprint arXiv:1504.01716, 2015.
[19] F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard, "3-D mapping with an RGB-D camera," IEEE Transactions on Robotics, vol. 30, no. 1, pp. 177-187, 2013.
[20] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890-898, 2000.
[21] K. I. Khalilullah, S. Ota, T. Yasuda, and M. Jindai, "Development of robot navigation method based on single camera vision using deep learning," in 2017 56th annual conference of the society of instrument and control engineers of Japan (SICE), 2017: IEEE, pp. 939-942.
[22] W. Born and C. Lowrance, "Smoother Robot Control from Convolutional Neural Networks Using Fuzzy Logic," in 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018: IEEE, pp. 695-700.
[23] ZED (Jun. 2019). [Online]. Available: https://www.stereolabs.com/zed/.
[24] C. Godard, O. Mac Aodha, and G. J. Brostow, "Unsupervised monocular depth estimation with left-right consistency," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 270-279.
[25] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, "Enet: A deep neural network architecture for real-time semantic segmentation," arXiv preprint arXiv:1606.02147, 2016.
[26] Haversine formula (Jun. 2019). [Online]. Available: https://en.wikipedia.org/wiki/Haversine_formula.
[27] Spherical trigonometry (Jun. 2019). [Online]. Available: https://en.wikipedia.org/wiki/Spherical_trigonometry.