Author: Chen-Hsiu Yang (楊辰修)
Title (Chinese): 應用機器學習辨識可供抓枝機器人抓握之凸起物場景
Title (English): Application of Machine Learning in Wall Protrusion Recognition for Brachiation Robot
Advisor: Chi-Ying Lin (林紀穎)
Committee members: 林顯易, 劉孟昆
Defense date: 2019-07-29
Degree: Master's
Institution: National Taiwan University of Science and Technology (國立臺灣科技大學)
Department: Department of Mechanical Engineering (機械工程系)
Discipline: Engineering
Field of study: Mechanical Engineering
Document type: Academic thesis
Year of publication: 2019
Academic year of graduation: 107 (2018–2019)
Language: Chinese
Number of pages: 100
Keywords (Chinese): 影像辨識, 電腦視覺, 機器學習, 攀爬機器人
Keywords (English): pattern recognition, machine learning, computer vision, climbing robot
Abstract:
Climbing robots are built to replace humans in high-risk work such as wall and window cleaning, exterior-wall structural inspection, and pipeline maintenance, so autonomy has become a key factor in a robot's value. An autonomous climbing robot must carry a sensing system, such as a vision sensor, to perceive its environment. Although image processing has achieved great progress in computer vision and pattern recognition, the complexity of exterior-wall protrusions and their surroundings still makes it difficult for conventional image processing methods to identify objects in a photo and determine their positions. This study therefore develops a neural-network-based vision system for a fully autonomous climbing robot. The network recognizes and localizes three kinds of protruding objects that commonly appear on residential exterior walls and can be grasped by the climbing robots designed in our lab: air conditioners, concrete eaves, and sign boards. Because a climbing robot cannot carry a computer heavy enough for neural-network computation, this study adopts the concepts of the Internet of Things (IoT) and cloud computing: the robot transmits photos over a wireless network to a server, which executes the neural network. The recognition system is implemented with a Fast Region-based Convolutional Neural Network (Fast R-CNN) architecture and is combined with a stereo depth camera to estimate the actual distance between object and robot. Experiments on real outdoor residential walls under different light sources, together with tests simulating the robot's different viewing angles, confirm the feasibility of the developed vision system, which can later be integrated into a ledge-climbing robot as the basis for path planning.
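The offload architecture described above (the robot captures a photo, sends it over a wireless socket link to a server, and the server runs the detector) can be sketched roughly as follows. This is a minimal illustration only: the length-prefixed framing, the function names, and the use of a loopback socket pair in place of the real Wi-Fi link are all assumptions for demonstration, not the thesis's actual code.

```python
import socket
import struct

def send_image(sock: socket.socket, data: bytes) -> None:
    """Robot side: send one image payload with a 4-byte big-endian length prefix,
    so the server knows where each photo ends on the stream."""
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_image(sock: socket.socket) -> bytes:
    """Server side: read the 4-byte length header, then exactly that many bytes."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """TCP recv() may return partial data; loop until n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

if __name__ == "__main__":
    # Loopback demo: a socketpair stands in for the robot-to-server wireless link.
    robot, server = socket.socketpair()
    payload = b"\xff\xd8 fake JPEG bytes \xff\xd9"
    send_image(robot, payload)
    received = recv_image(server)
    assert received == payload  # the server would now run the detector on `received`
    robot.close()
    server.close()
```

In a real deployment the server would decode the received bytes into an image, run the Fast R-CNN detector, and send the bounding boxes back over the same socket in a similar framed message.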
Abstract (Chinese) I
Abstract (English) II
Acknowledgments III
Table of Contents IV
List of Figures VI
List of Tables IX
Chapter 1: Introduction 1
1.1 Preface 1
1.2 Literature Review and Research Motivation 9
1.3 Contributions and Organization 10
Chapter 2: Object Recognition and Distance Measurement 12
2.1 Image Processing and Laser Ranging 12
2.2 3D Stereo Imaging 13
2.2.1 Binocular Stereo Imaging 14
2.2.2 Depth-Camera 3D Imaging 17
2.3 Plane Segmentation 19
2.4 Convolutional Neural Networks 23
2.5 Object Detection with Convolutional Neural Networks 26
2.5.1 Classic Object Detection Architectures 27
2.5.2 From CNN to the Fast R-CNN Architecture 29
2.5.3 Transfer Learning 31
2.6 Neural Network Accuracy 34
2.6.1 Precision and Recall 34
2.6.2 Bounding-Box Overlap 35
2.6.3 Average Precision 36
Chapter 3: System and Experimental Procedure 41
3.1 System Architecture 41
3.1.1 Socket Programming Interface 43
3.1.2 Software System Flow 46
3.2 Data Collection 47
3.2.1 Data Augmentation 48
3.2.2 Data Labeling 49
3.3 Neural Network Training 50
3.4 System Integration 52
3.4.1 Robot Side 53
3.4.2 Wireless Transmission 54
3.4.3 Server Side 55
3.5 Experimental Method 56
Chapter 4: Experimental Results and Discussion 59
4.1 Object Detection Accuracy 59
4.2 Distance Measurement Results 62
4.3 Wireless Network Transmission 65
4.4 Experimental Results 68
4.4.1 Tests under Different Light Sources 69
4.4.2 Interference and Noise 73
4.4.3 Deflection Angle Tests 74
4.5 Discussion of Experimental Results 79
Chapter 5: Conclusions and Future Work 81
References 83