
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Student: 劉紘均
Student (English): LIU, HUNG-CHUN
Thesis Title: 基於深度學習之輔助取物系統: 夾取位置及姿態生成
Thesis Title (English): Deep-Learning-Based Assistant Fetching System: Grasp Position and Pose Generation
Advisor: 鄭穎仁
Advisor (English): CHENG, YING-JEN
Committee Members: 蔡舜宏、詹景裕、陳翔傑、鄭穎仁
Committee Members (English): TSAI, SHUN-HUNG; JAN, GENE-EU; CHEN, HSIANG-CHIEH; CHENG, YING-JEN
Oral Defense Date: 2019-07-19
Degree: Master's
Institution: 國立臺北大學 (National Taipei University)
Department: 電機工程學系 (Department of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2019
Graduation Academic Year: 107 (ROC calendar)
Language: English
Number of Pages: 45
Chinese Keywords: 深度學習、物件辨識、抓取點預測、機械手臂
Keywords (English): Deep Learning; Object Detection; Detecting Robotic Grasps; Robotic Arm
Usage statistics:
  • Cited by: 0
  • Views: 803
  • Downloads: 117
  • Bookmarked: 0
As living standards and medical technology have advanced worldwide, a wave of population ageing has swept the globe, and both developed and developing countries face the serious challenge of becoming aged societies. Home-care robots are therefore an important direction in intelligent-robot development, and mobility-assistance robots are an especially important part of that field.
To this end, this study designs a deep-learning-based assistant fetching control system and mounts it on a remotely operated mobile robot, so that elderly people with limited mobility, or disabled people who cannot walk, can fetch out-of-reach objects through a smart mobile device. The center of the system is a high-performance laptop connected to a Kinect sensor, which provides color and depth images. A YOLO object-detection deep neural network selects the object to be fetched, and a grasp-detection deep neural network then finds the optimal grasp point on that object. This study proposes two methods for segmenting the color and depth images during preprocessing, so that the input images meet the requirements of the grasp-detection network. The normal vector at the center of the grasp point is then computed so that the robotic arm can grasp the object with the correct pose. Finally, this information is sent to a multi-axis robotic arm, which grasps the specified object. Because teleoperating a multi-axis robotic arm is not easy for ordinary users, this study also designs a simple user interface: the user only has to select the desired object on the screen, and the control center of the assistant fetching system uses the 3D image information fed back by the stereo vision sensor, together with the trained deep-learning models, to control the robotic arm and automatically grasp the selected object.
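For a concrete picture of the data flow just described (Kinect frames, then YOLO detection, then grasp prediction), here is a minimal Python sketch. The two network calls (yolo_detect, predict_grasp) are hypothetical placeholders standing in for the thesis's trained networks, not its actual code; only the hand-off between stages follows the text.

import numpy as np

def yolo_detect(color_img, target_class):
    # Placeholder for the YOLO detector the thesis trains on its own
    # dataset: returns the bounding box (x, y, w, h) of the selected object.
    return (200, 150, 80, 60)  # dummy box, for illustration only

def predict_grasp(color_patch, depth_patch):
    # Placeholder for the grasp-detection network: returns the grasp
    # center (u, v) inside the patch and the gripper angle in radians.
    return (40, 30), 0.5       # dummy grasp, for illustration only

color = np.zeros((424, 512, 3), np.uint8)     # Kinect color frame, assumed
depth = np.ones((424, 512), np.float32)       #   aligned to the depth frame

x, y, w, h = yolo_detect(color, "cup")        # 1. user picks "cup" in the UI
color_roi = color[y:y + h, x:x + w]           # 2. crop color and depth to
depth_roi = depth[y:y + h, x:x + w]           #    the detected box
(gu, gv), angle = predict_grasp(color_roi, depth_roi)  # 3. optimal grasp
grasp_px = (x + gu, y + gv)                   # 4. grasp point in image pixels
print(grasp_px, angle)                        # next: surface normal, arm pose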

With the rapid advancement of science and technology, robots have gradually taken over many human jobs. Motivated by the worldwide trend of population ageing and the goal of helping physically challenged people, this thesis designs a deep-learning-based assistant fetching system. With the system mounted on a remote-controlled mobile robot, the elderly and the disabled can operate the robot through their smart devices (a smartphone or tablet) to fetch things they cannot reach. The system is composed of a high-performance laptop as the control center, a Kinect V2 camera for obtaining color/depth images, and a robotic arm for fetching the object. First, a YOLO object-detection network trained on our dataset selects the object in the color image. A grasp-detection deep neural network is then used to find the optimal robotic grasp from the preprocessed color and depth images. Next, the plane normal vector at the grasp position is calculated so that the robotic arm can fetch the target successfully. Finally, experimental results demonstrate the practicality of the proposed deep-learning-based assistant fetching system.
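The normal-vector step in the abstract can be illustrated concretely. One standard way to estimate the plane normal at the grasp position is a least-squares plane fit over a small depth patch; the sketch below assumes this approach (the thesis may use a different formulation) and, for simplicity, fits in pixel/depth coordinates rather than metric camera coordinates.

import numpy as np

def grasp_plane_normal(depth, u, v, win=7):
    # Estimate the surface normal at pixel (u, v) of a depth image by
    # fitting a plane z = a*u + b*v + c to the surrounding patch
    # (least squares). Returns a unit normal pointing toward the camera.
    h, w = depth.shape
    u0, u1 = max(u - win, 0), min(u + win + 1, w)
    v0, v1 = max(v - win, 0), min(v + win + 1, h)
    us, vs = np.meshgrid(np.arange(u0, u1), np.arange(v0, v1))
    zs = depth[v0:v1, u0:u1]
    valid = zs > 0                      # Kinect reports 0 for missing depth
    A = np.column_stack([us[valid], vs[valid], np.ones(valid.sum())])
    a, b, c = np.linalg.lstsq(A, zs[valid], rcond=None)[0]
    n = np.array([-a, -b, 1.0])         # normal of the plane z = a*u + b*v + c
    return n / np.linalg.norm(n)

# Toy usage: a synthetic tilted plane; the recovered normal reflects its slope.
yy, xx = np.mgrid[0:100, 0:100]
depth = 0.02 * xx + 0.01 * yy + 1.0
print(grasp_plane_normal(depth, 50, 50))    # approx. (-0.02, -0.01, 1) / norm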
ACKNOWLEDGEMENTS I
CHINESE ABSTRACT II
ABSTRACT III
OUTLINE IV
LIST OF FIGURES VI
LIST OF TABLES VII
CHAPTER 1 INTRODUCTION 1
1-1 BACKGROUND AND MOTIVATION 1
1-2 MAIN TASKS 7
1-3 ORGANIZATION 8
CHAPTER 2 PRELIMINARY 9
2-1 KINECT SENSOR 9
2-2 YOLO 11
2-3 FUZZY C-MEANS CLUSTERING 14
2-4 DEEP LEARNING FOR DETECTING ROBOTIC GRASPS 16
CHAPTER 3 DEEP-LEARNING-BASED ASSISTANT FETCHING SYSTEM 19
3-1 FRAME CAPTURING AND OBJECT DETECTION BY YOLO 20
3-2 DETECTING ROBOTIC GRASP BY DEEP LEARNING NETWORK 21
3-2-1 DATA PREPROCESSING 21
3-2-2 RESCORING 25
3-3 CALCULATING PLANE NORMAL VECTOR OF THE GRASP POSITION 28
CHAPTER 4 EXPERIMENT RESULTS 31
4-1 EXPERIMENTAL CIRCUMSTANCE 31
4-2 EXPERIMENT 32
4-3 OTHER GRASPS 35
CHAPTER 5 CONCLUSION AND FUTURE WORK 40
5-1 CONCLUSION 40
5-2 FUTURE WORK 40
REFERENCES 42

