
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 陸宇泰
Author (English): LU, YU-TAI
Title: 應用於多目標偵測之軟性夾爪與力度控制系統
Title (English): Soft Gripper and Force Control System for Multi-Target Object Detection
Advisor: 沈金鐘
Advisor (English): SHEN, JING-CHUNG
Committee Members: 陳建璋、沈金鐘、李政道
Committee Members (English): CHEN, CHIEN-CHANG; SHEN, JING-CHUNG; LEE, JHENG-DAO
Oral Defense Date: 2022-07-12
Degree: Master's
Institution: National Formosa University (國立虎尾科技大學)
Department: Master's Program, Department of Automation Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2022
Graduation Academic Year: 110
Language: Chinese
Pages: 47
Chinese Keywords: 物件偵測、機器手、軟抓取器
English Keywords: Object detection; Robotic arms; Soft gripper
Statistics:
  • Cited: 0
  • Views: 263
  • Score:
  • Downloads: 49
  • Bookmarked: 0
As modern robotics matures, intelligent machines are reshaping the way we live. Within research on intelligent robotic arms, unmanned stores and automated logistics are fields with strong development potential: besides freeing workers from repetitive tasks, they can operate nearly around the clock, and under the current pandemic their potential for epidemic-prevention applications is even more apparent. Moving toward full automation inevitably requires robotic arms to transport goods, so recognizing products and picking them up stably is one of the key technologies for fully automated, unmanned operation. Highly adaptive soft grippers and controllers are one solution to the problem of grasping irregular or non-rigid objects. Accordingly, this study investigates how to combine object detection, a technique from artificial intelligence, with current control to achieve force control and recognition when a soft gripper handles different objects.
The study targets common daily necessities. By collecting image data of the test items, they can be classified into seven main shape categories according to their shape features. A current sensor is used to analyze the current the soft gripper draws when grasping different objects, which in turn controls the force applied to each object while respecting the maximum force the object can withstand. An object detection model built on EfficientDet then locates and recognizes the objects.
Finally, a human-machine interface integrates real-time image capture, the control interface, object information, and the deep learning model. In the final results, the gripper system developed in this study achieves an actual recognition accuracy of 94.7% at the default setting (IoU = 0.5) while processing at 12 FPS, and grasps objects stably.
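The recognition accuracy above is reported at the default intersection-over-union (IoU) threshold of 0.5. As a minimal illustration (not code from the thesis), IoU for two axis-aligned boxes can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])            # intersection rectangle
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)  # union = sum of areas - overlap

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # → 0.143
```

At the default setting, a detected box counts as a correct detection when its IoU with the ground-truth box is at least 0.5.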

With rapid advances in machine learning, robotic technologies have become increasingly intelligent and are changing our lives. In this research field, unmanned stores are a promising new model for retail: they can operate almost around the clock without on-site staff, and contact-free shopping reduces the risk of disease transmission. Moving toward fully automated unmanned stores requires identifying products and picking them with robotic arms, so robotic arms with highly adaptive grippers are one solution to the problem of grasping irregular or non-rigid objects such as retail goods.
This thesis focuses on combining current control with multi-target detection to pick goods with both force control and recognition. The image data are collected from common daily necessities, which we classify by shape into seven types.
We use an EfficientDet-based detection model to locate and recognize objects, then use a current sensor to measure current variation and estimate the exerted force, allowing the gripper to pick a product and hold it steadily.
Finally, we designed a human-machine interface that presents the control panel together with object information captured by the camera and recognized in real time. The resulting gripper system, built on EfficientDet with current-based force control, reaches an mAP of 94.7, computed with the COCO test-dev evaluation indicator at the default setting (IoU = 0.5), while running at 12 FPS.
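The current-based force control described above can be sketched as a simple threshold loop: close the gripper until the sensed motor current (a proxy for grip force) reaches a per-class limit. The class names, current limits, and function names below are hypothetical illustrations, not the thesis implementation:

```python
# Hypothetical per-class current limits (mA); softer objects tolerate less force.
CURRENT_LIMITS_MA = {
    "bottle": 450,
    "box": 600,
    "bag": 300,
}

def close_gripper(shape_class, read_current_ma, step_motor, max_steps=200):
    """Step the gripper closed until the sensed current hits the class limit.

    read_current_ma: callable returning the latest current sample (mA)
    step_motor:      callable advancing the gripper motor by one step
    Returns the number of steps taken before the limit was reached.
    """
    limit = CURRENT_LIMITS_MA[shape_class]
    for step in range(max_steps):
        if read_current_ma() >= limit:   # contact-force proxy reached: stop
            return step
        step_motor()
    return max_steps                     # never hit the limit (no contact)

# Simulated sensor: current rises once the fingers meet resistance.
samples = iter([50, 60, 80, 150, 320, 480, 650])
steps = close_gripper("bottle", lambda: next(samples), lambda: None)
print(steps)  # → 5 (stops at the first sample >= 450 mA)
```

In the actual system, the detected object's shape class would select the limit, and the readings would come from the current sensor rather than a simulated list.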

Abstract (Chinese) ………………………………………………… i
Abstract (English) ………………………………………………… ii
Acknowledgments ………………………………………………… iv
Table of Contents ………………………………………………… v
List of Tables ……………………………………………………… vii
List of Figures …………………………………………………… viii
List of Symbols …………………………………………………… x
Chapter 1 Introduction …………………………………………… 1
1.1 Background and Motivation ………………………………… 1
1.2 Thesis Outline ………………………………………………… 1
1.3 Soft Robotic Arms …………………………………………… 2
1.4 Artificial Intelligence and Robotic Hands ………………… 2
Chapter 2 Software Architecture and AI Model Configuration … 3
2.1 Image Dataset ………………………………………………… 3
2.2 Object Detection ……………………………………………… 4
2.3 Selection of the AI Model …………………………………… 5
2.4 EfficientDet …………………………………………………… 6
2.5 Model Parameters …………………………………………… 7
Chapter 3 Gripper System Architecture and Hardware Configuration … 10
3.1 Gripper System and Physical Architecture ………………… 10
3.2 Comparison and Selection of Gripper Mechanisms ………… 11
3.3 Selection of the DC Motor …………………………………… 13
3.4 Selection of the Current Sensor ……………………………… 14
3.5 Microcontroller Board ………………………………………… 15
3.6 Gripper Mount ………………………………………………… 15
3.7 Mechanism Assembly and Operation ………………………… 16
Chapter 4 Experimental Results ………………………………… 17
4.1 Gripper Testing ……………………………………………… 17
4.2 Gripper Control ……………………………………………… 20
4.3 Measurement Results ………………………………………… 21
4.4 Object Recognition Results ………………………………… 29
Chapter 5 Conclusions and Future Work ………………………… 33
5.1 Conclusions …………………………………………………… 33
5.2 Future Work …………………………………………………… 33
References ………………………………………………………… 34
Appendix 1 ………………………………………………………… 37
Appendix 2 ………………………………………………………… 40

