
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 李建翰
Author (English): Jian-Han Li
Title: 人工智慧影像辨識技術於臺灣香檬採收之應用
Title (English): Application of Artificial Intelligence Image Recognition Technology in Citrus Taiwanica Harvesting
Advisor: 侯帝光
Advisor (English): Ti-Kuang Hou
Committee members: 任貽明, 沈銘原, 潘國興, 侯帝光
Oral defense date: 2021-06-17
Degree: Master's
Institution: National United University
Department: Master's Program, Department of Mechanical Engineering
Discipline: Engineering
Field: Mechanical Engineering
Document type: Academic thesis
Year of publication: 2021
Academic year of graduation: 109
Language: Chinese
Pages: 61
Keywords (Chinese): 人工智慧, 卷積神經網路, 物聯網(IoT), 臺灣香檬, 機械手臂, 採收自動化
Keywords (English): artificial intelligence (AI), convolutional neural network, Internet of Things (IoT), citrus taiwanica, robotic arm, automatic harvesting
Statistics:
  • Cited by: 0
  • Views: 144
  • Rating:
  • Downloads: 0
  • Added to bookmark lists: 1
Abstract (translated from Chinese): Using a YOLOv3 convolutional neural network model, a robotic arm with its control unit, and a gripper assembly, this thesis implements automated harvesting of citrus taiwanica in an experimental field. A training set of 1,620 real-scene images and a validation set of 185 images were used; the images cover sunny, cloudy, backlit, and shadowed conditions. The trained neural network model achieved a recall of 0.723, a precision of 1.000, and an F1-score of 0.839. After the model was combined with real-time camera input and a four-joint robotic arm and deployed in the experimental field for automated harvesting, the average model-verification success rate was 91.85%, the gripping success rate of automated harvesting was 93.34%, a single automated harvesting cycle took 20.90 seconds, and the maximum daily harvest was estimated at 78.48 kg (130.80 catties). Following the Internet of Things (IoT) concept, real-time images and harvest data collected during automated harvesting are uploaded to the Internet for remote monitoring; in the future, the system could be paired with a logistics vehicle system to complete automated harvesting in the field. Addressing the uncertainty of appearance-based recognition in agricultural harvesting and the labor shortage in citrus taiwanica harvesting, this thesis proposes a simple and feasible technique that contributes to the citrus taiwanica industry.
This paper uses a YOLOv3 convolutional neural network model, a robotic arm with its control unit, and gripper components to realize automatic harvesting of citrus taiwanica in the laboratory. The study used 1,620 images of real scenes for training and 185 images for validation; these images cover sunny, cloudy, backlit, and shadowed conditions. The experimental results show that the recall of the neural network model is 0.723, the precision is 1.000, and the F1-score is 0.839. The neural network model is embedded in the automatic harvesting system, which combines real-time image input from a camera with a four-joint robotic arm; the system was installed in the experimental setup. The average success rate of model verification is 91.85%, the success rate of automatic harvesting is 93.34%, a single automated harvesting cycle takes 20.90 seconds, and the maximum quantity harvested in a single day is estimated to be 78.48 kg. This article proposes a simple and feasible technology that contributes to the citrus taiwanica industry by addressing the recognition uncertainty and labor shortage of citrus taiwanica harvesting.
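The reported F1-score follows directly from the reported precision and recall (F1 is their harmonic mean), and the per-cycle time and success rate imply a daily pick count. The following sketch checks these numbers; the eight-hour working day is an assumption made here for illustration, as the thesis record does not state the operating hours behind the 78.48 kg estimate.

```python
# Sanity-check the detection metrics and throughput figures from the abstract.

precision = 1.000   # reported precision
recall = 0.723      # reported recall

# F1-score is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(f"F1-score: {f1:.3f}")  # matches the reported 0.839

cycle_s = 20.90     # reported seconds per automated harvesting cycle
success = 0.9334    # reported gripping success rate
hours = 8           # ASSUMPTION: hours of operation per day (not in the record)

# Successful picks per day under the assumed schedule.
picks_per_day = hours * 3600 / cycle_s * success
print(f"Estimated successful picks per day: {picks_per_day:.0f}")
```

Dividing the reported 78.48 kg daily estimate by a pick count like this would give the implied average fruit mass, but since the actual operating schedule is not stated in the record, the pick count above is only a rough consistency check.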
Abstract (Chinese)
Abstract (English)
Acknowledgments
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background
1.2 Research Motivation
1.3 Research Objectives
Chapter 2 Literature Review
2.1 Agricultural Harvesting Machines
2.2 Convolutional Neural Networks (CNN)
2.2.1 Components of a Convolutional Neural Network
2.2.1.1 Convolution Layer
2.2.1.2 Pooling Layer
2.2.1.3 Fully Connected Layer
2.2.2 VGG-Net (Very Deep Convolutional Networks for Large-Scale Image Recognition)
2.2.3 GoogLeNet (Going Deeper with Convolutions)
2.2.4 R-CNN (Region-based Convolutional Neural Network)
2.2.5 Faster R-CNN (Faster Region-based Convolutional Neural Network)
2.2.6 Mask R-CNN (Mask Region-based Convolutional Neural Network)
2.2.7 SSD (Single Shot MultiBox Detector)
2.2.8 YOLO (You Only Look Once)
2.3 Machine Control
2.4 The Internet of Things (IoT) Concept
Chapter 3 Research Principles and Methods
3.1 The YOLOv3 Neural Network Model
3.1.1 Darknet-53 Network Structure
3.1.2 Object Detection
3.2 Image Collection and Dataset
3.3 Experimental Equipment
3.3.1 Image Recognition Equipment
3.3.2 Harvesting Mechanism
3.3.3 Harvesting Mechanism Controller
3.4 YOLOv3 Model Training and Validation
3.4.1 Evaluation Metrics
3.4.2 Training and Validation Results
3.5 Real-Time Image Input
3.6 Harvesting Method and Control
3.7 Experimental Field and Environment
3.8 Wired and Wireless Transmission
3.8.1 Comparison of Methods
3.8.2 Wireless Transmission Improvement and Application
Chapter 4 Results and Discussion
4.1 Validation of Citrus Taiwanica Recognition Results in the Experimental Field
4.2 Validation of Robotic Arm Gripping Results
4.3 Discussion of Results
4.4 Suggestions and Improvements
Chapter 5 Conclusion
References


Electronic full text (publicly available online from 2026-07-12)