National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Author: CHANG, YAO-JEN (張曜任)
Title: Intelligent Apple Sorting System Based on the YOLOv8 Model (基於YOLOv8模型之智慧蘋果篩選系統)
Advisor: Twu, Shih-Hsiung (涂世雄)
Committee Members: Lee, Wei-Ping; Wang, Jia-Yin; Twu, Shih-Hsiung
Oral Defense Date: 2024-07-19
Degree: Master's
Institution: Chung Yuan Christian University (中原大學)
Department: Department of Electrical Engineering (電機工程學系)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Publication Year: 2024
Graduation Academic Year: 112
Language: English
Pages: 103
Keywords (Chinese): 人工智慧; YOLOv8; 物聯網; 自動化; 蘋果識別
Keywords (English): Artificial Intelligence; YOLOv8; Internet of Things; Automation; Apple Recognition
This study proposes an intelligent apple sorting system based on the YOLOv8 model, with apples as the primary detection target. By integrating an image recognition system, automated equipment, and a sensor detection system, we designed a system capable of accurately sorting apples by quality and collecting them correctly. The system is also equipped with an Internet of Things (IoT) subsystem for monitoring environmental conditions; it continuously detects the current state of the environment and uploads the data to a cloud database.

The method proposed in this study is divided into four main parts. The first part covers the hardware design of the apple sorting system, ensuring a smooth workflow and accurate delivery of apples to their respective classification areas. The second part involves training and analyzing the YOLOv8 model, using various types of training data and comparing the results across different training sets. The third part addresses the accuracy of apple recognition, including practical tests of the apple-sorting success rate. The fourth part covers the testing and methodology of the temperature and humidity sensing system and the methane concentration sensing system. This system connects to a cloud database to upload the detected values and includes a human-machine interface so that relevant personnel can conveniently observe environmental conditions. When abnormal values are detected, the system sends danger alert notifications to LINE through the Make service.
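The recognition-accuracy evaluation in the third part rests on standard detection metrics: intersection over union (IoU), precision, and recall (cf. Sections 2.2–2.3 and Table 2-2 of the thesis). A minimal, framework-independent sketch of these computations follows; the function names and the (x1, y1, x2, y2) box format are illustrative choices, not taken from the thesis:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])          # left edge of the overlap
    iy1 = max(box_a[1], box_b[1])          # top edge of the overlap
    ix2 = min(box_a[2], box_b[2])          # right edge of the overlap
    iy2 = min(box_a[3], box_b[3])          # bottom edge of the overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # area of overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                 # area of union
    return inter / union if union else 0.0

def precision_recall(tp, fp, fn):
    """Precision and recall from confusion-matrix counts (cf. Table 2-2)."""
    precision = tp / (tp + fp) if tp + fp else 0.0  # correct detections / all detections
    recall = tp / (tp + fn) if tp + fn else 0.0     # correct detections / all ground truths
    return precision, recall
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a chosen threshold (commonly 0.5); precision–recall pairs swept over confidence thresholds yield the P-R curves shown in Figures 4-19 through 4-21.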

The contributions of the method proposed in this study are as follows:
1. Convenience and Efficiency in Apple Sorting:
The automated system makes apple sorting more efficient and renders tedious tasks more convenient.
2. Reduction in Labor Costs:
This system can effectively reduce labor costs by replacing human labor with machines, addressing the problems caused by the shrinking agricultural workforce.
3. Promotion of Smart Agriculture:
It enables agricultural communities to move toward smart agriculture, fostering its further development in the future.
4. Visualized Environmental Monitoring:
Relevant personnel can monitor changes in environmental conditions at any time, helping to prevent problems before they occur.
5. Safety:
When abnormal values are detected in the environment, the system can promptly issue danger alerts so that relevant personnel can take action.
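The monitoring and alert path behind contributions 4 and 5 — sensor readings compared against thresholds, uploaded to ThingSpeak over HTTP GET (cf. Figure 3-36), and escalated to LINE via the Make service — can be sketched as follows. The thresholds, field assignments, and function names are hypothetical illustrations, not values specified in the thesis:

```python
# Hypothetical alarm thresholds, for illustration only; the thesis does not
# state its actual limits here.
TEMP_MAX_C = 30.0
HUMIDITY_MAX_PCT = 80.0
METHANE_MAX_PPM = 1000.0

def thingspeak_update_url(api_key, temperature, humidity, methane):
    """Build a ThingSpeak channel update request (HTTP GET).

    The fieldN-to-sensor mapping is an assumption for this sketch.
    """
    return ("https://api.thingspeak.com/update"
            f"?api_key={api_key}"
            f"&field1={temperature}&field2={humidity}&field3={methane}")

def check_alerts(temperature, humidity, methane):
    """Return alert messages for readings that exceed the thresholds.

    In the real system these messages would be forwarded to LINE
    through a Make scenario rather than returned to the caller.
    """
    alerts = []
    if temperature > TEMP_MAX_C:
        alerts.append(f"Temperature abnormal: {temperature} C")
    if humidity > HUMIDITY_MAX_PCT:
        alerts.append(f"Humidity abnormal: {humidity} %")
    if methane > METHANE_MAX_PPM:
        alerts.append(f"Methane abnormal: {methane} ppm")
    return alerts
```

In use, the controller would periodically call `thingspeak_update_url` with fresh DHT11 and MQ-4 readings, issue the GET request, and hand any `check_alerts` output to the notification pipeline.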
中文摘要 I
Abstract III
致謝 V
Contents VI
List of Figures VIII
List of Tables XII

Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Motivation and Purposes 3
1.3 Organization of This Thesis 4

Chapter 2 Research Knowledge and Background 5
2.1 Convolutional Neural Network (CNN) 5
2.2 Object Detection Algorithm 10
2.3 Model Evaluation Metrics 19
2.4 Internet of Things (IoT) 23

Chapter 3 Intelligent Apple Sorting System 29
3.1 System Architecture 29
3.2 Conveyor Belt Design 43
3.3 Deep Learning 47
3.4 Environmental Monitoring 53
3.5 ThingSpeak database 54
3.6 Make service 57

Chapter 4 The Experimental Results and Analysis 61
4.1 Apple Sorting System Assembly 61
4.2 Model Training Analysis 62
4.3 Apple Recognition Test 72
4.4 Apple Classification Results 77
4.5 ThingSpeak cloud database 80
4.6 Integrating LINE notifications into the Make service 82

Chapter 5 Conclusions and Future Research 86
5.1 Conclusions 86
5.2 Future Research 86

References 87


List of Figures
Figure 1-1 Taiwan Agricultural Employment Population Statistics Chart[46] 2
Figure 2-1 The diagram illustrates the relationship between primary visual cortex related areas and the layers of a convolutional neural network 6
Figure 2-2 Convolution Operation Diagram 7
Figure 2-3 Extracting Object Boundaries Using a Feature Detector 8
Figure 2-4 Max pooling and Average pooling Schematic Diagram[49] 9
Figure 2-5 Fully Connected Layer Schematic Diagram 9
Figure 2-6 Diagram of Computer Vision Tasks[51] 11
Figure 2-7 One-stage Diagram 12
Figure 2-8 Two-stage Diagram 12
Figure 2-9 Comparison of frame rates among YOLO and other state-of-the-art object detection algorithms[53] 14
Figure 2-10 Diagram of YOLO Operation 15
Figure 2-11 Architecture Diagram of YOLOv1 15
Figure 2-12 Timeline of the YOLO Series 16
Figure 2-13 YOLOv8 Training Methods 16
Figure 2-14 YOLO Series Performance Comparison Chart[56] 17
Figure 2-15 YOLOv8 Architecture Diagram[55] 18
Figure 2-16 Area of Overlap 22
Figure 2-17 Area of Union 22
Figure 2-18 IoT System Diagram 24
Figure 2-19 IoT Architecture Diagram 26
Figure 2-20 ThingSpeak Homepage Screen 27
Figure 2-21 Make Interface Diagram 28
Figure 3-1 System Flowchart 29
Figure 3-2 Raspberry Pi 4 Model B 8GB 31
Figure 3-3 BMduino-UNO 33
Figure 3-4 ESP32 35
Figure 3-5 HC-SR04 36
Figure 3-6 The schematic diagram illustrates the working principle of HC-SR04 36
Figure 3-7 JGB37-520 DC motor L298N motor driver module 37
Figure 3-8 Servo motor paired with C310 HD network camera 37
Figure 3-9 SG90 servo motor paired with sorting rod 38
Figure 3-10 Infrared Sensing Module and Fresh Apple Collection Box 39
Figure 3-11 Infrared Sensing Module and Rotten Apple Collection Box 39
Figure 3-12 LCD Display 40
Figure 3-13 DHT11 41
Figure 3-14 MQ-4 42
Figure 3-15 First Detection Point 43
Figure 3-16 First-Side Recognition 44
Figure 3-17 Second-Side Recognition 45
Figure 3-18 Second Image Recognition Point Flowchart 45
Figure 3-19 Third Classification Point 46
Figure 3-20 The sorting collection area 47
Figure 3-21 Training process diagram for YOLOv8 model 48
Figure 3-22 Dataset samples 48
Figure 3-23 Roboflow webpage 49
Figure 3-24 Upload dataset samples 49
Figure 3-25 Annotation of dataset samples 50
Figure 3-26 Image Augmentation 51
Figure 3-27 Dataset Segmentation 51
Figure 3-28 Exporting the training set format in YOLOv8 51
Figure 3-29 Model Training Process 52
Figure 3-30 Training Results 52
Figure 3-31 DHT11 Sensor Installation 53
Figure 3-32 MQ-4 Sensor Installation 54
Figure 3-33 Create a new channel 55
Figure 3-34 Create a channel name 55
Figure 3-35 WiFi connection 56
Figure 3-36 Http Get 56
Figure 3-37 Data upload 56
Figure 3-38 Chart Configuration 57
Figure 3-39 Create a Scenario 57
Figure 3-40 Trigger Service 58
Figure 3-41 Action Service 58
Figure 3-42 Trigger Service Configuration 58
Figure 3-43 Action Service Configuration 59
Figure 3-44 ThingHTTP Configuration 59
Figure 3-45 React Configuration 60
Figure 3-46 Flowchart of the trigger service 60
Figure 4-1 Front view of the system 61
Figure 4-2 Side view of the system 61
Figure 4-3 Rear view of the system 62
Figure 4-4 240-image training dataset 63
Figure 4-5 480-image training dataset 63
Figure 4-6 1000-image training dataset 63
Figure 4-7 Loss Chart for Model 1 64
Figure 4-8 Loss Chart for Model 2 65
Figure 4-9 Loss Chart for Model 3 65
Figure 4-10 Type of Model 1 66
Figure 4-11 Type of Model 2 66
Figure 4-12 Type of Model 3 66
Figure 4-13 Confusion Matrix for Model 1 67
Figure 4-14 Confusion Matrix for Model 2 68
Figure 4-15 Confusion Matrix for Model 3 68
Figure 4-16 Loss Chart for Model 1 69
Figure 4-17 Loss Chart for Model 2 69
Figure 4-18 Loss Chart for Model 3 70
Figure 4-19 The P-R curve of Model 1 71
Figure 4-20 The P-R curve of Model 2 71
Figure 4-21 The P-R curve of Model 3 71
Figure 4-22 Upright Diagram 1 73
Figure 4-23 Upright Diagram 2 73
Figure 4-24 Upright Diagram 3 73
Figure 4-25 Upright Diagram 4 73
Figure 4-26 Upright Diagram 5 73
Figure 4-27 Upright Diagram 6 73
Figure 4-28 Cutting Diagram 1 74
Figure 4-29 Cutting Diagram 2 74
Figure 4-30 Cutting Diagram 3 74
Figure 4-31 Cutting Diagram 4 74
Figure 4-32 Cutting Diagram 5 74
Figure 4-33 Cutting Diagram 6 74
Figure 4-34 Cutting Diagram 7 75
Figure 4-35 Cutting Diagram 8 75
Figure 4-36 Cutting Diagram 9 75
Figure 4-37 Cutting Diagram 10 75
Figure 4-38 Horizontal Diagram 1 76
Figure 4-39 Horizontal Diagram 2 76
Figure 4-40 Horizontal Diagram 3 76
Figure 4-41 Horizontal Diagram 4 76
Figure 4-42 Horizontal Diagram 5 76
Figure 4-43 Horizontal Diagram 6 76
Figure 4-44 LCD displays "Rotten Apple" 77
Figure 4-45 LCD displays "Fresh Apple" 78
Figure 4-46 LCD displays "No Apple" 78
Figure 4-47 Classification Process Diagram 79
Figure 4-48 Indicator light lights up 80
Figure 4-49 Diagram of the collection box detection process 80
Figure 4-50 Human-machine interface 81
Figure 4-51 Activate the Make service 82
Figure 4-52 The temperature and humidity levels exceed the standards 82
Figure 4-53 Temperature and Humidity warning light 83
Figure 4-54 Temperature Data Transmission 83
Figure 4-55 Humidity Data Transmission 83
Figure 4-56 Temperature and Humidity Alert Notification 84
Figure 4-57 Methane concentration exceeding standards 84
Figure 4-58 Methane concentration warning light 85
Figure 4-59 The red LED is on 85
Figure 4-60 Methane concentration Data Transmission 85
Figure 4-61 Methane concentration Alert Notification 85


List of Tables
Table 2-1 Confusion Matrix 19
Table 2-2 Definitions of TP, TN, FP, FN 20
Table 3-1 Specifications Table for Raspberry Pi 4 Model B 8GB 31
Table 3-2 Comparison Table of BMduino-UNO and Arduino UNO R3 Specifications 34
Table 3-3 Specification Table of ESP32 35
Table 3-4 DHT11 Specification Table 40
Table 3-5 MQ-4 Specification Table 42
Table 4-1 Model Comparison Table 64
Table 4-2 Comparison Table of Model Differences 67
Table 4-3 Comparison Table of Models 72
Table 4-4 Display table 78
[1]楊碧容, "地方農業加工產業行銷策略之研究-以台南市東山區龍眼加工產業為例," 2012.
[2]呂政道, "台灣農民年齡與農耕戶勞動生產力之關係," 國立臺灣大學農業經濟學系學位論文, vol. 2009, pp. 1-91, 2009.
[3]楊智凱, "智慧農業發展現況," 菇類智慧化生產與農場經營管理研討會專刊, 2019.
[4]艾萬金 et al., "智慧農業雲端稻草人監控系統," 2023.
[5]S. Namani and B. Gonen, "Smart agriculture based on IoT and cloud computing," in 2020 3rd International Conference on Information and Computer Technologies (ICICT), 2020: IEEE, pp. 553-556.
[6]W.-M. Cheng et al., "A real and novel smart agriculture implementation with IoT technology," in 2021 9th International Conference on Orange Technology (ICOT), 2021: IEEE, pp. 1-4.
[7]G. K. Shyam and I. Chandrakar, "A Novel Approach to Edge-Fog-Cloud Based Smart Agriculture," in 2023 International Conference on New Frontiers in Communication, Automation, Management and Security (ICCAMS), 2023, vol. 1: IEEE, pp. 1-5.
[8]Z. A. Haq, Z. A. Jaffery, and S. Mehfuz, "A Novel Framework for Smart Agriculture using Internet of Things and Enabling Technologies," in 2022 International Conference for Advancement in Technology (ICONAT), 2022: IEEE, pp. 1-6.
[9]S. K. Swarnkar, L. Dewangan, O. Dewangan, T. M. Prajapati, and F. Rabbi, "AI-enabled Crop Health Monitoring and Nutrient Management in Smart Agriculture," in 2023 6th International Conference on Contemporary Computing and Informatics (IC3I), 2023, vol. 6: IEEE, pp. 2679-2683.
[10]G. Saxena, C. Sahu, A. Joshi, and S. P. Mohanty, "Food-Care: An Optoelectronic Device for Detection of Fertilizer Contamination in Fruits and Vegetables in Smart Agriculture Framework," in 2022 IEEE International Symposium on Smart Electronic Systems (iSES), 2022: IEEE, pp. 451-452.
[11]G. Dinesh, A. K. Gupta, M. Nagaseshireddy, P. D. Prasanna, M. S. Varshini, and K. Gowtham, "LoRa-Powered Smart Agriculture System for Monitoring and Controlling," in 2024 IEEE Wireless Antenna and Microwave Symposium (WAMS), 2024: IEEE, pp. 1-6.
[12]S. Lin and X. Qi, "Development of Intelligent Agricultural Automation Based on Computer Vision," in 2023 International Conference on Integrated Intelligence and Communication Systems (ICIICS), 2023: IEEE, pp. 1-6.
[13]Z. Li, X. Bai, C. He, and P. Jiang, "Apple Detection and Yield Estimation Based on YOLOv5," in 2024 7th International Conference on Advanced Algorithms and Control Engineering (ICAACE), 2024: IEEE, pp. 754-758.
[14]H. Zhong and S. Hu, "Target Detection Method of Apple Harvesting Robot Based on Improved YOLO v5," in 2023 35th Chinese Control and Decision Conference (CCDC), 2023: IEEE, pp. 431-435.
[15]M. P. Mathew and T. Y. Mahesh, "Determining the region of apple leaf affected by disease using YOLO V3," in 2021 International conference on communication, control and information sciences (ICCISc), 2021, vol. 1: IEEE, pp. 1-4.
[16]K. Xiong, Q. Li, Y. Meng, and Q. Li, "A Study on Weed Detection Based on Improved Yolo v5," in 2023 4th International Conference on Information Science and Education (ICISE-IE), 2023: IEEE, pp. 1-4.
[17]S. Kumari, A. Gautam, S. Basak, and N. Saxena, "YOLOv8 Based Deep Learning Method for Potholes Detection," in 2023 IEEE International Conference on Computer Vision and Machine Intelligence (CVMI), 2023: IEEE, pp. 1-6.
[18]T.-H. Wu, T.-W. Wang, and Y.-Q. Liu, "Real-time vehicle and distance detection based on improved yolo v5 network," in 2021 3rd World Symposium on Artificial Intelligence (WSAI), 2021: IEEE, pp. 24-28.
[19]A. K. Aziz, M. D. Maulana, R. F. Adawiyah, R. F. Firdaus, L. Novamizanti, and F. Ramdhon, "Comparative Analysis of YOLOv8 Models in Skipjack Fish Quality Assessment System," in 2023 3rd International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA), 2023: IEEE, pp. 237-242.
[20]Z. Li, F. Liu, W. Yang, S. Peng, and J. Zhou, "A survey of convolutional neural networks: analysis, applications, and prospects," IEEE transactions on neural networks and learning systems, vol. 33, no. 12, pp. 6999-7019, 2021.
[21]K. Uchida, M. Tanaka, and M. Okutomi, "Coupled convolution layer for convolutional neural network," Neural Networks, vol. 105, pp. 197-205, 2018.
[22]J. Si, S. L. Harris, and E. Yfantis, "A dynamic ReLU on neural network," in 2018 IEEE 13th Dallas Circuits and Systems Conference (DCAS), 2018: IEEE, pp. 1-6.
[23]P. Singh, P. Raj, and V. P. Namboodiri, "EDS pooling layer," Image and Vision Computing, vol. 98, p. 103923, 2020.
[24]D. Sun, J. Wulff, E. B. Sudderth, H. Pfister, and M. J. Black, "A fully-connected layered model of foreground and background flow," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 2451-2458.
[25]A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, "Deep learning for computer vision: A brief review," Computational intelligence and neuroscience, vol. 2018, no. 1, p. 7068349, 2018.
[26]Y. Zhang, X. Li, F. Wang, B. Wei, and L. Li, "A comprehensive review of one-stage networks for object detection," in 2021 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), 2021: IEEE, pp. 1-6.
[27]C. Liu, Y. Tao, J. Liang, K. Li, and Y. Chen, "Object detection based on YOLO network," in 2018 IEEE 4th information technology and mechatronics engineering conference (ITOEC), 2018: IEEE, pp. 799-803.
[28]S. Zhai, D. Shang, S. Wang, and S. Dong, "DF-SSD: An improved SSD object detection algorithm based on DenseNet and feature fusion," IEEE access, vol. 8, pp. 24344-24357, 2020.
[29]S. Nagaraj, B. Muthiyan, S. Ravi, V. Menezes, K. Kapoor, and H. Jeon, "Edge-based street object detection," in 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), 2017: IEEE, pp. 1-4.
[30]Y. Huo, Z. Xu, S. Chen, Y. Chen, Y. Huang, and N. Zheng, "SqueezeDet-Based Nighttime Traffic Light Detection with Filtering Rules," in 2019 2nd China Symposium on Cognitive Computing and Hybrid Intelligence (CCHI), 2019: IEEE, pp. 285-291.
[31]M. A. Raza, H. Bint-e-Naeem, A. Yasin, and M. H. Yousaf, "Birdview retina-net: Small-scale object detector for unmanned aerial vehicles," in 2021 16th international conference on emerging technologies (ICET), 2021: IEEE, pp. 1-6.
[32]P. Ma, Y. Bai, J. Zhu, C. Wang, and C. Peng, "DSOD: DSO in dynamic environments," IEEE Access, vol. 7, pp. 178300-178309, 2019.
[33]L. Du, R. Zhang, and X. Wang, "Overview of two-stage object detection algorithms," in Journal of Physics: Conference Series, 2020, vol. 1544, no. 1: IOP Publishing, p. 012033.
[34]K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask r-cnn," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961-2969.
[35]K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," IEEE transactions on pattern analysis and machine intelligence, vol. 37, no. 9, pp. 1904-1916, 2015.
[36]Y. Zhang and M. Chi, "Mask-R-FCN: A deep fusion network for semantic segmentation," IEEE Access, vol. 8, pp. 155753-155765, 2020.
[37]J. Li, X. Liang, S. Shen, T. Xu, J. Feng, and S. Yan, "Scale-aware fast R-CNN for pedestrian detection," IEEE transactions on Multimedia, vol. 20, no. 4, pp. 985-996, 2017.
[38]S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 6, pp. 1137-1149, 2016.
[39]P. Jiang, D. Ergu, F. Liu, Y. Cai, and B. Ma, "A Review of Yolo algorithm developments," Procedia computer science, vol. 199, pp. 1066-1073, 2022.
[40]J. Terven, D.-M. Córdova-Esparza, and J.-A. Romero-González, "A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas," Machine Learning and Knowledge Extraction, vol. 5, no. 4, pp. 1680-1716, 2023.
[41]M. Sohan, T. Sai Ram, R. Reddy, and C. Venkata, "A review on yolov8 and its advancements," in International Conference on Data Intelligence and Cognitive Informatics, 2024: Springer, pp. 529-545.
[42]R. H. Hasan, R. M. Hassoo, and I. S. Aboud, "Yolo Versions Architecture," 2023.
[43]S. A. Magalhães et al., "Evaluating the single-shot multibox detector and YOLO deep learning models for the detection of tomatoes in a greenhouse," Sensors, vol. 21, no. 10, p. 3569, 2021.
[44]S. Madakam, R. Ramaswamy, and S. Tripathi, "Internet of Things (IoT): A literature review," Journal of Computer and Communications, vol. 3, no. 5, pp. 164-173, 2015.
[45]M. M. Ahemd, M. A. Shah, and A. Wahid, "IoT security: A layered approach for attacks & defenses," in 2017 international conference on Communication Technologies (ComTech), 2017: IEEE, pp. 104-110.
[46] 呂紹妤, "The Sharp Decline in the Agricultural Population: Rural Taiwan's Unresolved Labor Shortage," 2021. From
https://ms-harvest.com/post20210912/
[47] Zoumana Keita, "An Introduction to Convolutional Neural Networks (CNNs)," 2023. From
https://www.datacamp.com/tutorial/introduction-to-convolutional-neural-networks-cnns#rdl
[48] Yeh James, "[Data Analysis & Machine Learning] Lecture 5.1: An Introduction to Convolutional Neural Networks," 2017. From
https://medium.com/jameslearningnote/%E8%B3%87%E6%96%99%E5%88%86%E6%9E%90-%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-%E7%AC%AC5-1%E8%AC%9B-%E5%8D%B7%E7%A9%8D%E7%A5%9E%E7%B6%93%E7%B6%B2%E7%B5%A1%E4%BB%8B%E7%B4%B9-convolutional-neural-network-4f8249d65d4f
[49] Allen Tzeng, "Convolutional Neural Networks (CNN)," 2019. From
https://hackmd.io/@allen108108/rkn-oVGA4
[50] sureZ-ok, "Convolution, Fully Connected, Pooling, and Softmax Layers in CNNs," 2023. From
https://www.rvmcu.com/column-topic-id-1309.html
[51] Chris Huang, "Object Detection #1 - Basic Concepts," 2023. From
https://hackmd.io/@chrish0729/H1bzDiWCn
[52] Tommy Huang, "Deep Learning: What Are One-Stage and Two-Stage Object Detection," 2018. From
https://chih-sheng-huang821.medium.com/%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92-%E4%BB%80%E9%BA%BC%E6%98%AFone-stage-%E4%BB%80%E9%BA%BC%E6%98%AFtwo-stage-%E7%89%A9%E4%BB%B6%E5%81%B5%E6%B8%AC-fc3ce505390f
[53] Zoumana Keita, "YOLO Object Detection Explained," 2022. From
https://www.datacamp.com/blog/yolo-object-detection-explained
[54] Ivan, "[Object Detection] S4: An Introduction to YOLO v1," 2019. From
https://ivan-eng-murmur.medium.com/object-detection-s4-yolo-v1%E7%B0%A1%E4%BB%8B-f3b1c7c91ed
[55] Eric Chou, "An Introduction to YOLOv8 and Hands-On Training of Custom Models," 2023. From
https://medium.com/@EricChou711/yolov8-%E4%BB%8B%E7%B4%B9%E5%92%8C%E6%89%8B%E6%8A%8A%E6%89%8B%E8%A8%93%E7%B7%B4%E8%87%AA%E8%A8%82%E7%BE%A9%E6%A8%A1%E5%9E%8B-752d8d32cb73
[56] Claire Chang, "Yolov8 - Object Detection Model," 2023. From
https://claire-chang.com/2023/08/16/yolov8-%E7%89%A9%E4%BB%B6%E5%81%B5%E6%B8%AC%E6%A8%A1%E5%9E%8B/
[57] AWS, "What Is IoT (the Internet of Things)?" From
https://aws.amazon.com/tw/what-is/iot/
[58] InterviewBit, "IoT Architecture – Detailed Explanation," 2022. From
https://www.interviewbit.com/blog/iot-architecture/

Electronic Full Text (publicly available online from 2026-08-01)