臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 塔韋叻
Author (English): SUPACHAI TAWEELERD
Thesis Title (Chinese): 應用視覺系統的深度學習檢測生產中鑄造件:以潛水泵葉輪圖像為範例
Thesis Title (English): Vision System Based on Deep Learning for Product Inspection in Casting Manufacturing: Submersible Pump Impeller Images
Advisor: 張仲卿
Advisor (English): Chong-Ching Chang
Committee Members: 張仲卿、陳鵬仁、蘇科翰
Committee Members (English): Chong-Ching Chang, Peng-Ren Chen, Ke-Han Su
Oral Defense Date: 2022-01-21
Degree: Master's
Institution: 國立臺南大學 (National University of Tainan)
Department: 機電系統工程研究所碩士班
Discipline: Engineering
Academic Field: Mechanical Engineering
Thesis Type: Academic thesis
Publication Year: 2022
Graduation Academic Year: 110
Language: English
Pages: 97
Keywords (Chinese): 產品檢測、視覺系統、深度學習、鑄造生產、潛水泵
Keywords (English): Product inspection, Vision systems, Deep learning, Casting manufacturing, Submersible pump impeller
Usage statistics:
  • Cited: 0
  • Views: 103
  • Rating:
  • Downloads: 23
  • Bookmarked: 0
Product inspection is one of the most important processes in manufacturing, as it is the last step before products are delivered to customers. In practice, inspection falls into two main categories: inspection by the human eye and inspection by vision systems. The human eye is an extraordinarily complex organ: it connects directly to the brain through the sensory nervous system and lets us see and perceive visual information from the surrounding environment. Humans can assign an object to a category after learning from a single example and can recognize an object's shape within seconds. However, human perception still has weaknesses and limitations. The eyes are physically constrained in size and number, which limits how much information they can take in, and they operate only within a limited range of frequencies and wavelengths, so they fail to perceive high-speed motion and non-visible light. Moreover, they are sensitive to glare and reflections, which can cause a loss of focus. Applying vision systems to the inspection process is one way to address these problems: such systems are faster, do not tire during long working hours, and have what it takes to surpass the human eye in accurate and reliable product inspection. Nevertheless, they still have weaknesses of their own, such as sensitivity to lighting, much like the human eye, and constraints on setup conditions.
To overcome these problems, this research presents a novel approach that uses a vision system based on deep learning with convolutional neural networks (CNNs) for product inspection. The development in this research focuses on the software side. The test-case product is a submersible pump impeller, a product obtained from casting manufacturing. An optimized deep learning model is proposed, specified by its number of convolutional layers, pooling layers, and fully connected layers, together with other parameters (e.g., batch size, filter size). The procedure for preparing the image data is also described.
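The record does not reproduce the model's exact layer counts or hyperparameters. As a rough illustration only, a minimal Keras sketch of a VGG-style CNN for binary (ok/defective) classification of impeller images, together with the kind of image preparation the abstract alludes to, might look as follows; the input size, layer widths, augmentation settings, and the data/casting_images directory layout are assumptions, not values taken from the thesis.

```python
# Minimal sketch (assumed hyperparameters): a small VGG-style CNN for
# binary ok/defective classification of impeller images, built with Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 32         # assumed batch size

# Image preparation: rescale pixel values and apply simple augmentation
# (rotation and flips); hold out part of the data for validation.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    horizontal_flip=True,
    vertical_flip=True,
    validation_split=0.2,
)
train_data = datagen.flow_from_directory(
    "data/casting_images",   # hypothetical folder with one subfolder per class
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode="binary",
    subset="training",
)
val_data = datagen.flow_from_directory(
    "data/casting_images",
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode="binary",
    subset="validation",
)

# VGG-style stack: blocks of 3x3 convolutions followed by max pooling,
# then fully connected layers and a sigmoid output for the two classes.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(*IMG_SIZE, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_data, validation_data=val_data, epochs=10)
```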
The proposed deep learning model achieved a best accuracy of 100% on the top-view images of the submersible pump impeller dataset while requiring comparatively little computational power and training time. Moreover, it takes only 57 milliseconds to classify one image. The proposed software framework can support product inspection, defect detection, and quality control in industrial applications.
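As a companion sketch under the same assumptions, the metrics the thesis reports (accuracy, confusion matrix, precision, recall, F1 score) and a rough per-image inference time could be computed along these lines with scikit-learn; `model` and `val_data` are the hypothetical objects from the previous sketch, and the measured latency is only comparable in spirit to the 57 ms figure, since it depends on hardware.

```python
# Sketch of the evaluation step: confusion matrix, precision/recall/F1,
# and a rough per-image inference time for the model sketched above.
import time
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score, f1_score)

# Collect predictions and ground-truth labels from the validation generator.
y_true, y_pred = [], []
for _ in range(len(val_data)):
    images, labels = next(val_data)
    probs = model.predict(images, verbose=0)
    y_true.extend(labels.astype(int))
    y_pred.extend((probs.ravel() > 0.5).astype(int))

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Confusion:\n", confusion_matrix(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))

# Rough single-image latency on the current hardware.
sample = next(val_data)[0][:1]
start = time.perf_counter()
model.predict(sample, verbose=0)
print("Per-image time: %.1f ms" % ((time.perf_counter() - start) * 1000))
```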
摘要 (Chinese Abstract) i
ABSTRACT iii
ACKNOWLEDGMENTS v
ORIGINAL PAPER vi
TABLE OF CONTENTS vii
LIST OF TABLES ix
LIST OF FIGURES x
NOMENCLATURES xii
CHAPTER 1 INTRODUCTION 1
1.1 Literature review 1
1.2 Motivations 3
1.3 Purposes 4
1.4 Thesis outline 4
CHAPTER 2 METHODOLOGIES 5
2.1 Related theories 6
2.1.1 Convolutional neural networks (CNNs) 6
2.1.2 Activation function 10
2.2 Image dataset 12
2.3 Image preprocessing 12
2.3.1 Data augmentation 13
2.4 Base model selection 14
2.4.1 VGGNet Architecture 14
2.5 Model design 15
2.6 Model training and validation 16
2.7 Optimization 17
2.8 Details of final selected model 18
CHAPTER 3 EQUIPMENT 33
3.1 Hardware 33
3.1.1 Computer notebook 33
3.2 Software 33
3.2.1 Python 33
3.2.2 TensorFlow 34
3.2.3 Keras 34
3.2.4 Visual Studio Code 34
CHAPTER 4 RESULTS AND DISCUSSIONS 36
4.1 Performance metrics 36
4.1.1 Accuracy 36
4.1.2 Confusion matrix 37
4.1.3 Precision 38
4.1.4 Recall 38
4.1.5 F1 score 39
4.2 Comparison 39
4.2.1 Comparison of the models in this research 39
4.2.2 Comparison of the proposed model to related research 41
4.3 The proposed model’s results in classification 41
CHAPTER 5 CONCLUSIONS 78
CHAPTER 6 FUTURE WORKS 79
REFERENCES 80

