Author: 程子寬 (Zi-Kuan Cheng)
Title: 基於遷移學習智慧工廠生產品質預測 (Manufacturing Quality Prediction with Transfer Learning for Smart Factories)
Advisor: 江振瑞 (Jehn-Ruey Jiang)
Degree: Master's
Institution: National Central University (國立中央大學)
Department: Computer Science and Information Engineering (資訊工程學系)
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Year of Publication: 2020
Graduation Academic Year: 108 (2019-2020)
Language: Chinese
Pages: 77
Keywords (Chinese): 工業4.0; 智慧製造; 遷移學習; 線切割放電加工; 品質預測; 表面粗糙度
Keywords (English): Industry 4.0; Smart Manufacturing; Transfer Learning; WEDM; Quality Prediction; Surface Roughness
In recent years, the rapid development of information technology has driven many industries worldwide toward digital transformation, and many traditional manufacturers are gradually building Industry 4.0 smart factories. These factories aim to apply advanced technologies such as the Internet of Things (IoT), big data, machine learning, and cloud computing to achieve the goal of smart manufacturing. In this way, manufacturers gain better control over the quality of products on the production line and achieve higher yield rates. This thesis focuses on predicting the surface roughness of wire electrical discharge machining (WEDM) workpieces, applying transfer learning across two different workpiece materials: material A, with abundant data, is treated as the source domain, while material B, with scarce data, is treated as the target domain.
Specifically, this thesis first uses the abundant material-A workpiece data to train neural network models that predict workpiece surface roughness: before machining, from static manufacturing parameters; after machining, from static manufacturing parameters together with dynamic machine-state data. The models include a deep neural network (DNN) and a predictor that combines a gated recurrent unit (GRU) network with a DNN. Two transfer learning approaches are then applied. The first is weight freezing: using the scarce material-B data, the model originally trained to predict material-A surface roughness is adapted into one that predicts material-B surface roughness. The second is multi-task learning: using the abundant material-A data and the scarce material-B data together, separate neural network models that predict material-A and material-B surface roughness are trained simultaneously. Experimental results show that both transfer learning methods make full use of the scarce target-domain data and effectively improve the prediction accuracy of the source-domain and target-domain neural network models.
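The weight-freezing approach described above can be illustrated with a minimal NumPy sketch: a small network is pretrained on abundant source-domain (material-A) data, its hidden layer is then frozen, and only the output layer is fine-tuned on scarce target-domain (material-B) data. The synthetic data, network size, and training loop here are illustrative assumptions, not the thesis's actual DNN/GRU models or WEDM dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)              # hidden features
    return h, h @ W2 + b2                 # predicted surface roughness

def train(X, y, params, lr=0.1, epochs=2000, freeze_hidden=False):
    """Full-batch gradient descent on MSE; optionally freeze the hidden layer."""
    W1, b1, W2, b2 = params
    for _ in range(epochs):
        h, pred = forward(X, W1, b1, W2, b2)
        err = 2 * (pred - y) / len(X)     # dMSE/dpred
        dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
        W2 -= lr * h.T @ err
        b2 -= lr * err.sum(axis=0)
        if not freeze_hidden:             # weight freezing: skip these updates
            W1 -= lr * X.T @ dh
            b1 -= lr * dh.sum(axis=0)
    return [W1, b1, W2, b2]

def mse(X, y, params):
    return float(np.mean((forward(X, *params)[1] - y) ** 2))

w = np.array([[0.8], [-0.5], [0.3]])
# Hypothetical source domain (material A): many samples.
Xa = rng.uniform(-1, 1, (500, 3))
ya = Xa @ w + 0.1 * np.sin(3 * Xa[:, :1])
# Hypothetical target domain (material B): few samples, shifted response.
Xb = rng.uniform(-1, 1, (20, 3))
yb = Xb @ w + 0.1 * np.sin(3 * Xb[:, :1]) + 0.4

params = [rng.normal(0, 0.5, (3, 8)), np.zeros(8),
          rng.normal(0, 0.5, (8, 1)), np.zeros(1)]
params = train(Xa, ya, params)            # pretrain on abundant source data
before = mse(Xb, yb, params)
params = train(Xb, yb, params, epochs=500, freeze_hidden=True)
after = mse(Xb, yb, params)
print(f"target-domain MSE before fine-tuning: {before:.4f}, after: {after:.4f}")
```

Because the frozen hidden layer already encodes features learned from the source domain, the few target-domain samples only need to adapt the output layer, which mirrors the data-efficiency argument of the abstract.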
With the rapid development of information technology in recent years, many industries around the world are moving toward digital transformation, and many traditional manufacturers are gradually developing Industry 4.0 smart factories. These factories intend to utilize advanced techniques such as the Internet of Things (IoT), big data, machine learning, and cloud computing to reach the goal of smart manufacturing. In this manner, product quality is better controlled and the yield rate is improved. This thesis focuses on the prediction of the surface roughness (SR) of wire electrical discharge machining (WEDM) workpieces with transfer learning techniques for two different materials, denoted as material A and material B. The material-A workpiece data, whose amount is larger, are regarded as source domain data, whereas the material-B workpiece data, whose amount is smaller, are regarded as target domain data.
First, material-A workpiece data are applied to train neural network models for SR prediction. Specifically, static manufacturing parameters are used to train deep neural networks (DNNs) for SR prediction before manufacturing, whereas static manufacturing parameters along with dynamic manufacturing conditions are used to train gated recurrent unit (GRU) networks for SR prediction after manufacturing. Afterwards, two transfer learning methods are utilized. The first method is weight freezing: a small amount of material-B workpiece data is used to adapt the well-trained models that predict the surface roughness of material-A workpieces into models that predict the surface roughness of material-B workpieces. The second method is multi-task learning: both the larger material-A dataset and the smaller material-B dataset are used to jointly train separate neural network models for SR prediction for material-A and material-B workpieces, respectively. The experimental results show that both transfer learning methods can effectively improve the prediction accuracy of the source domain and target domain neural network models using only a small amount of target domain data.
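The multi-task learning setup can likewise be sketched as a hidden layer shared by both materials with one task-specific output head per material, trained jointly on both datasets so that the scarce material-B data benefits from representations shaped by the abundant material-A data. The data, network size, and alternating training loop below are hypothetical simplifications, not the thesis's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic joint training set: many material-A samples, few material-B samples.
w = np.array([[0.7], [-0.4], [0.2]])
Xa = rng.uniform(-1, 1, (400, 3))
ya = Xa @ w                         # material-A roughness (toy stand-in)
Xb = rng.uniform(-1, 1, (25, 3))
yb = Xb @ w + 0.3                   # material B differs by an offset

W1 = rng.normal(0, 0.5, (3, 8))     # hidden layer shared by both tasks
b1 = np.zeros(8)
heads = {"A": [rng.normal(0, 0.5, (8, 1)), np.zeros(1)],  # per-material heads
         "B": [rng.normal(0, 0.5, (8, 1)), np.zeros(1)]}

lr = 0.05
for _ in range(3000):               # alternate full-batch updates per task
    for X, y, key in ((Xa, ya, "A"), (Xb, yb, "B")):
        W2, b2 = heads[key]
        h = np.tanh(X @ W1 + b1)
        err = 2 * (h @ W2 + b2 - y) / len(X)   # dMSE/dpred
        dh = (err @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ err                   # task-specific head update
        b2 -= lr * err.sum(axis=0)
        W1 -= lr * X.T @ dh                    # shared layer learns from both
        b1 -= lr * dh.sum(axis=0)

def mse(X, y, key):
    W2, b2 = heads[key]
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

print(f"MSE material A: {mse(Xa, ya, 'A'):.4f}, material B: {mse(Xb, yb, 'B'):.4f}")
```

The design choice matching the abstract is that the shared layer is updated by gradients from both tasks while each head is updated only by its own task, yielding separate predictors for materials A and B that share learned features.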
Chinese Abstract ... I
Abstract ... II
Acknowledgements ... III
Table of Contents ... IV
List of Figures ... VI
List of Tables ... VIII
1. Introduction ... 1
1.1. Research Background and Motivation ... 1
1.2. Research Objectives and Contributions ... 2
1.3. Related Work ... 2
1.4. Thesis Organization ... 3
2. Background ... 4
2.1. Transfer Learning ... 4
2.2. Wire Electrical Discharge Machining ... 13
2.2.1. Introduction to Electrical Discharge Machining ... 13
2.2.2. Introduction to Wire Electrical Discharge Machining ... 14
2.3. Markov Chains and State Transition Probability Matrices ... 15
2.4. Gated Recurrent Units ... 18
3. Problem Definition and Approach ... 23
3.1. Problem Definition ... 23
3.2. Surface Roughness ... 25
3.3. Data Preprocessing ... 28
3.3.1. Obtaining Usable Time-Series Data ... 28
3.3.2. Building Markov Chain State-Transition Features ... 28
3.3.3. Exception Handling for Scarce Time-Series Data ... 31
3.4. Model Architecture and Parameter Settings ... 33
3.4.1. Model Architecture ... 33
3.4.2. Model Parameter Settings ... 34
3.5. Transfer Learning Approaches ... 37
3.5.1. Weight-Freezing Transfer Learning ... 37
3.5.2. Multi-Task Learning ... 39
4. Experiments and Evaluation ... 42
4.1. Experimental Environment ... 42
4.2. Experimental Results and Evaluation ... 43
4.2.1. Weight-Freezing Training ... 44
4.2.1.1. Pre-Machining Prediction Models ... 44
4.2.1.2. Post-Machining Prediction Models ... 47
4.2.1.3. Model Execution Time ... 52
4.2.2. Multi-Task Learning Training ... 53
4.2.2.1. Pre-Machining Prediction Models ... 54
4.2.2.2. Post-Machining Prediction Models ... 56
5. Conclusion and Future Work ... 59
References ... 60
[1] C. L. Fan, and J. R. Jiang, "Surface Roughness Prediction Based on Markov Chain and Deep Neural Network for Wire Electrical Discharge Machining," in Proc. of the 2019 IEEE Eurasia Conference on IOT, Communication and Engineering, Oct. 2019.
[2] A. Mandal, and A. R. Dixit, "State of Art in Wire Electrical Discharge Machining Process and Performance," International Journal of Machining and Machinability of Materials, Vol. 16, No. 1, pp. 1-21, Jan. 2014.
[3] U. Esme, A. Sagbas, and F. Kahraman, "Prediction of Surface Roughness in Wire Electrical Discharge Machining Using Design of Experiments and Neural Networks," Iranian Journal of Science & Technology Transaction B: Engineering, Vol. 33, No. 3, pp. 231-240, June 2009.
[4] J. Kumar, "Prediction of Surface Roughness in Wire Electric Discharge Machining (WEDM) Process based on Response Surface Methodology," International Journal of Engineering and Technology, Vol. 2, No. 4, pp. 708-712, Jan. 2012.
[5] G. E. P. Box, and D. W. Behnken, "Some New Three Level Designs for the Study of Quantitative Variables," Technometrics, Vol. 2, No. 4, pp. 455-475, Nov. 1960.
[6] L. Y. Pratt, "Discriminability-Based Transfer between Neural Networks," in Proc. of the 5th International Conference on Neural Information Processing Systems, pp. 204-211, Nov. 1992.
[7] S. J. Pan, and Q. Yang, "A Survey on Transfer Learning," IEEE Transactions on Knowledge and Data Engineering, Vol. 22, No. 10, pp. 1345-1359, Oct. 2010.
[8] Y. Ganin, and V. Lempitsky, "Unsupervised Domain Adaptation by Backpropagation," in Proc. of the 32nd International Conference on International Conference on Machine Learning, Vol. 37, pp. 1180-1189, July 2015.
[9] MNIST-M dataset for Keras, https://github.com/VanushVaswani/keras_mnistm/releases, accessed in June 2020.
[10] A. S. Qureshi, A. Khan, A. Zameer, and A. Usman, "Wind Power Prediction using Deep Neural Network based Meta Regression and Transfer Learning," Applied Soft Computing, Vol. 58, pp. 742-755, May 2017.
[11] A. S. Qureshi, and A. Khan, "Adaptive Transfer Learning in Deep Neural Networks: Wind Power Prediction using Knowledge Transfer from Region to Region and Between Different Task Domains," Computational Intelligence, Oct. 2018.
[12] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, "Domain Adaptation via Transfer Component Analysis," IEEE Transactions on Neural Networks, Vol. 22, No. 2, pp. 199-210, Feb. 2011.
[13] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How Transferable Are Features in Deep Neural Networks?," in Proc. of the 27th International Conference on Neural Information Processing Systems, Vol. 2, pp. 3320-3328, Dec. 2014.
[14] R. Tong, L. Wang, and B. Ma, "Transfer Learning for Children's Speech Recognition", in Proc. of 2017 International Conference on Asian Language Processing, Dec. 2017.
[15] Z. Y. He, H. D. Shao, X. Y. Zhang, J. S. Cheng, and Y. Yang, "Improved Deep Transfer Auto-Encoder for Fault Diagnosis of Gearbox Under Variable Working Conditions with Small Training Samples," IEEE Access, Vol. 7, Aug. 2019.
[16] Y. Xu, J. Du, L. R. Dai, and C. H. Lee, "Cross-language Transfer Learning for Deep Neural Network Based Speech Enhancement," in Proc. of the 9th International Symposium on Chinese Spoken Language Processing, Oct. 2014.
[17] C. T. Lin, Y. R. Wang, S. H. Chen, and Y. F. Liao, "A Preliminary Study on Cross-Language Knowledge Transfer for Low-Resource Taiwanese Mandarin ASR," in Proc. of 2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques, Oct. 2016.
[18] D. X. Dong, H. Wu, W. He, D. H. Yu, and H. F. Wang, "Multi-Task Learning for Multiple Language Translation," in Proc. of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Vol. 1, pp. 1723-1732, July 2015.
[19] J. T. Huang, J. Y. Li, D. Yu, L. Deng, and Y. F. Gong, "Cross-Language Knowledge Transfer using Multilingual Deep Neural Network with Shared Hidden Layers," in Proc. of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013.
[20] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, "Domain-Adversarial Training of Neural Networks," Journal of Machine Learning Research 2016, Vol. 17, pp. 1-35, May 2016.
[21] H. Daumé III, "Frustratingly Easy Domain Adaptation," in Proc. of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 256-263, June 2007.
[22] S. H. Chen, C. W. Lu, M. C. Lu, and C. P. Wang, "Parameter Design and Planning by WEDM," in 2013 Conference on Green Technology Engineering and Application, May 2013.
[23] R. Y. Xiao, and Z. Y. Xie, "放電加工原理和應用-線切割放電加工," https://wenku.baidu.com/view/7bf4db436bd97f192379e953.html?re=view, accessed in June 2020.
[24] H. Ozkan, F. Ozkan, and S. S. Kozat, "Online Anomaly Detection Under Markov Statistics with Controllable Type-I Error," IEEE Transactions on Signal Processing, Vol. 64, pp. 1435-1445, Mar. 2016.
[25] K. Cho, B. V. Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation," in Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing, Sep. 2014.
[26] Understanding GRU Networks, https://towardsdatascience.com/understanding-gru-networks-2ef37df6c9be, accessed in June 2020.
[27] SURFCOM TOUCH - Intuitively Operated Surface Texture Measuring Instruments, https://www.msiviking.com/documents/ZEISS/form-and-surface/Zeiss_Surfcom-Touch.pdf, accessed in June 2020.
[28] E. P. DeGarmo, J. T. Black, and R. A. Kohser, Materials and Processes in Manufacturing, 9th Edition, Wiley, ISBN 0-471-65653-4, Dec. 2003.
[29] L. J. Chen, "表面粗糙度及其量測," http://140.112.14.7/~measlab/course/101%E4%B8%8A/%E7%B2%BE%E5%AF%86%E9%87%8F%E6%B8%AC/%E8%A1%A8%E9%9D%A2%E7%B2%97%E5%BA%A6%E9%87%8F%E6%B8%AC%E5%8E%9F%E7%90%86%E8%88%87%E6%8A%80%E8%A1%93%20(NTU%202012).pdf, accessed in June 2020.
[30] 表面粗糙度的參數 - 最大高度, https://www.keyence.com.tw/ss/products/microscope/roughness/line/tab01_b.jsp, accessed in June 2020.
[31] 表面粗度(Surface Roughness), http://dragon.ccut.edu.tw/~mejwc1/p-mea/content/ch_18.pdf, accessed in June 2020.
[32] 表面粗糙度的參數 - 算術平均高度, https://www.keyence.com.tw/ss/products/microscope/roughness/line/parameters.jsp, accessed in June 2020.
[33] D. Zang, J. H. Liu, and H. Z. Wang, "Markov Chain-Based Feature Extraction for Anomaly Detection in Time Series and Its Industrial Application," in Proc. of 2018 Chinese Control And Decision Conference, pp. 1059-1063, June 2018.
[34] G. Klambauer, T. Unterthiner, A. Mayer, and S. Hochreiter, "Self-Normalizing Neural Networks," in Proc. of the 30th International Conference on Neural Information Processing Systems, pp. 972-981, Sep. 2017.
[35] D. P. Kingma, and J. Ba, "Adam: A Method for Stochastic Optimization," in Proc. of the 3rd International Conference on Learning Representations, Dec. 2014.
Electronic full text (Internet release date: 2023-08-01)