National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Researcher: 陳柏穎
Researcher (English): Chen, Po-Yin
Thesis Title: 利用機器學習進行薄膜式日間輻射冷卻材料之優化設計
Thesis Title (English): Using Machine Learning for Optimal Design of Thin Film Based Daytime Radiative Cooling Materials
Advisor: 羅友杰
Advisor (English): Lo, Yu-Chieh
Committee Members: 陳學禮、萬德輝、陳南佑、楊安正
Committee Members (English): Chen, Hsuen-Li; Wan, Dehui; Chen, Nan-Yow; Yang, An-Cheng
Oral Defense Date: 2020-07-14
Degree: Master's
Institution: National Chiao Tung University
Department: Department of Materials Science and Engineering
Discipline: Engineering
Field of Study: Materials Engineering
Thesis Type: Academic thesis
Year of Publication: 2020
Graduation Academic Year: 108 (2019–2020)
Language: Chinese
Number of Pages: 67
Keywords (Chinese): 機器學習、人工神經網路、自動編碼器、日間輻射冷卻
Keywords (English): Machine Learning, Artificial Neural Network, Autoencoder, Daytime Radiative Cooling
Usage statistics:
  • Cited by: 0
  • Views: 104
  • Rating:
  • Downloads: 5
  • Saved to personal bibliography lists: 0
In recent years, with the rise of artificial intelligence, the combination of data science and domain expertise has begun to flourish in the materials field. Through big-data analysis, materials scientists can more readily identify suitable answers and apply them in experiments. Material properties are highly diverse; this study focuses on the optical properties of materials and applies machine learning to them.

In electronic devices, the temperature of the core processor and of the device as a whole strongly affects performance; excessive temperature can reduce efficiency or even cause thermal shutdown. When operating solar-powered electronic devices, it is therefore important to handle the temperature rise caused by sunlight effectively. The optical material properties that govern this temperature include the refractive index (n), the extinction coefficient (k), and the film thickness (d). Using an optical thin-film model, we compute the reflectance, transmittance, and absorbance spectra of the film from these parameters, and then apply atmospheric radiative cooling theory to simulate the film's equilibrium temperature and cooling power. To generate data in large quantities, the simulation was implemented in the MATLAB programming language, and the resulting data set was used for machine learning.

The machine learning method used in this study is the artificial neural network, in the form of an autoencoder. Material features, namely the refractive index, extinction coefficient, and film thickness, are fed into the autoencoder; the encoder (the first half of the network) compresses these features into two parameters, the equilibrium temperature and the cooling power, while the decoder (the second half) decompresses these two parameters back into the original n and k spectra and the film thickness. Once training is complete, we can supply a desired equilibrium temperature and cooling power to the decoder and let the neural network predict an ideal spectrum, which experimentalists can use as a reference, thereby optimizing material spectra through machine learning.
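The radiative cooling step described above amounts to the energy balance P_cool(T) = P_rad(T) - P_atm(T_amb) - P_sun - P_cond+conv, with the equilibrium temperature defined by P_cool(T_eq) = 0. Below is a minimal Python/NumPy sketch of that balance, not the thesis's MATLAB code; the spectral inputs (film emissivity, atmospheric emissivity, absorbed solar spectrum) and the convection coefficient h_cc are placeholders that would come from the thin-film model and standard data sets.

```python
import numpy as np

# Minimal sketch of the daytime radiative cooling energy balance
# (cf. Raman et al., Nature 2014). The thesis performs this step in MATLAB;
# the arrays passed in here (eps_film, eps_atm, alpha_solar, I_solar) are
# placeholders supplied by the thin-film model and standard spectra.

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)


def planck_radiance(lam, T):
    """Blackbody spectral radiance I_BB(lambda, T), W * m^-2 * sr^-1 * m^-1."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))


def net_cooling_power(T, lam, eps_film, eps_atm, alpha_solar, I_solar,
                      T_amb=300.0, h_cc=6.0):
    """Net cooling power per unit area (W/m^2) at film temperature T.

    P_cool(T) = P_rad(T) - P_atm(T_amb) - P_sun - P_cond+conv
    Angular integrals are collapsed to a factor of pi (hemispherical,
    angle-independent emissivity) to keep the sketch short.
    """
    p_rad = np.pi * np.trapz(eps_film * planck_radiance(lam, T), lam)
    p_atm = np.pi * np.trapz(eps_film * eps_atm * planck_radiance(lam, T_amb), lam)
    p_sun = np.trapz(alpha_solar * I_solar, lam)  # absorbed solar power
    p_cc = h_cc * (T_amb - T)                     # conduction + convection gain
    return p_rad - p_atm - p_sun - p_cc


def equilibrium_temperature(lam, eps_film, eps_atm, alpha_solar, I_solar,
                            T_amb=300.0, h_cc=6.0):
    """Bisection for T_eq where the net cooling power crosses zero."""
    lo, hi = T_amb - 80.0, T_amb + 80.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_cooling_power(mid, lam, eps_film, eps_atm, alpha_solar,
                             I_solar, T_amb, h_cc) > 0.0:
            hi = mid  # film is still shedding net heat, so T_eq lies below mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The cooling power quoted for a design is commonly this balance evaluated at the ambient temperature, while the equilibrium temperature is the root found above.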
Recently, with the rise of artificial intelligence, the combination of data science and domain knowledge has also developed rapidly in materials science. Through the analysis of big data, materials scientists can find suitable answers and apply them in experiments. The properties of materials are highly diverse; in this research, we focus on the optical properties of materials and apply machine learning to them.

The temperature of the central processing unit (CPU) in electronic devices can significantly affect device performance. If the temperature is too high, it may reduce efficiency or even cause the device to shut down. Therefore, when using solar-powered devices, effectively dealing with the temperature rise caused by sunlight is an issue that cannot be ignored. The optical properties of a material that affect its temperature include the refractive index (n), the extinction coefficient (k), and the thin-film thickness (d). We calculate the reflectance, transmittance, and absorbance spectra with an optical thin-film model using these parameters, and then use daytime radiative cooling theory to obtain the equilibrium temperature and cooling power of the thin film from its absorbance spectrum. To generate a large amount of data easily, we implement the calculation in the MATLAB programming language and use the resulting data set for machine learning.

The machine learning method used in this study is the artificial neural network, structured as an autoencoder. Material characteristics such as the refractive index (n), extinction coefficient (k), and film thickness (d) are fed into the autoencoder. The encoder, the first half of the autoencoder, is trained to compress these features into two parameters, the equilibrium temperature and the cooling power; the decoder then decompresses the equilibrium temperature and cooling power to recover the original n and k spectra and the thin-film thickness. After training is completed, we can input a desired equilibrium temperature and cooling power into the decoder and use it to predict ideal spectra, which experimentalists can take as a reference, thereby optimizing material spectra through machine learning.
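The inverse-design idea in the abstract, an autoencoder whose two-unit bottleneck is tied to the physical pair (equilibrium temperature, cooling power), could look roughly like the sketch below. This is illustrative only: the abstract does not name a framework, so the Keras API, layer widths, spectral sampling, and loss setup here are assumptions; the actual architecture and training settings are described in Sections 4.3.1 and 4.3.2 of the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Illustrative sketch (framework and layer sizes are assumptions, not the
# thesis's actual model): an autoencoder whose 2-unit bottleneck is supervised
# to match (equilibrium temperature, cooling power), so that the trained
# decoder can map a desired (T_eq, P_cool) pair back to candidate n and k
# spectra and a film thickness d.

N_WAVELENGTHS = 301                # assumed spectral sampling
INPUT_DIM = 2 * N_WAVELENGTHS + 1  # n spectrum + k spectrum + thickness d

encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(INPUT_DIM,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(2),               # bottleneck: (T_eq, P_cool)
], name="encoder")

decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(INPUT_DIM),       # reconstructed n, k spectra and d
], name="decoder")

inputs = tf.keras.Input(shape=(INPUT_DIM,))
latent = encoder(inputs)
reconstruction = decoder(latent)
autoencoder = Model(inputs, [latent, reconstruction])

# Two MSE terms: the latent code is pushed toward the simulated
# (T_eq, P_cool) labels, and the output is pushed back toward the input.
autoencoder.compile(optimizer="adam", loss=["mse", "mse"])

# X: simulated (n, k, d) feature vectors; Y: matching (T_eq, P_cool) labels
# from the optical/radiative simulation, both rescaled (e.g. min-max) during
# the data-preprocessing step.
# autoencoder.fit(X, [Y, X], epochs=200, batch_size=128, validation_split=0.1)

# Inverse design: feed a desired, rescaled (T_eq, P_cool) pair to the decoder
# alone and read off a candidate spectrum for the experimental side.
# candidate = decoder.predict(desired_targets)
```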
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iv
Table of Contents v
List of Tables vii
List of Figures viii
Chapter 1: Introduction 1
1.1 Preface 1
1.2 Research Objectives 1
1.3 Literature Review 2
1.3.1 Daytime Radiative Cooling Theory 2
1.3.2 Applications of Machine Learning in Materials Science 7
1.3.3 Applications of Autoencoders 10
Chapter 2: Optical Simulation and Calculation Methods 14
2.1 Optical Thin-Film Simulation 14
2.1.1 Reflection and Transmission at Interfaces 14
2.1.2 Characteristic Matrix Calculation 20
2.1.3 Effective Medium Approximation Theory 22
2.2 Atmospheric Radiative Equilibrium Theory 24
Chapter 3: Machine Learning Theory 27
3.1 Artificial Neural Networks 27
3.2 Activation Functions 29
3.2.1 Sigmoid 29
3.2.2 Hyperbolic Tangent (Tanh) 30
3.2.3 Rectified Linear Unit (ReLU) 30
3.3 Loss Functions 31
3.3.1 Mean Absolute Error (MAE) 32
3.3.2 Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) 32
3.4 Backpropagation 33
3.5 Optimizers 33
3.5.1 Gradient Descent 34
3.5.2 Momentum 35
3.5.3 AdaGrad 36
3.5.4 Adam 37
3.6 Batch Normalization 38
3.7 Overfitting & Dropout 40
Chapter 4: Model Setup, Results, and Discussion 43
4.1 Optical Model Setup 43
4.2 Data Preprocessing 44
4.3 Machine Learning Model Setup 47
4.3.1 Artificial Neural Network Architecture 48
4.3.2 Training Parameter Settings 50
4.4 Results and Discussion 51
4.4.1 Optical Simulation Results and Discussion 51
4.4.2 Machine Learning Results and Discussion 52
Chapter 5: Conclusions and Future Work 60
5.1 Conclusions 60
5.2 Future Work 61
Chapter 6: References 62
1. Li, T., et al., A radiative cooling structural material. Science, 2019. 364(6442): p. 760-763.
2. Raman, A.P., et al., Passive radiative cooling below ambient air temperature under direct sunlight. Nature, 2014. 515(7528): p. 540-544.
3. Shi, N.N., et al., Nanostructured fibers as a versatile photonic platform: radiative cooling and waveguiding through transverse Anderson localization. Light: Science & Applications, 2018. 7(1): p. 1-9.
4. Zhai, Y., et al., Scalable-manufactured randomized glass-polymer hybrid metamaterial for daytime radiative cooling. Science, 2017. 355(6329): p. 1062.
5. Zhou, L., et al., A polydimethylsiloxane-coated metal structure for all-day radiative cooling. Nature Sustainability, 2019. 2(8): p. 718-724.
6. Shi, N.N., et al., Keeping cool: Enhanced optical reflection and radiative heat dissipation in Saharan silver ants. Science, 2015. 349(6245): p. 298-301.
7. Liu, Y., et al., Materials discovery and design using machine learning. Journal of Materiomics, 2017. 3(3): p. 159-177.
8. Schmidt, J., et al., Recent advances and applications of machine learning in solid-state materials science. npj Computational Materials, 2019. 5(1): p. 1-36.
9. Stein, H.S., et al., Machine learning of optical properties of materials–predicting spectra from images and images from spectra. Chemical Science, 2019. 10(1): p. 47-55.
10. Pilania, G., et al., Accelerating materials property predictions using machine learning. Sci Rep, 2013. 3: p. 2810.
11. Bengio, Y., et al. Greedy layer-wise training of deep networks. in Advances in neural information processing systems. 2007.
12. Hinton, G.E. and R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks. Science, 2006. 313(5786): p. 504-507.
13. Chen, M., et al., Marginalized denoising autoencoders for domain adaptation. arXiv preprint arXiv:1206.4683, 2012.
14. Cho, K., Boltzmann machines and denoising autoencoders for image denoising. arXiv preprint arXiv:1301.3468, 2013.
15. Cho, K. Simple sparsification improves sparse denoising autoencoders in denoising highly corrupted images. in International Conference on Machine Learning. 2013.
16. Gondara, L. Medical image denoising using convolutional denoising autoencoders. in 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). 2016. IEEE.
17. Vincent, P., A connection between score matching and denoising autoencoders. Neural computation, 2011. 23(7): p. 1661-1674.
18. Vincent, P., et al. Extracting and composing robust features with denoising autoencoders. in Proceedings of the 25th international conference on Machine learning. 2008.
19. Vincent, P., et al., Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 2010. 11(12).
20. Xing, C., L. Ma, and X. Yang, Stacked denoise autoencoder based feature extraction and classification for hyperspectral images. Journal of Sensors, 2016. 2016.
21. Akcay, S., A. Atapour-Abarghouei, and T.P. Breckon. GANomaly: Semi-supervised Anomaly Detection via Adversarial Training. in Computer Vision – ACCV 2018. 2019. Cham: Springer International Publishing.
22. An, J. and S. Cho, Variational autoencoder based anomaly detection using reconstruction probability. Special Lecture on IE, 2015. 2(1): p. 1-18.
23. Ribeiro, M., A.E. Lazzaretti, and H.S. Lopes, A study of deep convolutional auto-encoders for anomaly detection in videos. Pattern Recognition Letters, 2018. 105: p. 13-22.
24. Sakurada, M. and T. Yairi, Anomaly Detection Using Autoencoders with Nonlinear Dimensionality Reduction, in Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis. 2014, Association for Computing Machinery: Gold Coast, Australia QLD, Australia. p. 4–11.
25. Zhou, C. and R.C. Paffenroth, Anomaly Detection with Robust Deep Autoencoders, in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017, Association for Computing Machinery: Halifax, NS, Canada. p. 665–674.
26. Lee, Y.-C., Y.-C. Tseng, and H.-L. Chen, Single type of nanocavity structure enhances light outcouplings from various two-dimensional materials by over 100-fold. ACS Photonics, 2017. 4(1): p. 93-105.
27. Wang, J., H. He, and D.V. Prokhorov, A folded neural network autoencoder for dimensionality reduction. Procedia Computer Science, 2012. 13: p. 120-127.
28. Wang, W., et al. Generalized autoencoder: A neural network framework for dimensionality reduction. in Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2014.
29. Wang, Y., H. Yao, and S. Zhao, Auto-encoder based dimensionality reduction. Neurocomputing, 2016. 184: p. 232-242.
30. Zabalza, J., et al., Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging. Neurocomputing, 2016. 185: p. 1-10.
31. Tsai, H.-K. and M.W. Miles, Optical interference display panel. 2007, Google Patents.
32. Zou, X., L.J. Wang, and L. Mandel, Induced coherence and indistinguishability in optical interference. Physical Review Letters, 1991. 67(3): p. 318.
33. Jackson, R. and V. Zamlynny, Optimization of electrochemical infrared reflection absorption spectroscopy using Fresnel equations. Electrochimica Acta, 2008. 53(23): p. 6768-6777.
34. Skaar, J., Fresnel equations and the refractive index of active media. Physical Review E, 2006. 73(2): p. 026605.
35. Kovalenko, S., Descartes-Snell law of refraction with absorption. Semiconductor Physics Quantum Electronics & Optoelectronics, 2001.
36. Walpita, L., Solutions for planar optical waveguide equations by selecting zero elements in a characteristic matrix. JOSA A, 1985. 2(4): p. 595-602.
37. Wöhler, H., et al., Characteristic matrix method for stratified anisotropic media: optical properties of special configurations. JOSA A, 1991. 8(3): p. 536-540.
38. Chýlek, P., et al., Scattering of electromagnetic waves by composite spherical particles: experiment and effective medium approximations. Applied Optics, 1988. 27(12): p. 2396-2404.
39. Stroud, D., The effective medium approximations: Some recent developments. Superlattices and Microstructures, 1998. 23(3-4): p. 567-573.
40. Macleod, H.A., Thin-film optical filters. 2017: CRC press.
41. Markel, V.A., Introduction to the Maxwell Garnett approximation: tutorial. JOSA A, 2016. 33(7): p. 1244-1256.
42. Niklasson, G.A., C.G. Granqvist, and O. Hunderi, Effective medium models for the optical properties of inhomogeneous materials. Applied Optics, 1981. 20(1): p. 26-30.
43. Bengio, Y., Practical Recommendations for Gradient-Based Training of Deep Architectures, in Neural Networks: Tricks of the Trade: Second Edition, G. Montavon, G.B. Orr, and K.-R. Müller, Editors. 2012, Springer Berlin Heidelberg: Berlin, Heidelberg. p. 437-478.
44. Curry, B. and D.E. Rumelhart, MSnet: A Neural Network which Classifies Mass Spectra. Tetrahedron Computer Methodology, 1990. 3(3): p. 213-237.
45. Folkes, S.R., O. Lahav, and S.J. Maddox, An artificial neural network approach to the classification of galaxy spectra. Monthly Notices of the Royal Astronomical Society, 1996. 283: p. 651.
46. Glorot, X., A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. in Proceedings of the fourteenth international conference on artificial intelligence and statistics. 2011.
47. Kamath, A., et al., Neural networks vs Gaussian process regression for representing potential energy surfaces: A comparative study of fit quality and vibrational spectrum accuracy. The Journal of Chemical Physics, 2018. 148(24): p. 241702.
48. Kobayashi, R., et al., Neural network potential for Al-Mg-Si alloys. Physical Review Materials, 2017. 1(5): p. 053604.
49. Lee, S.C. and S.W. Han, Neural-network-based models for generating artificial earthquakes and response spectra. Computers & Structures, 2002. 80(20-21): p. 1627-1638.
50. Park, W.B., et al., Classification of crystal structure using a convolutional neural network. IUCrJ, 2017. 4(4): p. 486-494.
51. Tanabe, K., T. Tamura, and H. Uesaka, Neural Network System for the Identification of Infrared Spectra. Applied Spectroscopy, 1992. 46(5): p. 807-810.
52. Hecht-Nielsen, R., Theory of the backpropagation neural network, in Neural networks for perception. 1992, Elsevier. p. 65-93.
53. Ito, Y., Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory. Neural Networks, 1991. 4(3): p. 385-394.
54. Yonaba, H., F. Anctil, and V. Fortin, Comparing sigmoid transfer functions for neural network multistep ahead streamflow forecasting. Journal of Hydrologic Engineering, 2010. 15(4): p. 275-283.
55. Attwell, D. and S.B. Laughlin, An Energy Budget for Signaling in the Grey Matter of the Brain. Journal of Cerebral Blood Flow & Metabolism, 2001. 21(10): p. 1133-1145.
56. Agarap, A.F., Deep learning using rectified linear units (relu). arXiv preprint arXiv:1803.08375, 2018.
57. Willmott, C.J. and K. Matsuura, Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Research, 2005. 30(1): p. 79-82.
58. Bottou, L., Large-scale machine learning with stochastic gradient descent, in Proceedings of COMPSTAT'2010. 2010, Springer. p. 177-186.
59. Ruder, S., An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
60. Wilson, A.C., B. Recht, and M.I. Jordan, A lyapunov analysis of momentum methods in optimization. arXiv preprint arXiv:1611.02635, 2016.
61. Mukkamala, M.C. and M. Hein, Variants of rmsprop and adagrad with logarithmic regret bounds. arXiv preprint arXiv:1706.05507, 2017.
62. Kingma, D.P. and J. Ba, Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
63. Ioffe, S. and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
64. Srivastava, N., et al., Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 2014. 15(1): p. 1929-1958.
65. Kotsiantis, S., D. Kanellopoulos, and P. Pintelas, Data preprocessing for supervised leaning. International Journal of Computer Science, 2006. 1(2): p. 111-117.
66. Rodríguez, C.K., A computational environment for data preprocessing in supervised classification. 2004: University of Puerto Rico, Mayaguez (Puerto Rico).
67. Hansen, L.K. and P. Salamon, Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990. 12(10): p. 993-1001.
68. Salimans, T. and D.P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. in Advances in neural information processing systems. 2016.
69. Shaheen, H., S. Agarwal, and P. Ranjan. MinMaxScaler Binary PSO for Feature Selection. in First International Conference on Sustainable Technologies for Computational Intelligence. 2020. Springer.
70. Stathakis, D., How many hidden layers and nodes? International Journal of Remote Sensing, 2009. 30(8): p. 2133-2147.
71. Benesty, J., et al., Pearson correlation coefficient, in Noise reduction in speech processing. 2009, Springer. p. 1-4.
72. Browne, M.W., Cross-validation methods. Journal of mathematical psychology, 2000. 44(1): p. 108-132.
73. Yurkin, M.A., et al., Systematic comparison of the discrete dipole approximation and the finite difference time domain method for large dielectric scatterers. Optics Express, 2007. 15(26): p. 17902-17911.
74. Draine, B.T., The Discrete-Dipole Approximation and Its Application to Interstellar Graphite Grains. The Astrophysical Journal, 1988. 333: p. 848.
75. Draine, B.T. and P.J. Flatau, Discrete-dipole approximation for scattering calculations. JOSA A, 1994. 11(4): p. 1491-1499.
76. Flatau, P. and B.T. Draine, Fast near field calculations in the discrete dipole approximation for regular rectilinear grids. Optics Express, 2012. 20(2): p. 1247-1252.
77. Goodman, J.J., B.T. Draine, and P.J. Flatau, Application of fast-Fourier-transform techniques to the discrete-dipole approximation. Optics Letters, 1991. 16(15): p. 1198-1200.
78. Penttilä, A., et al., Comparison between discrete dipole implementations and exact techniques. Journal of Quantitative Spectroscopy and Radiative Transfer, 2007. 106(1-3): p. 417-436.