|
1. Li, T., et al., A radiative cooling structural material. Science, 2019. 364(6442): p. 760-763. 2. Raman, A.P., et al., Passive radiative cooling below ambient air temperature under direct sunlight. Nature, 2014. 515(7528): p. 540-544. 3. Shi, N.N., et al., Nanostructured fibers as a versatile photonic platform: radiative cooling and waveguiding through transverse Anderson localization. Light: Science & Applications, 2018. 7(1): p. 1-9. 4. Zhai, Y., et al., Scalable-manufactured randomized glass-polymer hybrid metamaterial for daytime radiative cooling. Science, 2017. 355(6329): p. 1062. 5. Zhou, L., et al., A polydimethylsiloxane-coated metal structure for all-day radiative cooling. Nature Sustainability, 2019. 2(8): p. 718-724. 6. Shi, N.N., et al., Keeping cool: Enhanced optical reflection and radiative heat dissipation in Saharan silver ants. Science, 2015. 349(6245): p. 298-301. 7. Liu, Y., et al., Materials discovery and design using machine learning. Journal of Materiomics, 2017. 3(3): p. 159-177. 8. Schmidt, J., et al., Recent advances and applications of machine learning in solid-state materials science. npj Computational Materials, 2019. 5(1): p. 1-36. 9. Stein, H.S., et al., Machine learning of optical properties of materials–predicting spectra from images and images from spectra. Chemical science, 2019. 10(1): p. 47-55. 10. Pilania, G., et al., Accelerating materials property predictions using machine learning. Sci Rep, 2013. 3: p. 2810. 11. Bengio, Y., et al. Greedy layer-wise training of deep networks. in Advances in neural information processing systems. 2007. 12. Hinton, G.E. and R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks. science, 2006. 313(5786): p. 504-507. 13. Chen, M., et al., Marginalized denoising autoencoders for domain adaptation. arXiv preprint arXiv:1206.4683, 2012. 14. Cho, K., Boltzmann machines and denoising autoencoders for image denoising. arXiv preprint arXiv:1301.3468, 2013. 15. Cho, K. Simple sparsification improves sparse denoising autoencoders in denoising highly corrupted images. in International Conference on Machine Learning. 2013. 16. Gondara, L. Medical image denoising using convolutional denoising autoencoders. in 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). 2016. IEEE. 17. Vincent, P., A connection between score matching and denoising autoencoders. Neural computation, 2011. 23(7): p. 1661-1674. 18. Vincent, P., et al. Extracting and composing robust features with denoising autoencoders. in Proceedings of the 25th international conference on Machine learning. 2008. 19. Vincent, P., et al., Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 2010. 11(12). 20. Xing, C., L. Ma, and X. Yang, Stacked denoise autoencoder based feature extraction and classification for hyperspectral images. Journal of Sensors, 2016. 2016. 21. Akcay, S., A. Atapour-Abarghouei, and T.P. Breckon. GANomaly: Semi-supervised Anomaly Detection via Adversarial Training. in Computer Vision – ACCV 2018. 2019. Cham: Springer International Publishing. 22. An, J. and S. Cho, Variational autoencoder based anomaly detection using reconstruction probability. Special Lecture on IE, 2015. 2(1): p. 1-18. 23. Ribeiro, M., A.E. Lazzaretti, and H.S. Lopes, A study of deep convolutional auto-encoders for anomaly detection in videos. Pattern Recognition Letters, 2018. 105: p. 13-22. 24. Sakurada, M. and T. Yairi, Anomaly Detection Using Autoencoders with Nonlinear Dimensionality Reduction, in Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis. 2014, Association for Computing Machinery: Gold Coast, Australia QLD, Australia. p. 4–11. 25. Zhou, C. and R.C. Paffenroth, Anomaly Detection with Robust Deep Autoencoders, in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017, Association for Computing Machinery: Halifax, NS, Canada. p. 665–674. 26. Lee, Y.-C., Y.-C. Tseng, and H.-L. Chen, Single type of nanocavity structure enhances light outcouplings from various two-dimensional materials by over 100-fold. ACS Photonics, 2017. 4(1): p. 93-105. 27. Wang, J., H. He, and D.V. Prokhorov, A folded neural network autoencoder for dimensionality reduction. Procedia Computer Science, 2012. 13: p. 120-127. 28. Wang, W., et al. Generalized autoencoder: A neural network framework for dimensionality reduction. in Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2014. 29. Wang, Y., H. Yao, and S. Zhao, Auto-encoder based dimensionality reduction. Neurocomputing, 2016. 184: p. 232-242. 30. Zabalza, J., et al., Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging. Neurocomputing, 2016. 185: p. 1-10. 31. Tsai, H.-K. and M.W. Miles, Optical interference display panel. 2007, Google Patents. 32. Zou, X., L.J. Wang, and L. Mandel, Induced coherence and indistinguishability in optical interference. Physical review letters, 1991. 67(3): p. 318. 33. Jackson, R. and V. Zamlynny, Optimization of electrochemical infrared reflection absorption spectroscopy using Fresnel equations. Electrochimica acta, 2008. 53(23): p. 6768-6777. 34. Skaar, J., Fresnel equations and the refractive index of active media. Physical Review E, 2006. 73(2): p. 026605. 35. Kovalenko, S., Descartes-Snell law of refraction with absorption. Semiconductor Physics Quantum Electronics & Optoelectronics, 2001. 36. Walpita, L., Solutions for planar optical waveguide equations by selecting zero elements in a characteristic matrix. JOSA A, 1985. 2(4): p. 595-602. 37. Wöhler, H., et al., Characteristic matrix method for stratified anisotropic media: optical properties of special configurations. JOSA A, 1991. 8(3): p. 536-540. 38. Chýlek, P., et al., Scattering of electromagnetic waves by composite spherical particles: experiment and effective medium approximations. Applied Optics, 1988. 27(12): p. 2396-2404. 39. Stroud, D., The effective medium approximations: Some recent developments. Superlattices and microstructures, 1998. 23(3-4): p. 567-573. 40. Macleod, H.A., Thin-film optical filters. 2017: CRC press. 41. Markel, V.A., Introduction to the Maxwell Garnett approximation: tutorial. JOSA A, 2016. 33(7): p. 1244-1256. 42. Niklasson, G.A., C.G. Granqvist, and O. Hunderi, Effective medium models for the optical properties of inhomogeneous materials. Applied Optics, 1981. 20(1): p. 26-30. 43. Bengio, Y., Practical Recommendations for Gradient-Based Training of Deep Architectures, in Neural Networks: Tricks of the Trade: Second Edition, G. Montavon, G.B. Orr, and K.-R. Müller, Editors. 2012, Springer Berlin Heidelberg: Berlin, Heidelberg. p. 437-478. 44. Curry, B. and D.E. Rumelhart, MSnet: A Neural Network which Classifies Mass Spectra. Tetrahedron Computer Methodology, 1990. 3(3): p. 213-237. 45. Folkes, S.R., O. Lahav, and S.J. Maddox, An artificial neural network approach to the classification of galaxy spectra. Monthly Notices of the Royal Astronomical Society, 1996. 283: p. 651. 46. Glorot, X., A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. in Proceedings of the fourteenth international conference on artificial intelligence and statistics. 2011. 47. Kamath, A., et al., Neural networks vs Gaussian process regression for representing potential energy surfaces: A comparative study of fit quality and vibrational spectrum accuracy. The Journal of chemical physics, 2018. 148(24): p. 241702. 48. Kobayashi, R., et al., Neural network potential for Al-Mg-Si alloys. Physical Review Materials, 2017. 1(5): p. 053604. 49. Lee, S.C. and S.W. Han, Neural-network-based models for generating artificial earthquakes and response spectra. Computers & structures, 2002. 80(20-21): p. 1627-1638. 50. Park, W.B., et al., Classification of crystal structure using a convolutional neural network. IUCrJ, 2017. 4(4): p. 486-494. 51. Tanabe, K., T. Tamura, and H. Uesaka, Neural Network System for the Identification of Infrared Spectra. Applied Spectroscopy, 1992. 46(5): p. 807-810. 52. Hecht-Nielsen, R., Theory of the backpropagation neural network, in Neural networks for perception. 1992, Elsevier. p. 65-93. 53. Ito, Y., Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory. Neural Networks, 1991. 4(3): p. 385-394. 54. Yonaba, H., F. Anctil, and V. Fortin, Comparing sigmoid transfer functions for neural network multistep ahead streamflow forecasting. Journal of Hydrologic Engineering, 2010. 15(4): p. 275-283. 55. Attwell, D. and S.B. Laughlin, An Energy Budget for Signaling in the Grey Matter of the Brain. Journal of Cerebral Blood Flow & Metabolism, 2001. 21(10): p. 1133-1145. 56. Agarap, A.F., Deep learning using rectified linear units (relu). arXiv preprint arXiv:1803.08375, 2018. 57. Willmott, C.J. and K. Matsuura, Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate research, 2005. 30(1): p. 79-82. 58. Bottou, L., Large-scale machine learning with stochastic gradient descent, in Proceedings of COMPSTAT'2010. 2010, Springer. p. 177-186. 59. Ruder, S., An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016. 60. Wilson, A.C., B. Recht, and M.I. Jordan, A lyapunov analysis of momentum methods in optimization. arXiv preprint arXiv:1611.02635, 2016. 61. Mukkamala, M.C. and M. Hein, Variants of rmsprop and adagrad with logarithmic regret bounds. arXiv preprint arXiv:1706.05507, 2017. 62. Kingma, D.P. and J. Ba, Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 63. Ioffe, S. and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 64. Srivastava, N., et al., Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 2014. 15(1): p. 1929-1958. 65. Kotsiantis, S., D. Kanellopoulos, and P. Pintelas, Data preprocessing for supervised leaning. International Journal of Computer Science, 2006. 1(2): p. 111-117. 66. Rodríguez, C.K., A computational environment for data preprocessing in supervised classification. 2004: University of Puerto Rico, Mayaguez (Puerto Rico). 67. Hansen, L.K. and P. Salamon, Neural network ensembles. IEEE transactions on pattern analysis and machine intelligence, 1990. 12(10): p. 993-1001. 68. Salimans, T. and D.P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. in Advances in neural information processing systems. 2016. 69. Shaheen, H., S. Agarwal, and P. Ranjan. MinMaxScaler Binary PSO for Feature Selection. in First International Conference on Sustainable Technologies for Computational Intelligence. 2020. Springer. 70. Stathakis, D., How many hidden layers and nodes? International Journal of Remote Sensing, 2009. 30(8): p. 2133-2147. 71. Benesty, J., et al., Pearson correlation coefficient, in Noise reduction in speech processing. 2009, Springer. p. 1-4. 72. Browne, M.W., Cross-validation methods. Journal of mathematical psychology, 2000. 44(1): p. 108-132. 73. Yurkin, M.A., et al., Systematic comparison of the discrete dipole approximation and the finite difference time domain method for large dielectric scatterers. Optics Express, 2007. 15(26): p. 17902-17911. 74. Draine, B.T., The Discrete-Dipole Approximation and Its Application to Interstellar Graphite Grains. The Astrophysical Journal, 1988. 333: p. 848. 75. Draine, B.T. and P.J. Flatau, Discrete-dipole approximation for scattering calculations. Josa a, 1994. 11(4): p. 1491-1499. 76. Flatau, P. and B.T. Draine, Fast near field calculations in the discrete dipole approximation for regular rectilinear grids. Optics express, 2012. 20(2): p. 1247-1252. 77. Goodman, J.J., B.T. Draine, and P.J. Flatau, Application of fast-Fourier-transform techniques to the discrete-dipole approximation. Optics Letters, 1991. 16(15): p. 1198-1200. 78. Penttilä, A., et al., Comparison between discrete dipole implementations and exact techniques. Journal of Quantitative Spectroscopy and Radiative Transfer, 2007. 106(1-3): p. 417-436.
|