References

I. Chinese references

1. 王建楠、李璧伊 (2014). 細懸浮微粒暴露與心血管疾病:系統性回顧及整合分析 [Fine particulate matter exposure and cardiovascular disease: A systematic review and meta-analysis]. 中華職業醫學雜誌, 21(4), 193-204.

II. English references

1. Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386-408.
2. Dongare, A. D., Kharde, R. R., & Kachare, A. D. (2012). Introduction to artificial neural network. International Journal of Engineering and Innovative Technology (IJEIT), 2(1), 189-194.
3. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.
4. Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.
5. Hochreiter, S. (1998). The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(2), 107-116.
6. Philipp, G., Song, D., & Carbonell, J. G. (2017). The exploding gradient problem demystified: Definition, prevalence, impact, origin, tradeoffs, and solutions. arXiv preprint arXiv:1712.05577.
7. Polyak, B. T. (1964). Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5), 1-17.
8. Nesterov, Y. (1983). A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR, 269, 543-547.
9. Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12, 2121-2159.
10. Hinton, G., Srivastava, N., & Swersky, K. (2012). Overview of mini-batch gradient descent (Lecture 6a). Coursera course: Neural Networks for Machine Learning.
11. Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
12. Dozat, T. (2016). Incorporating Nesterov momentum into Adam.
13. Nair, V., & Hinton, G. E. (2010). Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10) (pp. 807-814).
14. Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9(1), 147-169.
15. Smolensky, P. (1986). Information processing in dynamical systems: Foundations of harmony theory (Report No. CU-CS-321-86). University of Colorado at Boulder, Department of Computer Science.
16. Carreira-Perpiñán, M. A., & Hinton, G. E. (2005, January). On contrastive divergence learning. In AISTATS (Vol. 10, pp. 33-40).
17. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507.
18. Bengio, Y., Simard, P., & Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157-166.
19. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
20. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
21. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
22. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-9).
23. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
24. Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4700-4708).
25. Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7132-7141).
26. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
27. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
28. Schuster, M., & Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11), 2673-2681.
29. Ordieres, J. B., Vergara, E. P., Capuz, R. S., & Salazar, R. E. (2005). Neural network prediction model for fine particulate matter (PM2.5) on the US-Mexico border in El Paso (Texas) and Ciudad Juárez (Chihuahua). Environmental Modelling & Software, 20(5), 547-559.
30. Hooyberghs, J., Mensink, C., Dumont, G., Fierens, F., & Brasseur, O. (2005). A neural network forecast for daily average PM10 concentrations in Belgium. Atmospheric Environment, 39(18), 3279-3289.
31. Mao, X., Shen, T., & Feng, X. (2017). Prediction of hourly ground-level PM2.5 concentrations 3 days in advance using neural networks with satellite data in eastern China. Atmospheric Pollution Research, 8(6), 1005-1015.
32. Li, T., Shen, H., Yuan, Q., Zhang, X., & Zhang, L. (2017). Estimating ground-level PM2.5 by fusing satellite and station observations: A geo-intelligent deep learning approach. Geophysical Research Letters, 44(23), 11,985-11,993.
33. Perez, P., & Menares, C. (2018). Forecasting of hourly PM2.5 in south-west zone in Santiago de Chile. Aerosol and Air Quality Research, 18, 2666-2679.
34. Tsai, Y. T., Zeng, Y. R., & Chang, Y. S. (2018, August). Air pollution forecasting using RNN with LSTM. In 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech) (pp. 1074-1079). IEEE.
35. Huang, C. J., & Kuo, P. H. (2018). A deep CNN-LSTM model for particulate matter (PM2.5) forecasting in smart cities. Sensors, 18(7), 2220.
36. Qin, D., Yu, J., Zou, G., Yong, R., Zhao, Q., & Zhang, B. (2019). A novel combined prediction scheme based on CNN and LSTM for urban PM2.5 concentration. IEEE Access, 7, 20050-20059.
37. Lee, S., & Shin, J. (2019). Hybrid model of convolutional LSTM and CNN to predict particulate matter. International Journal of Information and Electronics Engineering, 9(1).
38. Wen, C., Liu, S., Yao, X., Peng, L., Li, X., Hu, Y., & Chi, T. (2019). A novel spatiotemporal convolutional long short-term neural network for air pollution prediction. Science of the Total Environment, 654, 1091-1099.
39. Qi, Y., Li, Q., Karimian, H., & Liu, D. (2019). A hybrid model for spatiotemporal forecasting of PM2.5 based on graph convolutional neural network and long short-term memory. Science of the Total Environment, 664, 1-10.
40. Kowalski, P. A., Sapała, K., & Warchałowski, W. (2020). PM10 forecasting through applying convolution neural network techniques. International Journal of Environmental Impacts, 3(1), 31-43.
41. Li, S., Xie, G., Ren, J., Guo, L., Yang, Y., & Xu, X. (2020). Urban PM2.5 concentration prediction via attention-based CNN-LSTM. Applied Sciences, 10, 1953.
42. Xayasouk, T., Lee, H., & Lee, G. (2020). Air pollution prediction using long short-term memory (LSTM) and deep autoencoder (DAE) models. Sustainability, 12(6), 2570.
43. Zhang, Q., Lam, J. C., Li, V. O., & Han, Y. (2020). Deep-AIR: A hybrid CNN-LSTM framework for fine-grained air pollution forecast. arXiv preprint arXiv:2001.11957.
44. Knapp, K. R. (2008). Scientific data stewardship of International Satellite Cloud Climatology Project B1 global geostationary observations. Journal of Applied Remote Sensing, 2(1), 023548.
45. Box, G. E., Jenkins, G. M., & Reinsel, G. C. (2011). Time Series Analysis: Forecasting and Control (Vol. 734). John Wiley & Sons.
46. Zhang, G. P., & Qi, M. (2005). Neural network forecasting for seasonal and trend time series. European Journal of Operational Research, 160(2), 501-514.