臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 趙守恩
Author (English): CHAO, SHOU-EN
Title: 以自我校正式神經網路解釋外部因子對 PM2.5 預測的影響
Title (English): Interpretation of External Factors’ Impact on PM2.5 Forecasting by Self-Calibrated Neural Network
Advisors: 戴榮賦、尹邦嚴
Advisors (English): DAY, RONG-FUH; YIN, PENG-YENG
Oral defense committee: 陳同孝、林妙聰
Oral defense committee (English): CHEN, TUNG-SHOU; LIN, MIAO-TSUNG
Oral defense date: 2020-07-22
Degree: Master's
Institution: 國立暨南國際大學 (National Chi Nan University)
Department: 資訊管理學系 (Department of Information Management)
Discipline: Computer Science
Field: General Computer Science
Thesis type: Academic thesis
Publication year: 2021
Graduation academic year: 109 (ROC calendar)
Language: Chinese
Number of pages: 87
Keywords (Chinese): 細懸浮微粒、人工神經網路、時間序列分析、多來源資料、因子分析
Keywords (English): PM2.5; Artificial Neural Network; Time-Series Analysis; Multi-Source Data; Factor Analysis
Usage statistics:
  • Cited by: 1
  • Views: 322
  • Downloads: 36
  • Bookmarked: 0
In recent years, people have paid increasing attention to air quality, and the concentration of fine particulate matter (PM2.5) is one of the indicators used to evaluate it. This study analyzes historical air quality data from 11 monitoring stations in the central Taiwan air quality district; the results show that PM2.5 concentrations exhibit temporal correlation, and that the stations are spatially correlated with one another. This study proposes a neural network model that integrates multiple types of data, combining a time-series model with calibrators to forecast PM2.5 concentrations for the next 24 hours. The calibrators adjust the original forecast according to new external data; their inputs are the PM2.5 concentrations of the day before the forecast time, weather time-series data, and the satellite cloud image one hour before the forecast time, and their target value is the difference between the time-series model's forecast and the actual PM2.5 concentration. During training, the calibrators are trained on the outputs of the time-series model, which avoids the situation in which an overly deep neural network fails to learn (its weights cannot be updated effectively), and the influence of external factors can be analyzed by observing the relationship between the calibrators' inputs and outputs. The test results show that the calibrators reduce the model's overall forecast error and that the effects of some weather conditions on PM2.5 concentrations can be observed through the calibrators.
In recent years, air quality has become an issue of growing public concern, and fine particulate matter (PM2.5) concentration is one of the indicators used to evaluate it. We analyze the historical air quality data of the 11 monitoring supersites in the central Taiwan air quality district and find temporal and spatial correlations among the PM2.5 concentrations at these supersites. In this paper we build a neural network model that integrates several different types of data, combining a time-series model with calibration models to forecast PM2.5 concentrations for the next 24 hours. A calibrator adjusts the original forecast value according to external factors: we use PM2.5 concentrations and meteorological data as the input of one calibrator and satellite images as the input of another, and both calibrators take the difference between the time-series model's forecast value and the actual value as their target. Both calibrators are trained on the outputs of the time-series model, which avoids the problem of the weights of a deep neural network failing to update effectively. Moreover, the inputs and outputs of the calibrators can be examined to analyze the impacts of external factors. The test results show that the calibrators reduce the overall forecast error of the model and reveal the impact of some meteorological conditions on PM2.5 concentrations.
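To make the two-stage design concrete, below is a minimal sketch of how a residual calibrator can be attached to an already trained time-series forecaster, assuming a TensorFlow/Keras environment. The layer sizes, input shapes, and the build_* helper names are illustrative assumptions, not the configuration reported in the thesis.

from tensorflow.keras import layers, Model

# Stage 1: time-series forecaster (a bidirectional LSTM, in the spirit of the
# thesis's time-series model). Shapes are illustrative: 168 hourly steps of
# PM2.5/weather features -> PM2.5 for the next 24 hours.
def build_forecaster(n_steps=168, n_features=8, horizon=24):
    x_in = layers.Input(shape=(n_steps, n_features))
    h = layers.Bidirectional(layers.LSTM(64))(x_in)
    y_out = layers.Dense(horizon)(h)
    return Model(x_in, y_out, name="time_series_model")

# Stage 2: LSTM calibrator fed with external data (e.g., previous-day PM2.5
# and weather series) and trained to predict the forecaster's residual.
def build_lstm_calibrator(n_steps=24, n_features=8, horizon=24):
    c_in = layers.Input(shape=(n_steps, n_features))
    h = layers.LSTM(32)(c_in)
    corr = layers.Dense(horizon)(h)
    return Model(c_in, corr, name="lstm_calibrator")

forecaster = build_forecaster()
calibrator = build_lstm_calibrator()
forecaster.compile(optimizer="adam", loss="mse")
calibrator.compile(optimizer="adam", loss="mse")

# Training outline (x_ts: forecaster inputs, x_cal: calibrator inputs,
# y_true: actual PM2.5 for the next 24 hours):
#   forecaster.fit(x_ts, y_true, ...)
#   residual = y_true - forecaster.predict(x_ts)   # calibrator target
#   calibrator.fit(x_cal, residual, ...)
#   y_calibrated = forecaster.predict(x_ts) + calibrator.predict(x_cal)

Because the calibrator only ever sees the residual, the two networks stay shallow and are trained separately, which is the property the abstract credits with keeping the weights trainable; a second, convolutional calibrator could be attached to the satellite images in the same way, and its input-output relationship inspected to gauge the influence of each external factor.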
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Table of Contents iv
List of Figures vii
List of Tables x
Chapter 1 Introduction 1
Chapter 2 Literature Review 3
2.1 Artificial Neural Networks 3
2.1.1 Perceptron Neural Networks 3
2.1.2 Optimization of Neural Network Learning 4
2.1.3 Vanishing and Exploding Gradients 8
2.1.4 Deep Learning 9
2.1.4.1 CNN 10
2.1.4.2 LeNet 11
2.1.4.3 AlexNet 11
2.1.4.4 VGG-Net 12
2.1.4.5 GoogLeNet 12
2.1.4.6 ResNet 13
2.1.4.7 DenseNet 13
2.1.4.8 SENet 13
2.1.4.9 Recurrent Neural Networks 14
2.1.4.10 Long Short-Term Memory (LSTM) 15
2.1.4.11 Bidirectional RNN 16
2.2 Applications of Artificial Neural Networks to PM2.5 Forecasting 17
Chapter 3 Research Methods 23
3.1 Data 23
3.2 Tools 23
3.3 Spatial Relationships among Monitoring Stations 23
3.4 Time-Series Analysis 25
3.5 Data Preprocessing 28
3.6 Normalization 29
3.7 Self-Calibration Model 29
3.7.1 Initial Model 30
3.7.2 LSTM Calibrator 31
3.7.3 CNN Calibrator 32
3.8 Analysis of External Factor Effects 34
3.8.1 LSTM Calibrator 34
3.8.2 CNN Calibrator 34
Chapter 4 Experimental Results 35
4.1 Variable Selection for the Initial Model 35
4.2 Time-Series Model Selection 36
4.3 Input Time Steps and Number of Epochs for the Time-Series Model 37
4.4 Calibrator Parameters 38
4.5 Testing 40
4.6 Analysis of External Factor Effects 44
Chapter 5 Conclusions 74
References 75
I. Chinese References 75
II. English References 75
Appendices 81
Appendix 1 Infrared Satellite Cloud Images during the 2016-2018 Typhoon Periods 81

List of Figures
Figure 1 Schematic of a perceptron 3
Figure 2 Schematic of a multilayer perceptron 4
Figure 3 Error backpropagation 7
Figure 4 Schematic of the convolution operation 10
Figure 5 Schematic of max pooling 10
Figure 6 Schematic of an RNN 14
Figure 7 Schematic of a basic RNN cell 15
Figure 8 Schematic of an LSTM cell 16
Figure 9 Schematic of a BRNN 17
Figure 10 Locations of monitoring stations in the central Taiwan air quality district 24
Figure 11 Line chart of PM2.5 autoregression analysis (2016-2018) 26
Figure 12 Monthly mean PM2.5 (2016-2018) 26
Figure 13 Schematic of time-series decomposition 27
Figure 14 Schematic of wind-direction transformation 28
Figure 15 Flowchart of the self-calibration model 30
Figure 16 Time-series model 31
Figure 17 Flowchart of the time-series calibrator 32
Figure 18 Flowchart of the image-data calibrator 33
Figure 19 Bidirectional LSTM performance at different numbers of epochs 37
Figure 20 Time-series model performance under different parameter settings 37
Figure 21 LSTM calibrator performance at different numbers of epochs 39
Figure 22 CNN calibrator performance at different numbers of epochs 39
Figure 23 Overall calibrator performance 40
Figure 24 Test-set residual distributions and calibrator outputs 45
Figure 25 Annual mean PM2.5 (2016-2019) 46
Figure 26 Training-set residual distribution of the first time-series model 46
Figure 27 Training-set residual distribution of the second time-series model 47
Figure 28 Training-set residual distribution of the third time-series model 47
Figure 29 Training-set residual distribution of the fourth time-series model 48
Figure 30 Wind-direction analysis 49
Figure 31 Sensitivity analysis of non-directional factors (1) 50
Figure 32 Sensitivity analysis of non-directional factors (2) 51
Figure 33 Sensitivity analysis of non-directional factors (3) 52
Figure 34 PM2.5 line chart during heavy rainfall at the Puli station 53
Figure 35 PM2.5 line chart during heavy rainfall at the Dali station 53
Figure 36 PM2.5 line chart during heavy rainfall at the Erlin station 54
Figure 37 PM2.5 line chart during heavy rainfall at the Zhushan station 54
Figure 38 PM2.5 line chart during heavy rainfall at the Xitun station 55
Figure 39 PM2.5 line chart during heavy rainfall at the Shalu station 55
Figure 40 PM2.5 line chart during heavy rainfall at the Zhongming station 56
Figure 41 PM2.5 line chart during heavy rainfall at the Nantou station 56
Figure 42 Calibrator training-set performance during typhoon warning periods 57
Figure 43 Forecast line chart during Typhoon Nepartak (2016/07/06 - 2016/07/10) 60
Figure 44 Forecast line chart during Typhoon Meranti (2016/09/12 - 2016/09/16) 61
Figure 45 Forecast line chart during Typhoon Malakas (2016/09/15 - 2016/09/19) 62
Figure 46 Forecast line chart during Typhoon Megi (2016/09/25 - 2016/09/29) 63
Figure 47 Forecast line chart during Typhoon Aere (2016/10/05 - 2016/10/07) 64
Figure 48 Forecast line chart during Typhoon Nesat (2017/07/28 - 2017/07/31) 65
Figure 49 Forecast line chart during Typhoon Haitang (2017/07/29 - 2017/08/01) 66
Figure 50 Forecast line chart during Typhoon Hato (2017/08/20 - 2017/08/23) 67
Figure 51 Forecast line chart during Typhoon Guchol (2017/09/06 - 2017/09/08) 68
Figure 52 Forecast line chart during Typhoon Talim (2017/09/12 - 2017/09/15) 69
Figure 53 Forecast line chart during Typhoon Maria (2018/07/09 - 2018/07/12) 70
Figure 54 Forecast line chart during Typhoon Mangkhut (2018/09/14 - 2018/09/16) 71
Figure 55 Cumulative wind conditions during typhoon periods (1) 72
Figure 56 Cumulative wind conditions during typhoon periods (2) 73
Appendix Figure 1 Typhoon Nepartak 81
Appendix Figure 2 Typhoon Meranti 82
Appendix Figure 3 Typhoon Malakas 82
Appendix Figure 4 Typhoon Megi 83
Appendix Figure 5 Typhoon Aere 83
Appendix Figure 6 Typhoons Nesat and Haitang 84
Appendix Figure 7 Typhoon Hato 85
Appendix Figure 8 Typhoon Guchol 85
Appendix Figure 9 Typhoon Talim 86
Appendix Figure 10 Typhoon Maria 86
Appendix Figure 11 Typhoon Mangkhut 87

List of Tables
Table 1 PM2.5 correlations among monitoring stations (2016-2018) 24
Table 2 Variable combinations 36
Table 3 Model performance for each variable combination 36
Table 4 Validation-set performance of the time-series models 38
Table 5 Model test-set performance (1) 41
Table 6 Model test-set performance (2) 42
Table 7 Model test-set performance (3) 43


