National Digital Library of Theses and Dissertations in Taiwan

Detailed Record
Student: Cheng-Hsiang Hu (胡程翔)
Thesis Title: Forecasting Time Series for Electricity Consumption Data Using Dynamic Weighted Ensemble Model
Advisor: Yi-Ling Chen (陳怡伶)
Committee Members: Bi-Ru Dai (戴碧如), Yu-Fen Chen (陳玉芬)
Oral Defense Date: 2020-01-31
Degree: Master
University: National Taiwan University of Science and Technology
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2020
Academic Year of Graduation: 108
Language: English
Number of Pages: 49
Keywords: Electricity Load Forecasting, Data Mining, Time Series Forecasting, Univariate, Ensemble Model
Statistics:
  • Times cited: 0
  • Views: 242
  • Downloads: 1
  • Bookmarked: 0
Electricity load forecasting has been a popular research topic in recent years. In this study, we predict electricity consumption using only past power data (i.e., without weather information or other features). We survey existing univariate methods such as MLP-based, CNN-based, XGBoost-based, RF-based, and EN3-bestK. However, these methods do not perform well because the range of power values varies widely. In addition, the existing ensemble method is not effective on datasets with large ranges of power values. Therefore, we present an electricity consumption forecasting system called the Dynamic Weighted Ensemble Model (DWEM). DWEM consists of three stages. First, we provide three types of data serialization in data preprocessing. Second, we train four types of models (MLP-based, CNN-based, XGBoost-based, and RF-based) as base models for the later ensemble. Finally, we combine the four types of models into one ensemble model using the proposed two-phase ensemble: the first phase ensembles models trained with the same algorithm but different serializations, and the second phase ensembles models from different algorithms. The two-phase ensemble is designed to dynamically adjust the weights based on the recent performance of the corresponding models. Moreover, we observe that properly handling missing values is an important factor in system performance, so we propose a statistical method to estimate missing values. We compare DWEM with various state-of-the-art methods. The results show that DWEM outperforms the other methods by 26.65% and 24.34% on average in terms of MAPE and MAE, respectively.
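The dynamic weighting idea described in the abstract, i.e. combining base models with weights that track their recent accuracy, can be sketched as below. The inverse-error weighting rule, the epsilon constant, and all example numbers are assumptions for illustration; they are not necessarily the exact formulation used in the thesis.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

def dynamic_weights(recent_errors):
    """Weights inversely proportional to each model's recent error
    (models that did well recently get larger weights)."""
    inv = 1.0 / (np.asarray(recent_errors, dtype=float) + 1e-8)
    return inv / inv.sum()

def weighted_forecast(predictions, weights):
    """Combine per-model forecasts (n_models x horizon) into one series."""
    return np.average(np.asarray(predictions, dtype=float), axis=0, weights=weights)

# Forecasts of the next 3 hours from four base models (illustrative numbers)
preds = [[100, 110, 120],   # MLP-based
         [ 90, 100, 110],   # CNN-based
         [105, 115, 125],   # XGBoost-based
         [ 95, 105, 115]]   # RF-based
# Each model's MAPE on the most recent evaluation window (illustrative)
recent = [5.0, 10.0, 5.0, 10.0]

w = dynamic_weights(recent)
combined = weighted_forecast(preds, w)
```

As the errors on the evaluation window are refreshed over time, the weights shift toward whichever base models have been most accurate lately, which is the behavior the two-phase ensemble relies on.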
Abstract in Chinese . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Abstract in English . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Challenges and Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 The Outline of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Related Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.1 Data Serialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.1 Serialization1 (for day-to-hour prediction) . . . . . . . . . . . . . 9
3.1.2 Serialization2 (for hour-to-hour prediction) . . . . . . . . . . . . 10
3.1.3 Serialization3 (for day-to-day prediction) . . . . . . . . . . . . . 11
3.2 Base Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2.1 MLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2.2 Sel-CNN (Selected CNN) . . . . . . . . . . . . . . . . . . . . . 12
3.2.3 XGBoost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2.4 Random Forest . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Two Phase Ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4 Experiments and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.1 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Missing Value Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.3 Methods Used for Comparison . . . . . . . . . . . . . . . . . . . . . . . 21
4.4 Experiment Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.4.1 Australia dataset . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.4.2 Taiwan dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.5 Evaluation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.6 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.6.1 Comparison of different types of data serialization . . . . . . . . 26
4.6.2 Comparison of various ensembles with different types of data serializations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.6.3 Comparison results in Australian dataset . . . . . . . . . . . . . . 28
4.6.4 Comparison results in Taiwanese dataset . . . . . . . . . . . . . 29
4.6.5 Comparison of running time in Australian and Taiwanese dataset . 30
4.6.6 Comparison of different methods for estimating missing values in training data for DWEM . . . . . . . . . . . . . . . . . . . . . . 30
5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36