National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Author: 高子庭 (Tzu-Ting Kao)
Title: Reducing Forecasting Error under Hidden Markov Model by Recurrent Neural Networks
Advisor: 傅承德 (Cheng-Der Fuh)
Degree: Master's
Institution: National Central University (國立中央大學)
Department: Graduate Institute of Statistics (統計研究所)
Discipline: Mathematics and Statistics
Field: Statistics
Document type: Academic thesis
Year of publication: 2018
Academic year of graduation: 106
Language: English
Pages: 66
Keywords (Chinese): 人工類神經網絡, 遞迴類神經網絡, 隱馬可夫模型, 馬可夫轉換模型, 預測誤差, 監督式學習演算法
Keywords (English): artificial neural networks, recurrent neural networks, hidden Markov model, Markov switching model, forecasting error, supervised learning algorithm
Abstract (Chinese): In recent years, artificial neural networks have become one of the most popular machine learning methods because of their strong performance across many application domains. We therefore want to combine neural networks with a traditional statistical model, giving a method that captures the advantages of both. In this thesis, the statistical model of interest is the hidden Markov model, and the neural network is the recurrent neural network. Because we can show that, in a classification problem, the output of a recurrent neural network approximates a posterior probability, we feed this probability into the HMM training algorithm to improve the accuracy of the parameter estimates. One advantage of this training algorithm using recurrent neural networks is that it turns the original algorithm from unsupervised into supervised, so the new algorithm can incorporate the class-label information in the data. In both simulation and real-data analysis, the new algorithm not only improves the accuracy of the parameter estimates but also reduces their standard errors.
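The claim that a classifier trained with cross-entropy approximates the Bayesian posterior can be illustrated numerically. The sketch below is illustrative only, not the thesis' RNN: it fits a single-neuron logistic model (a hypothetical stand-in for the network) by gradient descent on simulated two-class Gaussian data where the true posterior is known in closed form, and checks that the learned output tracks that posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class data with a known posterior: x | y=0 ~ N(-1, 1),
# x | y=1 ~ N(+1, 1), equal priors, so P(y=1 | x) = sigmoid(2x).
n = 4000
y = rng.integers(0, 2, size=n)
x = rng.normal(2.0 * y - 1.0, 1.0)          # class means -1 and +1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-neuron "network": p(x) = sigmoid(w0 + w1 * x), trained by
# gradient descent on the mean cross-entropy loss.
w0, w1 = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    p = sigmoid(w0 + w1 * x)
    g = p - y                                # d(loss)/d(logit)
    w0 -= lr * g.mean()
    w1 -= lr * (g * x).mean()

# Compare the trained output with the true posterior on a grid:
# the two curves should be close, i.e. the cross-entropy minimizer
# estimates P(y | x).
grid = np.linspace(-3, 3, 61)
learned = sigmoid(w0 + w1 * grid)
true_post = sigmoid(2.0 * grid)
print(np.abs(learned - true_post).max())    # small deviation
```

The same argument extends per time step to a recurrent network with a softmax output layer, which is the form used in the hybrid training discussed here.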
Abstract (English): In recent years, artificial neural networks have become a very popular machine learning method because of their high performance. We therefore want to combine neural networks with a traditional statistical model and give a method that captures the advantages of both. Here the statistical model we are interested in is the hidden Markov model, and the neural network we choose is the recurrent neural network. Since the output of a recurrent neural network can be shown to approximate a posterior probability in a classification task, we put this probability into the training process of the hidden Markov model to improve the accuracy of the parameter estimators. One advantage of this algorithm is that it changes the original training algorithm from unsupervised to supervised, so label information in the data can be brought into the training process. Simulation and real-data analysis show that this combined training process not only improves the accuracy of parameter estimation but also reduces its standard error.
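As a rough sketch of the hybrid idea in the abstract (the function name, the numpy implementation, and the prior-division step below are assumptions following the standard hybrid connectionist-HMM recipe, not necessarily the exact algorithm of the thesis): network softmax posteriors P(state | observation) are divided by the state priors to obtain scaled likelihoods, which replace the emission terms in the HMM forward recursion.

```python
import numpy as np

def forward_with_rnn_posteriors(rnn_posteriors, state_priors, trans, init):
    """Forward algorithm using network posteriors as scaled emission scores.

    rnn_posteriors : (T, N) array, per-step softmax outputs P(state | obs)
    state_priors   : (N,) array, empirical class frequencies P(state)
    trans          : (N, N) transition matrix, trans[i, j] = P(j | i)
    init           : (N,) initial state distribution
    Returns the (T, N) forward probabilities, normalized at each step.
    """
    # Bayes' rule: P(obs | state) is proportional to P(state | obs) / P(state);
    # the constant P(obs) cancels once each row of alpha is renormalized.
    scaled = rnn_posteriors / state_priors
    T, N = scaled.shape
    alpha = np.zeros((T, N))
    alpha[0] = init * scaled[0]
    alpha[0] /= alpha[0].sum()               # rescale to avoid underflow
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * scaled[t]
        alpha[t] /= alpha[t].sum()
    return alpha
```

Because the network is trained on labeled states, the emission scores come from a supervised model, which is the sense in which the combined procedure turns the otherwise unsupervised HMM training into a supervised one.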
Contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
1 Introduction 1
2 Background 3
2.1 Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1.1 Recurrent Neural Networks (RNNs) . . . . . . . . . . . . . . . 3
2.1.2 Back Propagation Through Time . . . . . . . . . . . . . . . . 8
2.1.3 The challenge of long-term dependencies . . . . . . . . . . . . 11
2.1.4 Long short-term memory . . . . . . . . . . . . . . . . . . . . 12
2.2 Hidden Markov Model (HMM) . . . . . . . . . . . . . . . . . . . . . 18
2.2.1 Elements of HMMs . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.2 The Three Problems for HMMs . . . . . . . . . . . . . . . . . 22
2.2.3 Solutions of the three problems of HMMs . . . . . . . . . . . 24
3 Neural Networks in HMMs 30
3.1 Output of neural network on classification task . . . . . . . . . . . . 30
3.1.1 Classification and Bayesian probabilities . . . . . . . . . . . . 30
3.1.2 Cost function . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2 Discriminant HMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.1 Multi-Layer Perceptron with sequential input . . . . . . . . . 35
3.3 Combine RNNs and HMM . . . . . . . . . . . . . . . . . . . . . . . . 36
4 Simulation 40
4.1 Model and parameters setting . . . . . . . . . . . . . . . . . . . . . . 40
4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5 Real data analysis 48
5.1 Overview of data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6 Conclusion 51
References 53