Author: Maio-Sheng Feng (馮淼聖)
Title: Study on Using Recursive Least Squares in Neural Fuzzy System (在模糊類神經網路中遞迴最小平方法使用之研究)
Advisor: Shun-Feng Su (蘇順豐)
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Department of Electrical Engineering
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Document Type: Academic thesis
Year of Publication: 2002
Graduation Academic Year: 90 (2001-2002)
Language: English
Number of Pages: 66
Keywords: Recursive least squares; Covariance matrix

Generally, the learning algorithm used for neural fuzzy networks is a steepest-descent-like algorithm, the so-called backpropagation (BP) learning algorithm. The BP algorithm, however, suffers from slow convergence and can become trapped in local optima. With TSK fuzzy models, whose consequent part is a linear function, the recursive least squares (RLS) algorithm can instead be used for parameter identification, and RLS has recently been applied widely to neural networks and neural fuzzy systems. Several problems nevertheless remain in its use, such as overfitting, an inability to handle time-varying parameters, and heavy matrix computation as the number of rules or inputs grows. In this study, we seek a good modification of RLS for parameter learning in the SONFIN. We propose to give each fuzzy rule its own covariance matrix, ignoring the covariance terms between rules, and to use a variable forgetting factor to cope with the time-varying behavior of the system. Furthermore, when noise or outliers exist in the training data, the M-estimation concept is embedded into the RLS algorithm to restrain the effect of outliers. Simulation results show that the proposed RLS indeed speeds up learning and alleviates the problems encountered when using RLS, and that the proposed variable forgetting factor not only handles time-varying systems properly but also avoids the covariance explosion phenomenon.
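
For reference, the sketch below illustrates in Python/NumPy the kind of recursion the abstract refers to: a single RLS step applied to one rule's consequent parameters, using that rule's own covariance matrix, a forgetting factor, and a Huber-style M-estimation weight to down-weight large residuals. It is a generic, textbook-style sketch under those assumptions; the function and parameter names are illustrative, and the thesis's specific variable-forgetting-factor schedule is not reproduced here.

import numpy as np

def rls_rule_update(theta, P, phi, y, lam=0.98, huber_c=1.345):
    # theta   : (n,) consequent parameters of one TSK rule
    # P       : (n, n) covariance matrix kept separately for this rule
    # phi     : (n,) regressor for this rule (e.g., firing strength times [1, x1, ..., xm])
    # y       : scalar target output
    # lam     : forgetting factor, 0 < lam <= 1 (smaller values discount old data faster)
    # huber_c : Huber threshold for the M-estimation weight (illustrative value)
    e = y - phi @ theta                                  # a priori prediction error
    w = 1.0 if abs(e) <= huber_c else huber_c / abs(e)   # bounded-influence (Huber) weight
    P_phi = P @ phi
    k = (w * P_phi) / (lam + w * (phi @ P_phi))          # weighted gain vector
    theta = theta + k * e                                # parameter update
    P = (P - np.outer(k, P_phi)) / lam                   # covariance update with forgetting
    return theta, P

Keeping a separate, small covariance matrix per rule amounts to a block-diagonal approximation of the full covariance matrix, which is where the saving in matrix computation comes from. Note also that with lam < 1 and little excitation, the division by lam lets P grow from step to step; this is the covariance explosion that a well-designed variable forgetting factor is meant to avoid.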

Chapter 1 Introduction
Chapter 2 General Description of Least Squares
2-1 Introduction
2-2 Standard Recursive Least Squares
2-3 Recursive Least Squares with Forgetting Factor
2-4 Robust Least Squares
2-5 Weight Decay Least Squares
Chapter 3 General Description of Neural Fuzzy Systems
3-1 Introduction
3-2 ANFIS: Adaptive-Network-Based Fuzzy Inference System
3-2-1 Structure of the ANFIS
3-2-2 Learning Algorithm for the ANFIS
3-3 Neural Fuzzy System with Structure Learning
3-3-1 Structure of the SONFIN
3-3-2 Learning Algorithm for the SONFIN
Chapter 4 Some Problems and the Used Examples
4-1 Introduction
4-2 The Dimensionality Problem of the Covariance Matrix
4-3 Variable Forgetting Factor
4-4 The Used Examples
Chapter 5 Study on Structure Learning for SONFIN
5-1 Introduction
5-2 The Modified RLS for SONFIN
5-2-1 Variable Forgetting Factor
5-2-2 Matrix Computational Complexity
5-3 The Hybrid Learning
5-4 Comparison of the Performance in the Learning Phase
5-4-1 The Learning Performance of the Two-Variable Sinc Function
5-4-2 The Learning Performance of the SISO System
5-5 The Comparison in Generalization Capability
5-5-1 Generalization Capability of the Sinc Function
5-5-2 Generalization Capability of the SISO System
5-6 Weight Decay Recursive Least Squares
5-7 Robust Recursive Least Squares
Chapter 6 Conclusions
