National Digital Library of Theses and Dissertations in Taiwan

Detailed Record
Author: 陳奕中
Author (English): Yi-Chung Chen
Title (Chinese): 具穩定學習演算法之Hammerstein-Wiener型遞迴類神經網路於未知動態系統鑑別的研究
Title (English): A Study on a Hammerstein-Wiener Recurrent Neural Network with a Stable Learning Algorithm for Unknown Dynamic System Identification
Advisor: 王振興
Advisor (English): Jeen-Shing Wang
Degree: Master's
Institution: National Cheng Kung University (國立成功大學)
Department: Department of Electrical Engineering (電機工程學系碩博士班)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Year of Publication: 2008
Graduation Academic Year: 96 (2007-2008)
Language: English
Pages: 62
Keywords (Chinese): 系統鑑別; 遞迴類神經網路
Keywords (English): System Identification; Recurrent Neural Network
Abstract (translated from the Chinese):
This thesis proposes a novel recurrent neural network based on the Hammerstein-Wiener model, together with a systematic stable learning algorithm, for identifying dynamic systems. The network consists of three subsystems, with a linear dynamic subsystem placed between two nonlinear static subsystems. This novel network offers the following advantages: 1) the three subsystems express the system output as a nonlinear transformation of a linear state-space representation; 2) the mature and well-developed theory of linear systems can be applied to analyze the network's characteristics. In addition, to identify an unknown dynamic system effectively from its input-output data, we developed a systematic identification algorithm comprising parameter initialization and stable parameter learning. The frequency-domain eigensystem realization algorithm (FDERA) is used to determine the system order and to estimate optimal initial parameters. To improve overall identification performance, a recursive parameter learning method based on ordered derivatives is applied to optimize the network. Because the dynamic system itself may become unstable, we derive conditions for network stability and use them to extend the recursive learning method into a recursive stable parameter learning method. Finally, through computer simulations and comparisons with existing models in the literature, we verify that the proposed network and its algorithms have the following properties: 1) the parameter initialization algorithm yields better results than random initialization; 2) the recursive stable parameter learning method keeps the network stable both during and after training; 3) the network and its identification algorithm guarantee convergence during training; 4) the network emulates the dynamic behavior of unknown systems accurately and achieves satisfactory performance.
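The initialization step described above recovers a state-space realization directly from measured data. As a rough illustration of the underlying idea only, here is a minimal sketch of the classical time-domain eigensystem realization algorithm (ERA) that the frequency-domain variant builds on; this is not the thesis's FDERA, and the test system, matrices, and function names are all hypothetical:

```python
import numpy as np

def era(h, n):
    """Realize an n-state (A, B, C) from impulse-response samples.

    h[0] is the direct-feedthrough sample (unused here); h[1:] are the
    Markov parameters h[k] = C A^(k-1) B.
    """
    m = (len(h) - 2) // 2
    # Hankel matrix of Markov parameters and its one-step shift.
    H0 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])
    H1 = np.array([[h[i + j + 2] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    # Keep the n dominant singular values (the assumed system order).
    Un, Vn = U[:, :n], Vt[:n, :]
    Sroot = np.diag(np.sqrt(s[:n]))
    Sinv = np.diag(1.0 / np.sqrt(s[:n]))
    A = Sinv @ Un.T @ H1 @ Vn.T @ Sinv   # realized state matrix
    B = (Sroot @ Vn)[:, 0]               # first column of controllability factor
    C = (Un @ Sroot)[0, :]               # first row of observability factor
    return A, B, C

# Hypothetical stable test system with poles 0.8 and 0.5.
A_true = np.array([[0.8, 0.0], [0.0, 0.5]])
B_true = np.array([1.0, 1.0])
C_true = np.array([1.0, 1.0])
h = [0.0] + [float(C_true @ np.linalg.matrix_power(A_true, k) @ B_true)
             for k in range(10)]

A_hat, B_hat, C_hat = era(h, n=2)
poles = sorted(np.linalg.eigvals(A_hat).real)  # should recover 0.5 and 0.8
```

With exact noise-free data, the realized system matches the true one up to a similarity transformation, so its poles (and Markov parameters) are recovered essentially exactly.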
Abstract (English):
This thesis presents a Hammerstein-Wiener recurrent neural network with a systematic identification algorithm for identifying unknown dynamic nonlinear systems. The proposed recurrent neural network resembles the conventional Hammerstein-Wiener model, which consists of a dynamic linear subsystem embedded between two static nonlinear subsystems. The novelties of our network include: 1) the three subsystems are integrated into a single recurrent neural network whose output is the nonlinear transformation of a linear state-space equation; 2) the well-developed theory of linear systems can be applied directly to the linear subsystem of the trained network to analyze its characteristics. To identify a given unknown system efficiently from input-output measurements, we have derived a systematic identification algorithm that consists of parameter initialization and online stable learning procedures. A frequency-domain eigensystem realization algorithm (FDERA) has been developed to determine the system size and to initialize a best-fit state-space representation. To improve the overall identification performance, we first derived an online parameter learning algorithm based on ordered derivatives. Moreover, to avoid instability of dynamic systems caused by parameter tuning, we have incorporated the necessary constraints into the original learning algorithm to form a stable learning algorithm. Finally, computer simulations and comparisons with existing models have been conducted to demonstrate the effectiveness of the proposed network and its identification algorithm.
These simulations validate the following: 1) the proposed network initialization algorithm provides better initialization than a random initialization approach; 2) with suitable constraints, the proposed stable learning algorithm ensures network stability and convergence during and after training; 3) the proposed network closely emulates the behavior of the unknown dynamical system with satisfactory performance.
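Purely as a structural illustration, the three-subsystem composition described above (static input nonlinearity, linear state-space dynamics, static output nonlinearity) can be sketched as follows. The tanh maps, matrices, and dimensions here are hypothetical stand-ins for the thesis's trainable subsystems, not its actual implementation:

```python
import math

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

class HammersteinWiener:
    """Static nonlinearity -> linear state space -> static nonlinearity."""

    def __init__(self, A, B, C):
        self.A, self.B, self.C = A, B, C   # linear-subsystem matrices
        self.x = [0.0] * len(A)            # internal state

    def step(self, u):
        v = math.tanh(u)                                   # Hammerstein (input) nonlinearity
        Ax = matvec(self.A, self.x)
        self.x = [a + b * v for a, b in zip(Ax, self.B)]   # x(k+1) = A x(k) + B v(k)
        z = sum(c * x for c, x in zip(self.C, self.x))     # linear output z = C x
        return math.tanh(z)                                # Wiener (output) nonlinearity

# Hypothetical stable dynamics: eigenvalues of A are 0.5 and 0.3,
# both inside the unit circle, as the stability constraints require.
A = [[0.5, 0.0], [0.0, 0.3]]
B = [1.0, 1.0]
C = [1.0, -0.5]
net = HammersteinWiener(A, B, C)

# Constant input drives the state toward a fixed point, so the
# output settles rather than diverging.
outputs = [net.step(1.0) for _ in range(50)]
```

Because the linear subsystem is isolated in the middle, checking stability reduces to checking the eigenvalues of A, which is the kind of linear-systems analysis the abstract highlights as an advantage of this structure.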
CHINESE ABSTRACT i
ABSTRACT iii
ACKNOWLEDGEMENT v
LIST OF TABLES viii
LIST OF FIGURES ix
1 Introduction 1-1
1.1 Motivation 1-1
1.2 Literature Survey 1-2
1.3 Purpose of the Study 1-5
1.4 Organization of the Thesis 1-6
2 Hammerstein-Wiener Recurrent Neural Network 2-1
2.1 Structure of Hammerstein-Wiener Recurrent Neural Network 2-1
2.2 Universal Approximation Capability of HWRNN 2-6
3 Identification Algorithm of Hammerstein-Wiener Recurrent Neural Network 3-1
3.1 Hybrid Hammerstein-Wiener Initialization Algorithm 3-2
3.1.1 Active Region Boundary Initialization Algorithm 3-5
3.1.2 Frequency Domain Eigensystem Realization Algorithm 3-6
3.1.3 Least-Squares Algorithm 3-11
3.1.4 Computational Steps of HHWIA 3-11
3.2 Recursive Recurrent Learning Algorithm 3-12
3.3 Modified Recursive Recurrent Learning Algorithm with Stability Constraints 3-17
3.3.1 Stability Constraints for Online Parameter Learning 3-19
3.3.2 Parameter Learning Algorithm with Stability Constraints 3-22
3.4 Convergence Constraints of Hammerstein-Wiener Recurrent Neural Network 3-24
4 Simulation Results 4-1
5 Conclusions and Future Work 5-1
5.1 Conclusions 5-1
5.2 Future Work 5-2