National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 洪宏淵
Title (Chinese): 頻域中之混合式函數型神經網路
Title (English): Mixed-Mode Functional Artificial Neural Networks in the Frequency Domain
Advisor: 蘇順豐
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Department of Electrical Engineering
Discipline: Engineering
Field of Study: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2000
Graduation Academic Year: 89 (ROC calendar)
Language: English
Keywords (Chinese): 函數型神經網路 (functional neural networks)
Cited by: 0
Views: 265
Downloads: 0
Abstract (translated from the Chinese):

In traditional artificial neural networks, a system is usually modeled in the time domain using point-to-point input-output pairs, and the network weights are learned adaptively. Learning algorithms that shrink the error through partial derivatives (gradient descent) easily fall into local minima. Recently, an architecture called the functional artificial neural network (FANN) was proposed. A FANN trains its weights mathematically from function-to-function input-output pairs; because it relies on parallel mathematical computation, it can quickly estimate approximate values of the required weights. Its excellent function-approximation capability lets us apply it to the frequency response of a system. The impulse-like characteristics of the frequency domain have long made learning difficult for conventional point-to-point networks, and the complex-valued nature of the frequency domain (magnitude and phase) demands rather precise scaling before the system output can be correctly reconstructed in the time domain. In this thesis, we take the FANN as the main structure and exploit its function-approximation capability to approximate the frequency response of the system being modeled. After system analysis, the network approximates the plant, and a complex-valued learning mechanism is added to adjust the complex weights appropriately while keeping the phase correct. To reach the desired output, four complex-valued learning mechanisms are examined; the conventional one cannot correctly reconstruct the modeled system's time-domain output. We therefore design another learning method according to the reconstruction requirements and the properties of complex-valued networks, and combine it with the original FANN to form the mixed-mode functional neural network in the frequency domain. Simulation results clearly show that the FANN performs better in the frequency domain than in the time domain, and that the network with the added learning mechanism, i.e., the mixed-mode functional network in the frequency domain, is fast, highly accurate, and robust.
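Both abstracts turn on keeping the complex representation (amplitude and phase) intact during learning. As background, the standard DFT identities involved are recalled below; this is textbook material, not notation taken from the thesis:

```latex
% Polar form of a DFT coefficient, and the inverse DFT used for
% time-domain reconstruction (standard definitions).
\[
  X[k] = \lvert X[k]\rvert\, e^{\,j\angle X[k]},
  \qquad
  x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X[k]\, e^{\,j 2\pi k n/N}.
\]
% A phase error of \Delta\phi in bin k shifts that frequency component
% in time, so a learning rule that distorts phase distorts the
% reconstructed time-domain signal:
\[
  \angle X[k] \to \angle X[k] - \Delta\phi
  \;\Longrightarrow\;
  \text{component delayed by } \frac{\Delta\phi\, N}{2\pi k} \text{ samples}.
\]
```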

Abstract:

In traditional neural networks, systems are modeled in the time domain using point-to-point input-output (I/O) training pairs. Such networks update their weights adaptively, and most learning algorithms minimize an error function by gradient descent, so they are inevitably sometimes trapped in local minima. Recently, the so-called functional artificial neural network (FANN) was proposed. A FANN trains its weights mathematically using functional I/O pairs; because its computation is parallel and analytic, it can quickly compute approximate values of the desired weights. The idea of this research is to model a system through a FANN by modeling the system's frequency response. The frequency response is composed of finite impulse signals, which are difficult for traditional neural networks to learn. In addition, the complex representation (amplitude and phase) in the frequency domain is an important factor in reconstructing signals in the time domain. In this thesis, the traditional FANN is adopted as the basic structure for modeling signals in the frequency domain, and a procedure for learning complex-valued weights is introduced; it updates the weights while keeping the phase correct so as to avoid distortion. Four ways of learning complex-valued weights in a FANN are proposed and studied in this research. Two of them are straightforward approaches to handling complex numbers, and our simulations show that they cannot reconstruct time-domain output signals accurately enough. Another approach updates the weights in a more refined way; in simulation, the frequency-domain FANN using this algorithm performs better than the other algorithms and also better than the FANN in the time domain.
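As a rough illustration of the overall pipeline the abstract describes (frequency-domain I/O pairs, complex-valued weights, inverse-transform reconstruction), the minimal sketch below fits one complex gain per frequency bin in closed form. It is not the thesis's FANN or any of its four algorithms; the FIR plant, the least-squares-style gain estimate, and every name in the code are assumptions for illustration only:

```python
import numpy as np

# Minimal sketch (not the thesis's algorithm): model a system in the
# frequency domain with one complex weight per bin, then reconstruct
# the time-domain output via the inverse FFT.
rng = np.random.default_rng(0)

# Hypothetical "unknown" system to be modeled: a short FIR filter.
h_true = np.array([0.5, 0.3, -0.2, 0.1])

N = 256
x_train = rng.standard_normal(N)              # training input signal
y_train = np.convolve(x_train, h_true)[:N]    # training output signal

# One functional I/O pair in the frequency domain; the complex spectra
# carry both amplitude and phase.
X = np.fft.rfft(x_train)
Y = np.fft.rfft(y_train)

# One complex weight per frequency bin, fit in closed form; a small
# epsilon guards bins where the input spectrum is nearly zero.
eps = 1e-8
W = Y * np.conj(X) / (np.abs(X) ** 2 + eps)

# Apply the learned complex weights to an unseen input, then reconstruct
# the time-domain output with the inverse FFT.
x_test = rng.standard_normal(N)
y_ref = np.convolve(x_test, h_true)[:N]
y_hat = np.fft.irfft(W * np.fft.rfft(x_test), n=N)

print("time-domain RMS error:", np.sqrt(np.mean((y_hat - y_ref) ** 2)))
```

The essential point is that the weights W are complex, so each bin's amplitude and phase are both learned, and the time-domain output is recovered through the inverse FFT, which is exactly where a phase-distorting update rule would show up as reconstruction error.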

Abstract (Chinese)
Abstract
Contents
List of Tables
List of Figures
1 Introduction
1.1 Introduction
1.2 Research Objectives and Contributions
1.3 Organization of the Thesis
2 Functional Neural-Network Architecture
2.1 Radial Basis Function Neural Networks
2.2 Wavelet Neural Networks
2.3 An Overview of the FANN
2.3.1 The Volterra Series
2.3.2 Functional Artificial Neural Networks
2.3.3 Single-Input Single-Output Nonlinear Systems
2.3.4 Multi-Input Multi-Output Nonlinear Systems
2.3.5 Discrete-Time Functional Artificial Neural Networks
2.4 Applications of FANN
2.4.1 Dynamical Functional Artificial Neural Networks
2.4.2 VLSI Implementation
2.5 Chapter Summary
3 Functional Neural Network Architecture in the Frequency Domain
3.1 The Structure
3.2 The Algorithms
3.3 Chapter Summary
4 Experiments
4.1 An Example of a FANN Application
4.2 Results
4.3 Discussion
4.4 Chapter Summary
5 Conclusions
5.1 Conclusions
5.2 Future Research
References
List of Tables
4.1 The parameters in the training and testing patterns
4.2 The parameters in the experiment
4.3 Error of the training phase computed in the time domain
4.4 Error of the testing phase computed in the time domain
4.5 Error of the testing phase computed in the frequency domain
List of Figures
2.1 A general structure of neural networks
2.2 The structure of the RBFN
2.3 The structure of the WNN
2.4 The discrete single-input single-output FANN structure
2.5 The multi-input multi-output FANN structure
2.6 The single-input single-output D-FANN structure
3.1 The modified structure of the FANN
3.2 (a) Output before learning; (b) desired output; (c) the algorithm must suppress the unwanted signal and amplify the wanted one
3.3 The modified structure of the FANN
4.1 The example and the parameters we define
4.2 Floor plan for the eight cases of road segments
4.3 (a) Training road segments; (b) testing road segments
4.4 (a) Eight training input vectors; (b) eight training output vectors; (c) eight testing input vectors; (d) eight testing output vectors
4.5 (a) The I/O pairs in FANN part a; (b) the I/O pairs in FANN part b
4.6 The vectors of the output angle: (a) Algorithm 0; (b) Algorithm 1; (c) Algorithm 2; (d) Algorithm 3; (e) Algorithm 4
4.7 The learning curve using Algorithm 3
4.8 Kaiser window: (a) amplitude; (b) phase
4.9 The input signals: (a) noise in the range -0.2 to 0.2; (b) outlier
4.10 Eight I/O pairs in the frequency domain
4.11 Eight I/O pairs in the frequency domain
4.12 The amplitude of the experimental output and the desired output
4.13 The phase of the experimental output and the desired output

