# 臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)



### Detailed Record


In traditional artificial neural networks, a system is usually constructed in the time domain with point-to-point input-output pairs. The constructed network learns its connection weights adaptively, and learning algorithms that minimize the error through partial derivatives easily fall into local minima. Recently, an architecture called the functional artificial neural network (FANN) has been proposed. A FANN trains the network's weights mathematically from function-to-function input-output pairs. Because the FANN relies on parallel mathematical computation, it can quickly estimate approximations of the required weights. Its excellent function-approximation capability lets us apply it to the frequency response of a system. The impulse-like characteristics of the frequency domain have long been difficult for traditional point-to-point networks to learn, and the complex-valued nature of the frequency domain (amplitude and phase) requires fairly precise proportions for the system output to be restored correctly to the time domain. In this thesis we adopt the FANN as the main network body and use its strong function-approximation ability to approach the frequency response of the system being modeled. After system analysis the network approximates the plant; a complex-valued learning mechanism then adjusts the complex weights appropriately while keeping the phase correct. To reach the desired output, four complex-valued learning mechanisms are examined, among which the conventional method cannot correctly reconstruct the modeled system's time-domain output. We therefore design another learning method based on the reconstruction requirements of the system and the properties of complex-valued networks, and combine it with the original FANN body to form a hybrid functional neural network in the frequency domain. The simulation results clearly verify that the FANN performs better in the frequency domain than in the time domain, and that the network with the added learning mechanism, the hybrid functional network in the frequency domain, is fast, highly accurate, and robust.
In traditional neural networks, systems are modeled in the time domain using point-to-point input-output (I/O) training pairs. Traditional neural networks use adaptive approaches to update the network weights. Most learning algorithms minimize an error function by gradient descent and can therefore become trapped in local minima. Recently, the so-called functional artificial neural network (FANN) was proposed. A FANN trains its weights mathematically from functional I/O pairs. Because of its parallel, mathematical computation, a FANN can quickly compute approximations of the desired weights. The idea of this research is to model a system through a FANN by modeling the system's frequency response. The frequency response is composed of finite impulse signals, which are difficult for traditional neural networks to learn. In addition, complex representations (amplitude and phase) in the frequency domain are important factors in reconstructing signals in the time domain. In this thesis, the traditional FANN is adopted as the basic structure for modeling signals in the frequency domain, and a procedure for learning complex-valued neural networks is introduced. The procedure updates the weights while keeping the phase correct to avoid distortion. Four ways of learning complex-valued weights in a FANN are proposed and studied. Two of them are straightforward approaches to handling complex numbers, and our simulations show that they cannot reconstruct the time-domain output signals accurately enough. Another approach updates the weights in a more elegant way. The simulations show that the frequency-domain functional artificial neural network using this algorithm outperforms the other algorithms as well as the FANN in the time domain.
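The core idea described in the abstract, namely representing a system by complex-valued frequency-domain weights whose amplitude and phase must both be preserved for correct time-domain reconstruction, can be illustrated with a minimal sketch. This is not the thesis's FANN or its four learning algorithms; the plant `h_true`, the FFT length, and the per-bin least-squares weight estimate are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown FIR system to be modeled (the "plant").
h_true = np.array([0.5, 1.0, -0.3, 0.1])
N = 64  # FFT length (assumed)

# The plant's frequency response: complex values carry both
# amplitude and phase, which must be preserved for correct
# time-domain reconstruction.
H_true = np.fft.fft(h_true, N)

# Training data: eight random inputs and the plant's responses,
# observed as frequency-domain I/O pairs.
X = np.array([np.fft.fft(rng.standard_normal(N)) for _ in range(8)])
Y = X * H_true

# Estimate one complex weight per frequency bin by least squares
# over the training pairs (a stand-in for the network's learning).
H_est = np.sum(np.conj(X) * Y, axis=0) / np.sum(np.abs(X) ** 2, axis=0)

# Reconstruct the impulse response in the time domain; this only
# matches the plant if both amplitude and phase were kept intact.
h_est = np.real(np.fft.ifft(H_est))[: len(h_true)]
print(np.round(h_est, 6))  # close to h_true
```

The point of the sketch is the last step: if a learning rule distorts the phase of `H_est`, the inverse FFT no longer recovers `h_true`, which is why the thesis's complex-valued learning mechanism is designed to keep the phase correct.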
### Contents

- Abstract (Chinese)
- Abstract (English)
- Contents
- List of Tables
- List of Figures
- 1 Introduction
  - 1.1 Introduction
  - 1.2 Research Objectives and Contributions
  - 1.3 Organization of the Thesis
- 2 Functional Neural-Network Architecture
  - 2.1 Radial Basis Function Neural Networks
  - 2.2 Wavelet Neural Networks
  - 2.3 The Overview of the FANN
    - 2.3.1 The Volterra Series
    - 2.3.2 Functional Artificial Neural Networks
    - 2.3.3 Single-input Single-output Nonlinear System
    - 2.3.4 Multi-input Multi-output Nonlinear System
    - 2.3.5 Discrete-time Functional Artificial Neural Networks
  - 2.4 Application of FANN
    - 2.4.1 Dynamical Functional Artificial Neural Network
    - 2.4.2 VLSI Implementation
  - 2.5 Chapter Summary
- 3 Functional Neural Network Architecture in the Frequency Domain
  - 3.1 The Structure
  - 3.2 The Algorithms
  - 3.3 Chapter Summary
- 4 Experiment
  - 4.1 An Example of FANN Application
  - 4.2 Results
  - 4.3 Discussion
  - 4.4 Chapter Summary
- 5 Conclusions
  - 5.1 Conclusions
  - 5.2 Future Research
- References

### List of Tables

- 4.1 The parameters in training patterns and testing patterns
- 4.2 The parameters in the experiment
- 4.3 Error of the training phase computed in the time domain
- 4.4 Error of the testing phase computed in the time domain
- 4.5 Error of the testing phase computed in the frequency domain

### List of Figures

- 2.1 A general structure of the neural networks
- 2.2 The structure of the RBFN
- 2.3 The structure of the WNN
- 2.4 The discrete single-input single-output FANN structure
- 2.5 Multi-input multi-output FANN structure
- 2.6 The single-input single-output D-FANN structure
- 3.1 The modified structure of FANN
- 3.2 (a) Output before learning; (b) Desired output; (c) The algorithm must suppress the unwanted signal and grow the wanted signal
- 3.3 The modified structure of FANN
- 4.1 The example and the parameters we define
- 4.2 Floor plan for eight cases of road segments
- 4.3 (a) Training road segments; (b) Testing road segments
- 4.4 (a) Eight training input vectors; (b) Eight training output vectors; (c) Eight testing input vectors; (d) Eight testing output vectors
- 4.5 (a) The I/O pairs in the FANN a-part; (b) The I/O pairs in the FANN b-part
- 4.6 The vectors of output angle: (a) Algorithm 0; (b) Algorithm 1; (c) Algorithm 2; (d) Algorithm 3; (e) Algorithm 4
- 4.7 The learning curve using Algorithm 3
- 4.8 Kaiser window: (a) amplitude; (b) phase
- 4.9 The input signals: (a) noise = -0.2 to 0.2; (b) outlier
- 4.10 Eight I/O pairs in the frequency domain
- 4.11 Eight I/O pairs in the frequency domain
- 4.12 The amplitude of the experiment output and the desired output
- 4.13 The phase of the experiment output and the desired output