National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: WANG, YUAN-BIN (王原彬)
Title: Investigations and Implementations for Recurrent Neural Networks and Feedforward Multiple Layer Perceptron (遞迴式與前饋式多層感知機之研究與實作)
Advisor: KO, HSIEN-JU (柯賢儒)
Committee: KO, HSIEN-JU; HSIAO, CHIN-SUNG; CHING, TAK-SHING
Oral defense date: 2019-07-29
Degree: Master's
Institution: Asia University
Department: Department of Photonics and Communication Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Publication year: 2019
Graduation academic year: 107 (2018-2019)
Language: Chinese
Pages: 60
Keywords (Chinese): 多層前饋式類神經網路; 多層遞迴式類神經網路
Keywords (English): multiple layer feedforward neural networks; multiple layer recurrent neural networks
Statistics:
  • Cited by: 0
  • Views: 147
  • Downloads: 23
  • Saved to personal bibliography lists: 0
Abstract (translated from Chinese): This thesis compares the learning efficiency of single-layer and multilayer feedforward neural networks (FNN) and recurrent neural networks (RNN). In the RNN architecture, an infinite impulse response (IIR) filter plays the role of signal recursion. Piecewise-linear activation functions are adopted, and in the RNN implementation the pole and L_2-norm sensitivities are minimized simultaneously. The weights and biases of each neuron are updated with the back-propagation learning algorithm. Finally, the networks' learning performance on a transcendental function and a composite function is presented. The results show that for learning low-complexity functions, a single-layer RNN performs slightly better than an FNN, while for higher-complexity functions the number of learning iterations required by a single-layer RNN drops markedly, indicating that with few neurons an RNN can effectively reduce the number of learning iterations. When the number of layers is increased, however, the FNN requires fewer learning iterations than the RNN.
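The abstract describes an RNN in which an IIR filter in state-space form carries the signal recursion, followed by a piecewise-linear activation. A minimal sketch of one such neuron is given below; the filter matrices and the clipping limit are hypothetical placeholders chosen for illustration, not the thesis's actual parameters.

```python
import numpy as np

def pwl_activation(v, limit=1.0):
    """Piecewise-linear activation: identity on [-limit, limit], clipped outside."""
    return np.clip(v, -limit, limit)

class IIRNeuron:
    """One recurrent neuron whose feedback path is a state-space IIR filter:

        x[k+1] = A x[k] + b u[k]        (internal state recursion)
        y[k]   = phi(c x[k] + d u[k])   (piecewise-linear readout)

    A, b, c, d are ordinary state-space matrices; for stability the
    eigenvalues (poles) of A must lie inside the unit circle.
    """
    def __init__(self, A, b, c, d):
        self.A, self.b, self.c, self.d = A, b, c, d
        self.x = np.zeros(A.shape[0])   # internal filter state

    def step(self, u):
        # readout first (uses the current state), then advance the recursion
        y = pwl_activation(self.c @ self.x + self.d * u)
        self.x = self.A @ self.x + self.b * u
        return y
```

With a stable A the neuron's impulse response decays, which is the property the pole/L_2-sensitivity optimization discussed in the abstract is meant to preserve under finite-word-length implementation.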
Abstract (English): In this thesis, the learning efficiency of single-layer and multilayer feedforward neural networks (FNN) and recurrent neural networks (RNN) was investigated. In the RNN structure, piecewise-linear activation functions were used, and an infinite impulse response digital filter played the role of signal recursion. In the RNN implementation, pole-L_2 sensitivity minimization was performed, and the weight of every neuron was adjusted with the back-propagation learning algorithm. A simple sinusoidal function and a relatively complicated function were used for comparison. The simulation results show that the single-layer FNN and RNN have similar learning efficiency on the simple sinusoidal function, whereas the RNN is more efficient on complicated functions.
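The FNN side of the comparison can be sketched as a single hidden layer with a piecewise-linear activation, trained by back-propagation on a sinusoid. This is not the thesis's actual code (that is in Appendices A-D); the layer size, learning rate, sample count, and initialization below are assumptions chosen for a runnable illustration.

```python
import numpy as np

def pwl(v, limit=1.0):
    """Piecewise-linear activation (identity clipped to [-limit, limit])."""
    return np.clip(v, -limit, limit)

def pwl_grad(v, limit=1.0):
    """Derivative of pwl: 1 in the linear region, 0 where clipped."""
    return ((v > -limit) & (v < limit)).astype(float)

def train(epochs=500, lr=0.05, n_hidden=12, seed=0):
    """Fit y = sin(x) on [0, 2*pi] with one hidden layer and back-propagation."""
    rng = np.random.default_rng(seed)
    X = np.linspace(0.0, 2.0 * np.pi, 64)[None, :]   # inputs, shape (1, 64)
    T = np.sin(X)                                    # targets
    W1 = rng.normal(0.0, 0.5, (n_hidden, 1)); b1 = np.zeros((n_hidden, 1))
    W2 = rng.normal(0.0, 0.5, (1, n_hidden)); b2 = np.zeros((1, 1))
    losses = []
    for _ in range(epochs):
        # forward pass
        Z1 = W1 @ X + b1
        H = pwl(Z1)
        Y = W2 @ H + b2                  # linear output layer
        E = Y - T
        losses.append(0.5 * float(np.mean(E ** 2)))
        # backward pass: gradients of the mean squared error
        dY = E / X.shape[1]
        dW2 = dY @ H.T;  db2 = dY.sum(axis=1, keepdims=True)
        dZ1 = (W2.T @ dY) * pwl_grad(Z1)
        dW1 = dZ1 @ X.T; db1 = dZ1.sum(axis=1, keepdims=True)
        # gradient-descent update of weights and biases
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return losses
```

Tracking the loss per epoch, as `train` does, is exactly the "number of learning iterations to reach a convergence criterion" measure the thesis uses to compare the FNN and RNN variants.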
Contents
List of Figures
Abstract (Chinese)
Abstract (English)
Chapter 1  Introduction
  1.1 Research Background
  1.2 Research Motivation
  1.3 Research Objectives
Chapter 2  Problem Description
  2.1 State Space
  2.2 Recurrent Neural Networks
Chapter 3  Methodology
  3.1 Implementation of the FNN
  3.2 Implementation of the RNN
Chapter 4  Numerical Simulations and Analysis
Chapter 5  Conclusions
Appendix A (program code of this thesis)
Appendix B (program code of this thesis)
Appendix C (program code of this thesis)
Appendix D (program code of this thesis)
References
List of Figures
Figure 1-1: Perceptron system architecture
Figure 1-2: Plots drawn with the activation functions
Figure 1-3: Feedforward neural network system architecture
Figure 2-1: State-space form of the i-th neuron's IIR filter
Figure 3-1: FNN architecture
Figure 3-2: Plot of Eq. (3.1)
Figure 3-3: RNN neural architecture
Figure 4-1: Two-layer RNN, convergence criterion 0.03
Figure 4-2: Two-layer RNN, convergence criterion 0.003
Figure 4-4: Convergence plot for the composite function
Figure 4-5: Training result of a single-layer, 12-neuron RNN on the composite function
List of Tables
Table 1: Learning record of the … function, single-layer FNN
Table 2: Learning record of the … function, two-layer FNN
Table 3: Learning record of the … function, single-layer RNN
Table 4: Learning record of the … function, two-layer RNN
Table 5: Learning record of …, single-layer FNN with 12 neurons
Table 6: Learning record of …, single-layer RNN with 12 neurons
Table 7: Number of learning iterations for the composite function





