
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 鄭育淵
Author (English): Yu-Yuan Cheng
Title: 基於遞迴式奇異值分解之線上模糊極限學習機
Title (English): Online Fuzzy Extreme Learning Machine Based on Recursive Singular Value Decomposition
Advisor: 歐陽振森
Advisor (English): Chen-Sen Ouyang
Degree: Master's
Institution: 義守大學 (I-Shou University)
Department: 資訊工程學系 (Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2017
Graduation academic year: 105 (ROC calendar, 2016-2017)
Language: Chinese
Number of pages: 48
Keywords (Chinese): 極限學習機、類神經網路、模糊系統、模糊推論系統、遞迴式奇異值分解、線上學習、模糊極限學習機
Keywords (English): extreme learning machine, artificial neural network, fuzzy system, fuzzy inference system, recursive singular value decomposition, online learning, fuzzy extreme learning machine
Statistics:
  • Cited by: 0
  • Views: 288
  • Rating: none
  • Downloads: 0
  • Saved to personal bibliography lists: 0
Abstract (Chinese): This study proposes an online fuzzy extreme learning machine based on recursive singular value decomposition, which improves the original fuzzy extreme learning machine so that it can solve online learning problems in classification or regression modeling. As in the original fuzzy extreme learning machine, the weights associated with the fuzzy membership functions in the hidden layer are set by random assignment. However, the proposed method replaces the Moore-Penrose generalized inverse with a recursive singular value decomposition to compute the optimal output-layer weights for each incoming sample, which makes it suitable for online learning. Experimental results show that, compared with the original fuzzy extreme learning machine, the proposed method supports online learning while achieving the same modeling accuracy. In addition, it offers better modeling accuracy and stability than an existing online sequential learning algorithm.
Abstract (English): In this study, we propose an online fuzzy extreme learning machine based on recursive singular value decomposition that improves the original fuzzy extreme learning machine and makes it applicable to online learning problems in classification and regression modeling. Like the original fuzzy extreme learning machine, our approach randomly assigns the weights of the fuzzy membership functions in the hidden layer. However, the Moore-Penrose pseudoinverse is replaced with a recursive singular value decomposition that computes the optimal output-layer weights as each sample arrives. Compared with the original fuzzy extreme learning machine, our approach supports online learning for classification and regression modeling while producing the same modeling accuracy. Moreover, it achieves better modeling accuracy and stability than the existing online sequential learning algorithm.
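The abstracts describe the method only at a structural level: a hidden layer of randomly parameterized fuzzy membership functions, plus an output layer whose weights are re-estimated as each training sample arrives. The Python sketch below illustrates that structure only. The class name, the Gaussian rule form, and all parameter choices are illustrative assumptions, and the per-sample update is a standard recursive least-squares step (as used by online sequential ELM), standing in for the thesis's recursive singular value decomposition update, which is not detailed in this record.

import numpy as np

class OnlineFuzzyELM:
    """Illustrative online fuzzy-ELM sketch; not the thesis's exact algorithm."""

    def __init__(self, n_inputs, n_rules, n_outputs, seed=None):
        rng = np.random.default_rng(seed)
        # Randomly assigned fuzzy membership parameters, fixed after initialization.
        self.centers = rng.uniform(-1.0, 1.0, size=(n_rules, n_inputs))
        self.widths = rng.uniform(0.5, 1.5, size=(n_rules, n_inputs))
        self.beta = np.zeros((n_rules, n_outputs))  # output-layer weights
        self.P = np.eye(n_rules) * 1e3              # RLS inverse-correlation state

    def _firing(self, x):
        # Rule firing strengths: product of Gaussian memberships over the inputs.
        m = np.exp(-((x - self.centers) ** 2) / (2.0 * self.widths ** 2))
        return m.prod(axis=1)                       # shape (n_rules,)

    def partial_fit(self, x, t):
        # One recursive least-squares update for a single (input, target) pair;
        # a stand-in for the recursive-SVD update described in the abstract.
        h = self._firing(np.asarray(x, dtype=float))
        t = np.atleast_1d(np.asarray(t, dtype=float))
        Ph = self.P @ h
        k = Ph / (1.0 + h @ Ph)                      # gain vector
        self.P -= np.outer(k, Ph)
        self.beta += np.outer(k, t - h @ self.beta)  # correct by prediction error
        return self

    def predict(self, x):
        return self._firing(np.asarray(x, dtype=float)) @ self.beta

# Usage: learn y = sin(pi * x) one sample at a time on [-1, 1].
model = OnlineFuzzyELM(n_inputs=1, n_rules=20, n_outputs=1, seed=0)
for xi in np.linspace(-1.0, 1.0, 200):
    model.partial_fit([xi], [np.sin(np.pi * xi)])
print(model.predict([0.5]))  # should be close to sin(0.5 * pi) = 1

In the thesis, the RLS state matrix P would instead be a recursively maintained singular value decomposition used to obtain the optimal (minimum-norm least-squares) output-layer weights after each incoming sample.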
Chapter 1  Introduction 1
1. Research Background and Motivation 1
2. Research Objectives 4
Chapter 2  Literature Review 5
1. Extreme Learning Machine 5
2. Adjusting the Number of Hidden Nodes 6
3. Improving the Hidden-Layer Nodes 7
4. Online Learning Algorithms 8
5. Improving Interpretability and Generalization 9
Chapter 3  Research Method 13
Chapter 4  Experimental Results 17
1. Dataset Descriptions 17
(1) Regression Datasets 17
(2) Classification Datasets 21
(3) Experimental Procedure 24
2. Experimental Results 25
(1) Results on the Regression Datasets 26
(2) Results on the Classification Datasets 31
(3) Summary of the Experiments 36
Conclusions and Future Prospects 37
References 38