
### 詳目顯示:::

:
 Twitter

• 被引用:0
• 點閱:102
• 評分:
• 下載:0
• 書目收藏:0
ABSTRACT
This thesis investigates the expected mean prediction error (MPE) of neural networks trained with regularizer-based approaches. Both combined regularizers and single regularizers are used to train radial basis function (RBF) networks for fault tolerance against multiplicative weight noise and multiple-node faults. For each training approach, the optimum weight vector and the corresponding expected MPE under the assumed fault conditions are derived. The results show that, when the number of training samples is sufficiently large, both the optimum weight vector and the expected MPE of the RBF network admit general forms. Finally, a generalized expected MPE is formulated to simplify the calculations required when employing different regularizers under multiplicative weight noise and multiple-node faults.
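The setting described above can be illustrated numerically. The sketch below is a minimal, hypothetical example (not the thesis's derivation): an RBF network is trained with a weight-decay regularizer, and the expected MPE under multiplicative weight noise and under multiple-node faults is then estimated by Monte Carlo simulation. The data, centers, width, regularization strength, noise level, and fault probability are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) + observation noise.
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

# RBF design matrix with M Gaussian basis functions (illustrative centers/width).
centers = np.linspace(-3, 3, 15)
width = 0.5
H = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))  # shape (N, M)

# Weight-decay (ridge) training: w = (H^T H + lam * I)^{-1} H^T y.
lam = 1e-2
M = centers.size
w = np.linalg.solve(H.T @ H + lam * np.eye(M), H.T @ y)

def mpe(weights):
    """Mean prediction error (mean squared error over the data) for a weight vector."""
    return np.mean((H @ weights - y) ** 2)

# Expected MPE under multiplicative weight noise:
# each weight becomes w_i * (1 + b_i), with b_i ~ N(0, sigma_b^2).
sigma_b = 0.2
noisy = [mpe(w * (1 + sigma_b * rng.standard_normal(M))) for _ in range(2000)]

# Expected MPE under multiple-node fault:
# each hidden node output is stuck at zero independently with probability p.
p = 0.1
faulty = [mpe(w * (rng.random(M) > p)) for _ in range(2000)]

print(f"fault-free MPE:              {mpe(w):.4f}")
print(f"expected MPE, weight noise:  {np.mean(noisy):.4f}")
print(f"expected MPE, node fault:    {np.mean(faulty):.4f}")
```

Both fault models degrade the fault-free MPE on average; a fault-tolerant regularizer, in the spirit of the approaches studied here, would trade some fault-free accuracy for a smaller expected MPE under such perturbations.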
TABLE OF CONTENTS
CHAPTER 1 INTRODUCTION
  1.1 Research Motivation
  1.2 Research Objective
  1.3 Research Structure
CHAPTER 2 BACKGROUND
  2.1 Network Model
  2.2 Radial Basis Function Model
  2.3 Fault Model
    2.3.1 Multiplicative Weight Noise
    2.3.2 Multiple Nodes Fault
CHAPTER 3 SINGLE REGULARIZER APPROACH
  3.1 Single Regularizer Training
    3.1.1 Adding Explicit Regularizer
    3.1.2 Adding Weight Decay Regularizer
    3.1.3 Adding Multiple Nodes Fault Regularizer
  3.2 Mean Prediction Error without Fault
  3.3 Mean Prediction Error with Multiplicative Weight Noise
    3.3.1 Adding Explicit Regularizer
    3.3.2 Adding Weight Decay Regularizer
  3.4 Mean Prediction Error with Multiple Nodes Fault
    3.4.1 Adding Multiple Nodes Fault Regularizer
    3.4.2 Adding Weight Decay Regularizer
CHAPTER 4 COMBINED REGULARIZER APPROACH
  4.1 Combined Regularizers Training
    4.1.1 Combine Weight Decay with Explicit Regularizer
    4.1.2 Combine Weight Decay with Multiple Nodes Fault Regularizer
  4.2 Mean Prediction Error with Multiplicative Weight Noise
    4.2.1 Combine Weight Decay with Explicit Regularizer
    4.2.2 Combine Weight Decay with Multiple Nodes Fault Regularizer
  4.3 Mean Prediction Error with Multiple Nodes Fault
    4.3.1 Combine Weight Decay with Explicit Regularizer
    4.3.2 Combine Weight Decay with Multiple Nodes Fault Regularizer
CHAPTER 5 DISCUSSION AND CONCLUSION
  5.1 Discussion
  5.2 Conclusion
REFERENCES

LIST OF TABLES
Table 5.1 Regularizers and the Fault-Related Terms

LIST OF FIGURES
Figure 2.1 Radial Basis Function Architecture
Figure 2.3 Multiplicative Weight Noise Corrupted Radial Basis Function Network
Figure 2.4 Multiple Nodes Fault Corrupted Radial Basis Function Network