Author: Zih-Jin Wang (王子晉)
Title: A Study on Prediction Error for Fault Tolerant Neural Networks (容錯類神經網路之預測誤差研究)
Advisor: John Sum (沈培輝)
Degree: Master's
Institution: National Chung Hsing University
Department: Graduate Institute of Electronic Commerce
Discipline: Business and Management
Field: General Business
Thesis Type: Academic thesis
Publication Year: 2009
Graduation Academic Year: 97 (2008–2009)
Language: English
Pages: 41
Keywords: regularizer, expected MPE, radial basis function network, multiplicative weight noise, multiple nodes fault
Abstract: This thesis investigates the expected mean prediction error (MPE) of neural networks trained with regularizer-based approaches. Single and combined regularizers are used to train a radial basis function (RBF) network for tolerance to multiplicative weight noise and multiple nodes fault. For each training approach, the optimal weight vector and the corresponding expected MPE under the assumed fault condition are derived. The results show that, when the amount of training data is sufficiently large, both the optimal weight vector and the expected MPE of the RBF network take a general form. Finally, a generalized expected MPE is formulated to simplify the calculation for different regularizers under multiplicative weight noise and multiple nodes fault.
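For illustration, below is a minimal numerical sketch of the setting the abstract describes: an RBF network trained with a weight-decay regularizer, with the mean prediction error estimated empirically under the two fault models. This is not the thesis's derivation (the thesis obtains the expected MPE analytically); the data, basis widths, fault rates, and all names below are hypothetical.

```python
import numpy as np

# Minimal sketch of the setup described in the abstract. Every setting here
# (data, widths, noise levels, names) is illustrative, not from the thesis.

rng = np.random.default_rng(0)

def rbf_features(x, centers, width):
    """Gaussian RBF features: phi_i(x) = exp(-(x - c_i)^2 / width)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / width)

# Synthetic 1-D regression data.
N, M = 200, 20                               # training samples, RBF nodes
x = rng.uniform(-4.0, 4.0, N)
y = np.sinc(x) + 0.05 * rng.standard_normal(N)
centers = np.linspace(-4.0, 4.0, M)
Phi = rbf_features(x, centers, width=0.5)    # N x M design matrix

# Weight-decay (ridge) training: w = argmin (1/N)||y - Phi w||^2 + lam w'w,
# whose closed form is w = (Phi'Phi + lam*N*I)^{-1} Phi'y.
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * N * np.eye(M), Phi.T @ y)

def empirical_mpe(corrupt_weights, trials=2000):
    """Monte Carlo estimate of the mean prediction error under a fault model."""
    errors = [np.mean((Phi @ corrupt_weights(w) - y) ** 2) for _ in range(trials)]
    return float(np.mean(errors))

# Fault model 1: multiplicative weight noise, w_i -> w_i * (1 + b_i),
# where b_i is zero-mean noise with standard deviation sigma_b.
sigma_b = 0.3
weight_noise = lambda w: w * (1.0 + sigma_b * rng.standard_normal(M))

# Fault model 2: multiple nodes fault, each node's output stuck at zero
# independently with probability p (its weight is effectively zeroed).
p = 0.1
nodes_fault = lambda w: w * (rng.random(M) > p)

print("fault-free MSE           :", float(np.mean((Phi @ w - y) ** 2)))
print("MPE, weight noise        :", empirical_mpe(weight_noise))
print("MPE, multiple nodes fault:", empirical_mpe(nodes_fault))
```

In the thesis, the expectation over fault realizations is taken in closed form, so the Monte Carlo loop above would be replaced by an analytic expected-MPE expression derived for each choice of regularizer.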
TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
1.1 Research Motivation
1.2 Research Objective
1.3 Research Structure
CHAPTER 2 BACKGROUND
2.1 Network Model
2.2 Radial Basis Function Model
2.3 Fault Model
2.3.1 Multiplicative Weight Noise
2.3.2 Multiple Nodes Fault
CHAPTER 3 SINGLE REGULARIZER APPROACH
3.1 Single Regularizer Training
3.1.1 Adding Explicit Regularizer
3.1.2 Adding Weight Decay Regularizer
3.1.3 Adding Multiple Nodes Fault Regularizer
3.2 Mean Prediction Error without Fault
3.3 Mean Prediction Error with Multiplicative Weight Noise
3.3.1 Adding Explicit Regularizer
3.3.2 Adding Weight Decay Regularizer
3.4 Mean Prediction Error with Multiple Nodes Fault
3.4.1 Adding Multiple Nodes Fault Regularizer
3.4.2 Adding Weight Decay Regularizer
CHAPTER 4 COMBINED REGULARIZER APPROACH
4.1 Combined Regularizer Training
4.1.1 Combining Weight Decay with Explicit Regularizer
4.1.2 Combining Weight Decay with Multiple Nodes Fault Regularizer
4.2 Mean Prediction Error with Multiplicative Weight Noise
4.2.1 Combining Weight Decay with Explicit Regularizer
4.2.2 Combining Weight Decay with Multiple Nodes Fault Regularizer
4.3 Mean Prediction Error with Multiple Nodes Fault
4.3.1 Combining Weight Decay with Explicit Regularizer
4.3.2 Combining Weight Decay with Multiple Nodes Fault Regularizer
CHAPTER 5 DISCUSSION AND CONCLUSION
5.1 Discussion
5.2 Conclusion
REFERENCES

LIST OF TABLES

Table 5.1 Regularizers and the Fault-Related Terms

LIST OF FIGURES

Figure 2.1 Radial Basis Function Architecture
Figure 2.3 Multiplicative Weight Noise Corrupted Radial Basis Function Network
Figure 2.4 Multiple Nodes Fault Corrupted Radial Basis Function Network