Graduate Student: 廖時慧
Graduate Student (English): Shih-hui Liao
Thesis Title: 加法幅基函數網路之研究
Thesis Title (English): Study on Additive Generalized Radial Basis Function Networks
Advisors: 余祥華, 謝哲光
Advisors (English): Shiang-Hwua Yu, Jer-Guang Hsieh
Degree: Master's
Institution: National Sun Yat-sen University
Department: Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2009
Graduation Academic Year: 97 (ROC calendar)
Language: English
Number of Pages: 69
Chinese Keywords: 加法模型, 加法幅基函數網路, 幅基函數網路
English Keywords: Generalized Radial Basis Function Network, Additive Model, Additive Generalized Radial Basis Function Network
Usage statistics:
  • Cited by: 0
  • Views: 123
  • Downloads: 7
  • Bookmarked: 0
Abstract: In this thesis, we propose a new class of learning models, namely the additive generalized radial basis function networks (AGRBFNs), for general nonlinear regression problems. This class of learning machines combines the generalized radial basis function networks (GRBFNs) commonly used in general machine learning problems with the additive models (AMs) frequently encountered in semiparametric regression problems. In statistical regression theory, the AM is a good compromise between the linear model and the nonparametric model. To obtain a more general network structure capable of handling more general data sets, the AMs are embedded in the output layer of the GRBFNs to form the AGRBFNs. Simple weight-updating rules based on incremental gradient descent are derived. Several illustrative examples are provided to compare the performance of the classical GRBFNs and the proposed AGRBFNs. Simulation results show that, upon proper selection of the hidden nodes and the bandwidth of the kernel smoother used in the additive output layer, AGRBFNs can give better fits than the classical GRBFNs. Furthermore, for a given learning problem, AGRBFNs usually need fewer hidden nodes than GRBFNs for the same level of accuracy.
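The abstract names the ingredients the chapters below develop: a GRBFN hidden layer trained by incremental gradient descent (Chapter 2), univariate Nadaraya-Watson kernel smoothers (Chapter 3), and their combination in an additive output layer (Chapter 4). The following is a minimal sketch of those ingredients, assuming Gaussian kernels throughout; the names (rbf_hidden, train_grbfn, nadaraya_watson) are hypothetical, and this is an illustration of the general technique, not the thesis's exact formulation.

import numpy as np

def rbf_hidden(X, centers, width):
    # Gaussian hidden-layer activations: phi_j(x) = exp(-||x - c_j||^2 / (2 width^2)).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_grbfn(X, y, centers, width, lr=0.05, epochs=50, seed=0):
    # Classical GRBFN: linear output weights fitted by incremental
    # (sample-by-sample) gradient descent on the squared error.
    rng = np.random.default_rng(seed)
    w, b = np.zeros(len(centers)), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            phi = rbf_hidden(X[i:i + 1], centers, width)[0]
            err = y[i] - (phi @ w + b)
            w += lr * err * phi   # incremental gradient step
            b += lr * err
    return w, b

def nadaraya_watson(z_train, y_train, z_query, h):
    # Univariate Nadaraya-Watson estimator with Gaussian kernel and
    # bandwidth h: m(z) = sum_i K((z - z_i)/h) y_i / sum_i K((z - z_i)/h).
    K = np.exp(-0.5 * ((z_query[:, None] - z_train[None, :]) / h) ** 2)
    return (K @ y_train) / K.sum(axis=1)

# Toy usage on a 1-D regression problem. The thesis selects centers by
# K-means; quantiles are used here only to keep the sketch short.
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
centers = np.quantile(X, np.linspace(0.1, 0.9, 8), axis=0)
w, b = train_grbfn(X, y, centers, width=0.8)
y_hat = rbf_hidden(X, centers, 0.8) @ w + b
print("GRBFN training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))

# AGRBFN idea: replace each fixed output weight by a univariate smoother
# of the corresponding hidden activation (one smoother per hidden node,
# fitted by backfitting in the thesis; here we smooth a single activation
# only to show the mechanism).
phi = rbf_hidden(X, centers, 0.8)
f1 = nadaraya_watson(phi[:, 0], y, phi[:, 0], h=0.1)

With a reasonable bandwidth, each smoothed component is a nonlinear function of its hidden activation rather than a fixed multiple of it, which is the extra flexibility the abstract credits for needing fewer hidden nodes at the same accuracy.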
Acknowledgments i
Abstract (in Chinese) ii
Abstract iii
List of Figures and Tables iv
Glossary of Symbols vi
Glossary of Abbreviations vii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Brief Sketch of the Contents 4
Chapter 2 Generalized Radial Basis Function Networks 7
2.1 Introduction 8
2.2 Training by Incremental Gradient Descent Algorithm 11
2.3 Subset Selection by K-means Algorithm 13
Chapter 3 Additive Models 16
3.1 Introduction 17
3.2 Backfitting Method for Additive Models 20
3.3 Univariate Nadaraya-Watson Estimators 21
3.4 Degrees of Freedom of a Smoother 25
Chapter 4 Additive Generalized Radial Basis Function Networks 27
4.1 Introduction 28
4.2 Training by Steepest Descent Algorithm 31
4.3 Bandwidth Selection 34
4.4 Degrees of Freedom 36
Chapter 5 Illustrative Examples 38
Chapter 6 Conclusion and Discussion 53
References 56