
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 林君玲 (Chun-Ling Lin)
Title: 運用蜂群智慧於類神經網路參數最佳化的研究
Title (English): Neural Network Parameters Optimization Using Swarm Intelligence
Advisor: 孫宗瀛 (Tsung-Ying Sun)
Degree: Master's
Institution: National Dong Hwa University
Department: Department of Electrical Engineering
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2006
Graduation Academic Year: 94 (2005-06)
Language: English
Number of Pages: 82
Keywords (Chinese): 隱藏訊號分離、粒子群最佳化演算法、次經驗演算法則、類神經網路、自成長放射性半徑式函數網路、學習率調整、前向類神經網路
Keywords (English): learning rate, feed-forward neural network, blind source separation, radial basis function neural network, meta-heuristic algorithms, particle swarm optimization, artificial neural networks
Statistics:
  • Cited by: 0
  • Views: 645
  • Rating:
  • Downloads: 80
  • Bookmarked: 1
Artificial neural networks, also known as parallel distributed processors, adaptive systems, self-organizing systems, neurocomputers, or connectionist machines, are a model developed from the workings of the human mind and brain. In terms of architecture, a neural network is composed of many simple, interconnected processing elements; in terms of function, it is a new form of data processing and computation inspired by biological models.
Many researchers in science hope to design neural networks that learn and act intelligently like the human brain, so that complex, intractable, or life-threatening tasks can be handed to such intelligent mechanisms instead of being done by hand. However, training a neural network requires many parameters to be set properly, such as the learning rate and the number of network nodes. These parameters not only affect overall performance but also drive up the cost of the learning process.
This thesis applies meta-heuristic algorithms to the problem of neural network parameter optimization. After comparing the characteristics and drawbacks of each meta-heuristic algorithm, this study adopts particle swarm optimization (PSO), which originates from swarm intelligence, to investigate neural network parameter optimization. PSO exhibits behavior patterns of social constraint and self-cognition similar to those of biological swarms; its advantages are a simple mathematical model that is easy to implement, fast computation, and few parameters to set. This study uses PSO to solve the parameter optimization problems that arise during neural network learning, applying the mechanism to optimizing the learning rate of a feed-forward neural network for blind source separation, and to optimizing the architecture of a self-growing radial basis function network for function estimation. Various simulation experiments verify the robustness and stability of the proposed algorithm for neural network parameter optimization.
Artificial neural networks, also known as parallel distributed processors, adaptive systems, self-organizing systems, neurocomputers, connectionism, and so on, are a model developed from the study of the human mind and brain activity. A neural network consists of many simple processing elements joined by connections, and it offers a new type of data processing and computing methodology inspired by biological models.
Many researchers expect neural networks to acquire intelligence and learning ability like the human brain. Once such networks can be developed, the most complicated problems and highly hazardous occupations could be assigned to these intelligent mechanisms without manual operation. During neural network training, many parameters must be set, such as the learning rate and the number of hidden nodes. These parameters not only directly influence the efficiency of the network but also incur heavy computational cost in the search for an optimal combination.
In this thesis, detailed comparisons among meta-heuristic algorithms are made in order to choose a suitable algorithm for solving parameter optimization problems in neural networks. Based on these characteristics, particle swarm optimization (PSO) is chosen as the most suitable of the meta-heuristic algorithms considered. The PSO algorithm's behavior combines social constraint with self-cognition, resembling that of biological colonies. The advantages of PSO are that it is simple in concept, easy to implement, and computationally efficient, and that only a few parameters need to be adjusted.
The PSO is then applied to a feed-forward neural network (FFNN) to determine a suitable learning rate for the blind source separation (BSS) problem, and to a radial basis function neural network (RBFNN) to determine a suitable number of hidden nodes. The experimental results show that, compared with other related methods, the proposed algorithm achieves greater robustness and efficiency in adjusting neural network parameters.
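For readers unfamiliar with PSO, the canonical update introduced by Kennedy and Eberhart [18] can be sketched as follows. This is a minimal illustration of the general technique, not the thesis's exact variant: the inertia weight w, the acceleration constants c1 and c2, the search box, and the quadratic stand-in fitness are all assumptions chosen for the example; in the thesis the fitness of a candidate parameter (e.g., a learning rate or a cluster distance factor) would be measured by the resulting network's separation or approximation error.

    import numpy as np

    def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
            lower=0.0, upper=1.0):
        """Minimize `fitness` over the box [lower, upper]^dim with canonical PSO."""
        rng = np.random.default_rng(0)
        x = rng.uniform(lower, upper, (n_particles, dim))   # particle positions
        v = np.zeros((n_particles, dim))                    # particle velocities
        pbest = x.copy()                                    # personal best positions
        pbest_f = np.array([fitness(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()              # global best position

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # Velocity update: inertia + cognitive (self) + social (swarm) terms.
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lower, upper)
            f = np.array([fitness(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Hypothetical stand-in fitness: in the thesis this would be the network's
    # separation or approximation error for a candidate learning rate; here we
    # simply pretend the best learning rate is 0.3.
    best, best_f = pso(lambda p: (p[0] - 0.3) ** 2, dim=1)
    print("best learning rate:", best[0], "fitness:", best_f)

Each particle's position encodes the parameter vector under search; the swarm converges as particles are pulled toward their own best positions (self-cognition) and toward the swarm's best position (social constraint).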
Abstract
Acknowledgements
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Preface
1.2 Motivations
1.3 Paper Review
1.3.1 Genetic Algorithm
1.3.2 Simulated Annealing
1.3.3 Ant Colony Optimization
1.3.4 Particle Swarm Optimization
1.3.5 Comparisons of Meta-heuristic Algorithms
1.4 Methodology
1.5 Organizations
Chapter 2 Theory
2.1 A Review of Neural Networks
2.2 Particle Swarm Optimization
2.2.1 Social Network Structure
2.2.2 Particle Swarm Optimization Algorithm
2.2.3 PSO System Parameters
2.2.4 Modified Versions of PSO
2.2.5 Summary
Chapter 3 PSO-based Learning Rate Adjustment for BSS
3.1 Blind Source Separation Problems
3.2 Related Works
3.3 Dependence Measure
3.4 PSO for Learning Rate Adjustment
3.5 Improved Decision Making
3.6 Experiment Results
Chapter 4 PSO-based Self-growing RBF Neural Network
4.1 RBF Network
4.2 The Self-growing Training Algorithm
4.3 PSO-based Cluster Distance Factor Searching
4.4 Numerical Simulation
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Works
References
Author's Biography
1. R. K. Belew and L. B. Booker, eds., Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann, 1991.
2. J. Holland, Adaptation in Natural and Artificial Systems, Ann Arbor, MI: University of Michigan Press, 1975.
3. T. Bäck, U. Hammel and H.-P. Schwefel, “Evolutionary computation: comments on the history and current state,” IEEE Trans. on Evolutionary Computation, vol. 1, no. 1, pp. 3-17, Apr. 1997.
4. S. Chen, Y. Wu and B. L. Luk, “Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks,” IEEE Trans. on Neural Networks, vol. 10, no. 5, pp. 1239-1243, 1999.
5. S. Aiguo and L. Jiren, “Evolving Gaussian RBF network for nonlinear time series modeling and prediction,” Electronics Letters, vol. 34, no. 12, pp. 1241-1243, June 1998.
6. B. Yunfei and L. Zhang, “Genetic algorithm based self-growing training for RBF neural network,” IEEE Neural Networks, vol. 1, pp. 840-845, 2002.
7. D. J. Montana and L. Davis, “Training feedforward neural networks using genetic algorithms,” in Proc. of the International Joint Conference on Artificial Intelligence, pp. 762-767, 1989.
8. S. Kirkpatrick, C. D. Gelatt, Jr. and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671-680, 1983.
9. E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines, John Wiley & Sons, New York, 1989.
10. C. S. Koh, S. Y. Hahn and O. A. Mohammed, “Detection of magnetic body using artificial neural network with modified simulated annealing,” IEEE Trans. on Magnetics, vol. 30, no. 5, pp. 3644-3647, 1994.
11. Y. L. Mao, G. Z. Zhang, B. Zhu and M. Zhou, “Chaotic simulated annealing neural network with decaying chaotic noise and its application in economic load dispatch of power systems,” in Proc. of the 2004 IEEE International Conference on Information Reuse and Integration, pp. 536-542, 2004.
12. M. Dorigo and T. Stützle, Ant Colony Optimization, MIT Press, Cambridge, MA, 2004.
13. G. Bilchev and I. C. Parmee, “The ant colony metaphor for searching continuous design spaces,” in Proc. of the AISB Workshop on Evolutionary Computation, ser. LNCS, vol. 993, pp. 25-39, 1995.
14. N. Monmarché, G. Venturini and M. Slimane, “On how Pachycondyla apicalis ants suggest a new search algorithm,” Future Generation Computer Systems, vol. 16, no. 8, pp. 937-946, June 2000.
15. J. Dréo and P. Siarry, “A new ant colony algorithm using the heterarchical concept aimed at optimization of multiminima continuous functions,” in Proc. of ANTS 2002, ser. LNCS, M. Dorigo et al., Eds., vol. 2463, Springer Verlag, Berlin, Germany, pp. 216-221, 2002.
16. K. Socha, “Extended ACO for continuous and mixed-variable optimization,” in Proc. of ANTS 2004, ser. LNCS, M. Dorigo et al., Eds., Springer Verlag, Berlin, Germany, pp. 25-36, 2004.
17. C. Blum and K. Socha, “Training feed-forward neural networks with ant colony optimization: an application to pattern classification,” in Proc. of the Hybrid Intelligent Systems Conference (HIS-2005), Rio de Janeiro, Brazil, Nov. 6-9, 2005.
18. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proc. of the IEEE International Conference on Neural Networks, Perth, Australia, pp. 1942-1948, 1995.
19. V. G. Gudise and G. K. Venayagamoorthy, “Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks,” in Proc. of the IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, pp. 110-117, April 24-26, 2003.
20. W. Zha and G. K. Venayagamoorthy, “Neural networks based non-uniform scalar quantizer design with particle swarm optimization,” in Proc. of the IEEE Swarm Intelligence Symposium (SIS 2005), pp. 143-148, June 2005.
21. A. Kazemi and C. K. Mohan, “Training feedforward neural networks using multi-phase particle swarm optimization,” in Proc. of the Ninth International Conference on Neural Information Processing, vol. 5, pp. 2615-2619, 2002.
22. V. G. Gudise and G. K. Venayagamoorthy, “Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks,” in Proc. of the IEEE Swarm Intelligence Symposium, pp. 110-117, 2003.
23. T. Y. Sun, S. T. Hsieh and C. W. Lin, “Particle swarm optimization incorporated with disturbance for improving the efficiency of macrocell overlap removal and placement,” in Proc. of the 2005 International Conference on Artificial Intelligence (ICAI'05), pp. 122-125, June 2005.
24. 張孝德 and 蘇木春, 機器學習:類神經網路、模糊系統以及基因演算法則 [Machine Learning: Neural Networks, Fuzzy Systems and Genetic Algorithms], 全華, 1997 (in Chinese).
25. J. Kennedy, “Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance,” in Proc. of the IEEE Congress on Evolutionary Computation, vol. 3, pp. 1931-1938, 1999.
26. C. L. Lin, S. T. Hsieh, T. Y. Sun and C. C. Liu, “PSO-based learning rate adjustment for blind source separation,” in Proc. of the International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS), pp. 181-184, Dec. 2005.
27. F. van den Bergh and A. P. Engelbrecht, “Cooperative learning in neural networks using particle swarm optimizers,” South African Computer Journal, no. 26, pp. 84-90, 2000.
28. A. Cichocki and S.-I. Amari, Adaptive Blind Signal and Image Processing, Wiley, 2002.
29. A. J. Bell and T. J. Sejnowski, “Learning the higher-order structure of a natural sound,” Network: Computation in Neural Systems, pp. 261-266, 1996.
30. A. J. Bell and T. J. Sejnowski, “An information-maximisation approach to blind separation and blind deconvolution,” Neural Computation, vol. 7, no. 6, pp. 1129-1159, 1995.
31. S. Amari, “Theory of adaptive pattern classifiers,” IEEE Trans. on Electronic Computers, vol. EC-16, pp. 299-307, 1967.
32. S. C. Douglas and A. Cichocki, “Adaptive step size techniques for decorrelation and blind source separation,” in Proc. of the 32nd Asilomar Conf. on Signals, Systems, and Computers, vol. 2, Pacific Grove, CA, pp. 1191-1195, Nov. 1998.
33. S. T. Lou and X. D. Zhang, “Fuzzy-based learning rate determination for blind source separation,” IEEE Trans. on Fuzzy Systems, vol. 11, no. 3, pp. 375-383, June 2003.
34. P. Comon, “Independent component analysis, a new concept?” Signal Processing, vol. 36, pp. 287-314, 1994.
35. J. F. Cardoso and B. H. Laheld, “Equivariant adaptive source separation,” IEEE Trans. on Signal Processing, vol. 44, pp. 3017-3030, Dec. 1996.
36. S. Amari, A. Cichocki and H.-H. Yang, “A new learning algorithm for blind signal separation,” in Advances in Neural Information Processing Systems, vol. 8, Cambridge, MA: MIT Press, pp. 752-763, 1996.
37. S. Cruces, A. Cichocki and L. Castedo, “An iterative inversion approach to blind source separation,” IEEE Trans. on Neural Networks, vol. 11, pp. 1423-1437, Nov. 2000.
38. A. Cichocki and R. Unbehauen, “Robust neural networks with on-line learning for blind identification and blind separation of sources,” IEEE Trans. on Circuits and Systems I, vol. 43, pp. 894-906, Oct. 1996.
39. S. Amari, “Theory of adaptive pattern classifiers,” IEEE Trans. on Electronic Computers, vol. EC-16, pp. 299-307, 1967.
40. N. Murata, K. Müller, A. Ziehe and S. Amari, “Adaptive on-line learning in changing environments,” in Advances in Neural Information Processing Systems 9, Cambridge, MA: MIT Press, pp. 599-605, 1997.
41. S. C. Douglas and A. Cichocki, “Adaptive step size techniques for decorrelation and blind source separation,” in Proc. of the 32nd Asilomar Conf. on Signals, Systems, and Computers, vol. 2, Pacific Grove, CA, pp. 1191-1195, Nov. 1998.
42. S. Amari and A. Cichocki, “Adaptive blind signal processing—neural network approaches,” Proc. IEEE, vol. 86, pp. 2026-2048, 1998.
43. C. L. Lin, S. T. Hsieh, T. Y. Sun and C. C. Liu, “PSO-based learning rate adjustment for blind source separation,” in Proc. of the International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS), pp. 181-184, Dec. 2005.
44. D. S. Broomhead and D. Lowe, “Multivariable functional interpolation and adaptive networks,” Complex Systems, vol. 2, pp. 321-355, 1988.
45. D. Lowe, “Adaptive radial basis function nonlinearities, and the problem of generalization,” in Proc. of the IEE International Conference on Artificial Neural Networks, London, UK, pp. 171-175, 1989.
46. J. A. S. Freeman and D. Saad, “Learning and generalization in radial basis function networks,” Neural Computation, vol. 9, no. 7, pp. 1601-1622, 1997.
47. J. Moody and C. J. Darken, “Fast learning in networks of locally-tuned processing units,” Neural Computation, vol. 1, pp. 281-294, 1989.
48. N. B. Karayiannis and G. W. Mi, “Growing radial basis neural networks: merging supervised and unsupervised learning with network growth techniques,” IEEE Trans. on Neural Networks, vol. 8, no. 6, pp. 1492-1506, Nov. 1997.
49. N. Zheng, Z. Zhang, G. Shi and Y. Qiao, “Self-creating and adaptive learning of RBF networks: merging soft-competition clustering algorithm with network growth technique,” in Proc. of the International Joint Conference on Neural Networks (IJCNN'99), vol. 2, pp. 1131-1135, 1999.
50. S. Chen, “Nonlinear time series modeling and prediction using Gaussian RBF networks with enhanced clustering and RLS learning,” Electronics Letters, vol. 31, no. 2, pp. 117-118, 1995.
51. A. Song and J. Lu, “Evolving Gaussian RBF network for nonlinear time series modeling and prediction,” Electronics Letters, vol. 34, no. 12, pp. 1241-1243, 1998.