
National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 楊世宏 (S. H. Yang)
Title: 具演化式結構學習能力之類神經網路及其預測之應用
Title (English): Neural Network with Evolutionary Structure Learning and Its Prediction Application
Advisor: 陳永平 (Y. P. Chen)
Degree: Ph.D.
Institution: National Chiao Tung University
Department: Institute of Electrical and Control Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2011
Academic year of graduation: 100 (ROC calendar, 2011-12)
Language: English
Pages: 104
Keywords (Chinese): 演化計算; 類神經網路; 結構學習
Keywords (English): evolutionary computation; neural network; structure learning
Statistics:
  • Cited by: 0
  • Views: 504
  • Downloads: 66
  • Bookmarked: 2
This dissertation proposes a feedforward-neural-network-aided grey model together with its on-line parameter learning and structure learning algorithms. The model adopts a first-order single-variable grey model to predict the signal and uses a feedforward neural network to compensate for the grey model's prediction error. In addition, an on-line batch training method is proposed to update the network weights in real time, so the model can perform prediction while continuously adapting to dynamic changes in the signal. To design the model structure effectively, this dissertation proposes a neuron-based structure learning method, called the symbiotic structure learning algorithm, to establish the network topology. The algorithm first constructs a neuron population and then builds a network population from it; because the neurons in the population carry hyperbolic-tangent and linear activation functions, the algorithm can easily develop arbitrary cascade and feedforward networks. Following the idea of symbiotic evolution, it further performs neuron crossover and mutation on the neuron population, and the evolved feedforward-neural-network-aided grey model then predicts the signal while continuously adapting to the environment through on-line batch training. On the other hand, this dissertation proposes a network-based structure learning method, called the evolutionary constructive and pruning algorithm, which combines constructive and pruning concepts in an evolutionary way to design the network topology. The algorithm starts from a population of neural networks with the simplest structure, i.e., networks with a single hidden neuron connected to one input node, and applies network crossover and mutation to add hidden neurons and connections, raising the networks' signal-processing capability. In addition, a cluster-based pruning method is proposed to remove insignificant neurons stochastically, and an age-based survival selection is proposed to delete older networks, which may have complex structures, and to introduce new networks with the simplest structure. Numerical simulations and experimental results demonstrate the effectiveness and feasibility of the proposed methods on prediction problems.
This dissertation proposes a feedforward-neural-network-aided grey model (FNAGM) and its related on-line parameter learning and structure learning algorithms. The FNAGM uses a first-order single-variable grey model (GM(1,1)) to predict the signal and adopts a feedforward neural network (NN) to compensate for the prediction error of GM(1,1). Furthermore, an on-line batch training method is proposed to update the weights of the NN in real time, so the FNAGM can predict precisely and adapt itself to dynamic changes in the signal. To design the structure of the FNAGM efficiently, a neuron-based structure learning method, called the symbiotic structure learning algorithm (SSLA), is proposed to establish the topology of the NN. The SSLA constructs a neuron population, builds a network population from it, and can easily develop arbitrary cascade and feedforward NNs. Following the idea of symbiotic evolution, the SSLA further carries out neuron crossover and mutation on the neuron population. The evolved FNAGM is applied to predict the signal and continuously adapts itself to the environment through on-line batch training. On the other hand, a network-based structure learning method, called the evolutionary constructive and pruning algorithm (ECPA), is proposed to design the topology of the NN by combining constructive and pruning methods in an evolutionary way. The ECPA starts from a set of NNs with the simplest possible structure, i.e., one hidden neuron connected to a single input node. It then adds hidden neurons and connections through network crossover and mutation to increase the processing capability of the NNs. Furthermore, a cluster-based pruning method is proposed to remove insignificant neurons stochastically, and an age-based survival selection is proposed to delete old NNs, which tend to have complex structures, and to introduce new NNs with the simplest possible structure. Numerical and experimental results on prediction problems show the effectiveness and feasibility of the proposed methods.
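The GM(1,1) forecasting step that the FNAGM builds on is standard and can be sketched compactly: the raw series is accumulated (AGO), the whitening-equation parameters a and b are estimated by least squares, and the forecast is recovered by the inverse AGO. The sketch below is a minimal NumPy illustration of that step only; the function names `gm11_fit` and `gm11_predict` are illustrative, and the dissertation's feedforward-NN error compensator and on-line batch training are not included.

```python
import numpy as np

def gm11_fit(x0):
    """Least-squares fit of a first-order single-variable grey model GM(1,1).

    Returns the development coefficient a and grey input b of the
    whitening equation dx1/dt + a*x1 = b.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # mean sequence of consecutive AGO values
    B = np.column_stack([-z1, np.ones_like(z1)])  # regressor matrix of the grey difference equation
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)
    return a, b

def gm11_predict(x0, a, b, k):
    """One-step forecast of x0 at index k (0-based) via the inverse AGO."""
    x1_hat = lambda t: (x0[0] - b / a) * np.exp(-a * t) + b / a
    return x1_hat(k) - x1_hat(k - 1)              # difference of AGO forecasts restores the original domain
```

For a slowly growing series such as `[10.0, 10.5, 11.0, 11.5, 12.0]`, fitting and calling `gm11_predict(x0, a, b, len(x0))` extrapolates the near-linear trend to roughly the next expected value; in the FNAGM, the residual between such forecasts and the observed signal is what the feedforward NN learns to compensate.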
ABSTRACT (IN CHINESE)
ABSTRACT
ACKNOWLEDGEMENTS
CONTENTS
LIST OF FIGURES
LIST OF TABLES
SYMBOLS
Chapter 1 Introduction
1.1 Motivation
1.2 Literature Survey
1.2.1 On-Line Parameter Learning
1.2.2 Structure Learning
1.3 Organization of Dissertation
Chapter 2 On-Line Parameter Learning for Prediction
2.1 Feedforward-Neural-Network-Aided Grey Model
2.1.1 Neural Networks
2.1.2 First-Order Single Variable Grey Model
2.1.3 Structure of FNAGM
2.2 On-Line Parameter Learning of FNAGM
2.2.1 On-Line Batch Training
2.2.2 Convergence Analysis
2.3 Numerical Results
2.3.1 Example 1: Disturbance Prediction
2.3.2 Example 2: Chaotic Time Series Prediction
2.4 Experimental Results
2.4.1 Trajectory Prediction
2.4.2 Tracking Control
2.5 Summary
Chapter 3 Neuron-Based Structure Learning for Prediction
3.1 Structure Learning Based on Symbiotic Evolution
3.2 Symbiotic Structure Learning Algorithm
3.2.1 Initialization Phase
3.2.2 Evaluation Phase
3.2.3 Reproduction Phase
3.3 Numerical Results
3.3.1 Example 1: Chaotic Time Series Prediction
3.3.2 Example 2: Object Trajectory Prediction
3.4 Summary
Chapter 4 Network-Based Structure Learning for Prediction
4.1 Basic Concept of Evolutionary Algorithms
4.2 Evolutionary Constructive and Pruning Algorithm
4.2.1 Encoding Scheme and Design Mechanism
4.2.2 Network Crossover
4.2.3 Network Mutation
4.2.4 Cluster-Based Pruning
4.2.5 Age-Based Survival Selection
4.3 Numerical Results
4.3.1 Example 1: Chaotic Time Series Prediction
4.3.2 Example 2: Forecasting the Number of Sunspots
4.3.3 Example 3: Vehicle Count Prediction
4.3.4 Effect of CBP and ABSS
4.3.5 Discussion
4.4 Summary
Chapter 5 Conclusion and Future Work
Bibliography
Vita
Publication List