[1] I. Moghram and S. Rahman, “Analysis and Evaluation of Five Load Forecasting Techniques,” IEEE Trans. on Power Systems, vol. 4, no. 4, pp. 1484–1491, 1989.
[2] M. T. Hagan and S. M. Behr, “The Time Series Approach to Short Term Load Forecasting,” IEEE Trans. on Power Systems, vol. PWRS-2, no. 3, pp. 832–837, 1987.
[3] S. Vemuri, W. L. Huang and D. L. Nelson, “On-Line Algorithms for Forecasting Hourly Loads of an Electrical Utility,” IEEE Trans. on Power Apparatus and Systems, vol. PAS-100, no. 8, pp. 3775–3784, Aug. 1981.
[4] H. Akagi, “New trends in active filters for power conditioning,” IEEE Trans. on Industry Applications, vol. 32, no. 6, pp. 1312-1322, Nov./Dec. 1996.
[5] H. Yoo and R. L. Pimmel, “Short-term Load Forecasting Using a Self-Supervised Adaptive Neural Network,” IEEE Trans. on Power Systems, vol. 14, no. 2, pp. 779-784, May 1999.
[6] J. Vermaak and E. C. Botha, “Recurrent Neural Networks for Short-Term Load Forecasting,” IEEE Trans. on Power Systems, vol. 13, no. 1, pp. 126-132, Feb. 1998.
[7] J. W. Taylor and R. Buizza, “Neural Network Load Forecasting with Weather Ensemble Predictions,” IEEE Trans. on Power Systems, vol. 17, no. 3, pp. 626-632, Aug. 2002.
[8] S. Osowski and K. Siwek, “Regularisation of Neural Networks for Improved Load Forecasting in the Power System,” IEE Proc.-Gener. Transm. Distrib., vol. 149, no. 3, pp. 340-344, May 2002.
[9] N. B. Karayiannis, M. Balasubramanian and H. A. Malki, “Evaluation of Cosine Basis Function Neural Networks on Electric Power Load Forecasting,” Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 2100-2105, Jul. 20-24, 2003.
[10] L. M. Saini and M. K. Soni, “Artificial Neural Network Based Peak Load Forecasting Using Levenberg-Marquardt and Quasi-Newton Methods,” IEE Proc.-Gener. Transm. Distrib., vol. 149, no. 5, pp. 578-584, Sep. 2002.
[11] U. Fayyad, G. Piatetsky-Shapiro and P. Smyth, “The KDD Process for Extracting Useful Knowledge from Volumes of Data,” Communications of the ACM, vol. 39, no. 11, pp. 27-34, Nov. 1996.
[12] UniMiner, 探宇科技股份有限公司: http://www.uniminer.com/center01.htm, Jun. 2010.
[13] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
[14] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, U.K.: Cambridge Univ. Press, 2000.
[15] B. Schölkopf and A. J. Smola, Learning with Kernels. Cambridge, MA: MIT Press, 2001.
[16] B. Schölkopf, A. J. Smola, R. Williamson and P. Bartlett, “New support vector algorithms,” Neural Comput., vol. 12, no. 5, pp. 1207–1245, 2000.
[17] C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining Knowl. Disc., vol. 2, no. 2, pp. 121–168, 1998.
[18] W. M. Lin, C. H. Wu, C. H. Lin and F. S. Cheng, “Classification of Multiple Power Quality Disturbances Using Support Vector Machine and One-versus-One Approach,” IEEE International Conference on Power System Technology, Chongqing, China, Oct. 2006.
[19] C. W. Hsu and C. J. Lin, “A comparison of methods for multi-class support vector machines,” IEEE Trans. Neural Networks, vol. 13, no. 2, pp. 415–425, Mar. 2002.
[20] 吳建賢, “Applications of Soft Computing for Power-Quality Detection and Electric Machinery Fault Diagnosis,” Ph.D. dissertation, Institute of Electrical Engineering, National Sun Yat-sen University, Oct. 2008.
[21] 林義隆, “SVM-based Robust Template Design of Cellular Neural Networks and Primary Study of Wilcoxon Learning Machines,” Ph.D. dissertation, Institute of Electrical Engineering, National Sun Yat-sen University, Dec. 2006.
[22] B. J. de Kruif and T. J. A. de Vries, “Pruning Error Minimization in Least Squares Support Vector Machines,” IEEE Trans. on Neural Networks, vol. 14, no. 3, pp. 696-702, May 2003.
[23] T. Van Gestel, J. A. K. Suykens, D.-E. Baestaens, A. Lambrechts, G. Lanckriet, B. Vandaele, B. De Moor and J. Vandewalle, “Financial Time Series Prediction Using Least Squares Support Vector Machines Within the Evidence Framework,” IEEE Trans. on Neural Networks, vol. 12, no. 4, pp. 809-821, Jul. 2001.
[24] V. Kecman, Learning and Soft Computing. Cambridge, MA: MIT Press, pp. 11-298, 2001.
[25] H. W. Kuhn and A. W. Tucker, “Nonlinear programming,” in Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA: University of California Press, pp. 481-492, 1951.
[26] M. A. Aizerman, E. M. Braverman and L. I. Rozonoer, “Theoretical Foundations of the Potential Function Method in Pattern Recognition Learning,” Autom. Remote Control, vol. 25, 1964.
[27] J. Kennedy and R. Eberhart, “Particle swarm optimization,” Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Nov.-Dec. 1995.
[28] Y. Shi and R. Eberhart, “Empirical study of particle swarm optimization,” Proc. IEEE Congress on Evolutionary Computation, vol. 3, pp. 69-73, 1999.
[29] F. S. Cheng, J. S. Tu, C. H. Lv and M. T. Tsay, “A Generalized Regression Neural Network for Solving Economic Dispatch Problem,” International Conference on Electrical Engineering (ICEE) for the 21st Century with Focus on Sustainability and Reliability, p. 113, Jul. 2007.
[30] A. Ratnaweera, S. K. Halgamuge and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Trans. on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, Jun. 2004.
[31] K. T. Chaturvedi, M. Pandit and L. Srivastava, “Self-Organizing Hierarchical Particle Swarm Optimization for Nonconvex Economic Dispatch,” IEEE Trans. on Power Systems, vol. 23, no. 3, pp. 1079-1087, Aug. 2008.
[32] M. Clerc and J. Kennedy, “The particle swarm - explosion, stability, and convergence in a multidimensional complex space,” IEEE Trans. on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, Feb. 2002.