[1] B. Widrow and R. Winter, “Neural nets for adaptive filtering and adaptive pattern recognition”, IEEE Computer, Vol. 21, No. 3, pp. 25-39, 1988.
[2] T. F. Burks, S. A. Shearer, J. R. Heath, and K. D. Donohue, “Evaluation of neural-network classifiers for weed species discrimination”, Biosystems Engineering, pp. 293-304, 2005.
[3] C. M. Kuan and K. Hornik, “Convergence of learning algorithms with constant learning rates”, IEEE Transactions on Neural Networks, Vol. 2, No. 5, pp. 484-490, 1991.
[4] J. A. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.
[5] J. K. Kruschke and J. R. Movellan, “Benefits of gain: Speeded learning and minimal hidden layers in back-propagation networks”, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 21, pp. 273-280, 1991.
[6] N. B. Karayiannis and A. N. Venetsanopoulos, “Efficient learning algorithms for neural networks”, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 23, No. 5, pp. 1372-1383, 1993.
[7] R. L. Watrous, “Learning algorithms for connectionist networks: Applied gradient methods of nonlinear optimization”, IEEE First International Conference on Neural Networks, Vol. 2, pp. 619-627, 1987.
[8] M. Khalid, S. Omatu, and R. Yusof, “Self learning process control systems by neural networks”, Proceedings of the 31st IEEE Conference on Decision and Control, Vol. 1, pp. 889-894, 1992.
[9] P. R. Nachtsheim, “A first order adaptive learning rate algorithm for backpropagation networks”, IEEE International Conference on Neural Networks, IEEE World Congress on Computational Intelligence, Vol. 1, pp. 257-262, 1994.
[10] V. J. Mathews and Z. Xie, “A stochastic gradient adaptive filter with gradient adaptive step size”, IEEE Transactions on Signal Processing, Vol. 41, No. 6, pp. 2075-2087, 1993.
[11] W. Ang and B. Farhang-Boroujeny, “A new class of gradient adaptive step-size LMS algorithm”, IEEE Transactions on Signal Processing, Vol. 49, No. 4, pp. 805-810, 2001.
[12] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation”, Parallel Distributed Processing, Vol. 1, pp. 318-362, MIT Press, 1986.
[13] W. J. Wei, Z. J. Li, L. S. Wei, and H. Zhen, “The improvements of BP neural network learning algorithm”, Proceedings of the IEEE International Conference on Signal Processing, Vol. 3, pp. 1647-1649, 2000.
[14] S. C. Ng, C. C. Cheung, and S. H. Leung, “Deterministic weight modification algorithm for efficient learning”, Proceedings of the 2004 IEEE International Conference on Neural Networks, Vol. 2, pp. 1033-1038, 2004.
[15] C. C. Yu and B. D. Liu, “A backpropagation algorithm with adaptive learning rate and momentum coefficient”, IEEE International Joint Conference on Neural Networks, Vol. 2, pp. 1218-1223, 2002.
[16] V. V. Phansalkar and P. S. Sastry, “Analysis of the backpropagation algorithm with momentum”, IEEE Transactions on Neural Networks, Vol. 5, No. 3, pp. 505-506, 1994.
[17] Y. F. Yam and T. W. S. Chow, “Extended back-propagation algorithm”, Electronics Letters, Vol. 29, No. 19, pp. 1701-1702, 1993.
[18] A. A. Miniani and R. D. Williams, “Acceleration of back-propagation through learning rate and momentum adaptation”, IEEE International Joint Conference on Neural Networks, Vol. 1, pp. 676-679, 1990.
[19] C. Charalambous, “Conjugate gradient algorithm for efficient training of artificial neural networks”, IEE Proceedings-G, Vol. 139, No. 3, pp. 301-310, 1992.
[20] M. Towsey, D. Alpsan, and L. Sztriha, “Training a neural network with conjugate gradient methods”, Proceedings of IEEE International Conference on Neural Networks, Vol. 1, pp. 373-378, 1995.
[21] 羅華強, Neural Networks: MATLAB Applications (in Chinese), 高立圖書有限公司, 2005.
[22] 歐崇明, An Introductory Handbook to MATLAB (in Chinese), 高立圖書有限公司, 2000.
[23] 吳駖, MATLAB Programming and Practical Applications (in Chinese), 文魁資訊股份有限公司, 2005.
[24] 李志陞, “A Study of Sigma-Pi Networks with a Modified Learning Rule” (in Chinese), Master's thesis, Department of Electrical Engineering, I-Shou University, 2003.
[25] 黃督周, “A Study of Multivariate Regression in Neural Network Learning” (in Chinese), Master's thesis, Department of Electrical Engineering, I-Shou University, 2004.
[26] Y. Manabe, B. Chakraborty, and H. Fujita, “Structural learning of multilayer feed forward neural networks for continuous valued functions”, Proceedings of the 9th Information Processing, Vol. 1, 2002.
[27] S. H. Oh, “Improving the error backpropagation algorithm with a modified error function”, IEEE Transactions on Neural Networks, Vol. 8, No. 3, pp. 799-803, 1997.
[28] S. Abid, F. Fnaiech, and M. Najim, “A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm”, IEEE Transactions on Neural Networks, Vol. 12, pp. 424-430, 2001.
[29] 張斐章, 張麗秋, and 黃浩倫, Neural Networks: Theory and Practice (in Chinese), 高立圖書有限公司, 2001.
[30] 蔡孟陶, “A Study on the Convergence Analysis of Back-Propagation Neural Networks” (in Chinese), Master's thesis, Department of Mechanical and Mechatronic Engineering, National Taiwan Ocean University, 2005.
[31] 施柏屹, “A Preliminary Study on the Convergence of Back-Propagation Neural Networks” (in Chinese), Master's thesis, Institute of Mechanical Engineering, National Central University, 2000.
[32] Y. Wu and H. Shi, “Improvement of neural network learning algorithm and its application in control”, Proceedings of the 3rd World Congress on Neural Networks, Vol. 2, pp. 971-975, 2005.
[33] M. T. Hagan, H. B. Demuth, and M. Beale, Neural Network Design, Thomson, 1996.
[34] 蘇柏仁, “A Study of PC-Based Remote Real-Time Monitoring and Control for Intelligent Servo Motors” (in Chinese), Master's thesis, Department of Mechanical and Mechatronic Engineering, National Taiwan Ocean University, 2004.
[35] L. Wang, Y. Wang, and Q. Guo, “Neural network based brushless DC motor servo system”, Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society (IECON '98), Vol. 1, pp. 67-71, 1998.
[36] M. Elbuluk, T. Liu, and I. Husain, “Neural network-based model reference adaptive systems for high performance motor drives and motion controls”, IEEE Transactions on Industry Applications, Vol. 38, pp. 879-886, 2002.
[37] 葉宜成,「類神經網路模式應用與實作」,儒林圖書有限公司,1993年。
[38] Z. Liu, X. Zhuang, and S. Wang, “Speed control of a DC motor using BP neural networks”, Proceedings of IEEE Conference on Control Applications, Vol. 1, pp. 23-25, 2003.
[39] R. K. Brouwer, “Fuzzy rule extraction from a feed forward neural network by training a representative fuzzy neural network using gradient descent”, Proceedings of the IEEE International Conference on Industrial Technology, Vol. 3, pp. 1168-1172, 2004.
[40] P. Zheng, Q. Keyun, and X. Yang, “Dynamic adaptive fuzzy neural-network identification and its application”, IEEE International Conference on Systems, Man and Cybernetics, Vol. 5, pp. 4974-4979, 2003.