[1] H. Alam, F. Rahman, Y. Tarnikova and R. Hartono, “A pair-wise decision fusion framework: recognition of human faces”, Proceedings of the Sixth International Conference on Information Fusion, vol. 2, pp. 1484-1489, 2003
[2] J.L. An, Z.O. Wang and Z.P. Ma, “An incremental learning algorithm for support vector machine”, Proceedings of the Second International Conference on Machine Learning and Cybernetics, vol. 2, pp. 1153-1156, Nov. 2003
[3] P.N. Belhumeur, J.P. Hespanha and D.J. Kriegman, “Eigenfaces vs. fisherfaces: recognition using class specific linear projection”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997
[4] C.J.C. Burges, “A tutorial on support vector machines for pattern recognition”, Data Mining and Knowledge Discovery, vol. 2, pp. 121-167, 1998
[5] R. Cappelli, D. Maio and D. Maltoni, “Subspace adaptation for incremental face learning”, International Conference on Control, Automation, Robotics and Vision, vol. 2, pp. 974-979, 2-5 Dec. 2002
[6] S. Chen, S.R. Gunn and C.J. Harris, “The relevance vector machine technique for channel equalization application”, IEEE Trans. on Neural Networks, vol. 12, pp. 1529-1532, Nov. 2001
[7] J.T. Chien and C.H. Huang, “Online speaker adaptation based on quasi-Bayes linear regression”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 329-332, 7-11 May 2001
[8] J.T. Chien and C.H. Huang, “Bayesian duration modeling and learning for speech recognition”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 1005-1008, 17-21 May 2004
[9] J.T. Chien and C.H. Huang, “Bayesian learning of speech duration models”, IEEE Trans. on Speech and Audio Processing, vol. 11, no. 6, pp. 558-567, Nov. 2003
[10] J.T. Chien, “Quasi-Bayes linear regression for sequential learning of hidden Markov models”, IEEE Trans. on Speech and Audio Processing, vol. 10, no. 5, pp. 268-278, July 2002
[11] C. Cortes and V. Vapnik, “Support-vector networks”, Machine Learning, vol. 20, pp. 273-297, 1995
[12] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge Univ. Press, 2000
[13] C.P. Diehl and G. Cauwenberghs, “SVM incremental learning, adaptation and optimization”, Proceedings of the International Joint Conference on Neural Networks, vol. 4, pp. 2685-2690, July 2003
[14] K. Fukunaga, Introduction to Statistical Pattern Recognition, second edition, Academic Press, 1991
[15] T.V. Gestel, J.A.K. Suykens, G. Lanckriet, A. Lambrechts, B.D. Moor and J. Vandewalle, “A Bayesian framework for least squares support vector machine classifiers, Gaussian processes and kernel Fisher discriminant analysis”, Neural Computation, vol. 14, no. 5, pp. 1115-1147, 2002
[16] T.V. Gestel, J.A.K. Suykens, D.-E. Baestaens, A. Lambrechts, G. Lanckriet, B. Vandaele, B.D. Moor and J. Vandewalle, “Financial time series prediction using least squares support vector machines within the evidence framework”, IEEE Trans. on Neural Networks, vol. 12, pp. 809-821, July 2001
[17] W. Huang, B.H. Lee, L. Li and K. Leman, “Face recognition by incremental learning”, IEEE International Conference on Systems, Man and Cybernetics, vol. 5, pp. 4718-4723, 5-8 Oct. 2003
[18] J. Quiñonero-Candela and L.K. Hansen, “Time series prediction based on the relevance vector machine with adaptive kernels”, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 13-17, 2002
[19] I.T. Jolliffe, Principal Component Analysis, ser. Springer Series in Statistics, New York: Springer-Verlag, 1986
[20] S.S. Keerthi and E.G. Gilbert, “Convergence of a generalized SMO algorithm for SVM classifier design”, Machine Learning, vol. 46, pp. 351-360, 2002
[21] S.S. Keerthi and S.K. Shevade, “SMO algorithm for least squares SVM”, Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 2088-2093, 20-24 July 2003
[22] K.I. Kim, K. Jung, S.H. Park and H.J. Kim, “Support vector machines for texture classification”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 11, pp. 1542-1550, Nov. 2002
[23] B.J.D. Kruif and T.J.A.D. Vries, “Pruning error minimization in least squares support vector machines”, IEEE Trans. on Neural Networks, vol. 14, no. 3, pp. 696-702, May 2003
[24] J.T. Kwok, “Moderating the output of support vector classifiers”, IEEE Trans. on Neural Networks, vol. 10, pp. 1018-1031, Sep. 1999
[25] J.T. Kwok, “The evidence framework applied to support vector machines”, IEEE Trans. on Neural Networks, vol. 11, pp. 1162-1173, Sep. 2000
[26] G.R.G. Lanckriet, L.E. Ghaoui, C. Bhattacharyya and M.I. Jordan, “A robust minimax approach to classification”, Journal of Machine Learning Research, vol. 3, pp. 555-582, 2002
[27] C.P. Liao, H.J. Lin, C.C. Huang and J.T. Chien, “Multiple human face detection in complex background”, Proceedings of the 2002 Computer Graphics Workshop, Tainan, Taiwan, June 2002
[28] D.J.C. MacKay, “Bayesian interpolation”, Neural Computation, vol. 4, no. 3, pp. 415-447, May 1992
[29] D.J.C. MacKay, “The evidence framework applied to classification networks”, Neural Computation, vol. 4, no. 4, pp. 720-736, Sep. 1992
[30] A.M. Martinez and A.C. Kak, “PCA versus LDA”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, pp. 228-233, Feb. 2001
[31] R. Mitra, C.A. Murthy and S.K. Pal, “Data condensation in large databases by incremental learning with support vector machines”, International Conference on Pattern Recognition, Sept. 2000
[32] J.C. Platt, “Sequential minimal optimization: a fast algorithm for training support vector machines”, Technical Report MSR-TR-98-14, Microsoft Research, Redmond, 1998
[33] S. Rüping, “Incremental learning with support vector machines”, Proceedings of the IEEE International Conference on Data Mining, 2001
[34] B. Schölkopf, K. Sung, C.J.C. Burges, F. Girosi, P. Niyogi, T. Poggio and V. Vapnik, “Comparing support vector machines with Gaussian kernels to radial basis function classifiers”, IEEE Trans. on Signal Processing, vol. 45, no. 11, pp. 2758-2765, 1997
[35] J.A.K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers”, Neural Processing Letters, pp. 293-300, 1999
[36] J.A.K. Suykens, J. de Brabanter, L. Lukas and J. Vandewalle, “Weighted least squares support vector machines: robustness and sparse approximation”, Neurocomputing, 2002
[37] J.A.K. Suykens, T.V. Gestel, J. Vandewalle and B.D. Moor, “A support vector machine formulation to PCA analysis and its kernel version”, IEEE Trans. on Neural Networks, vol. 14, no. 2, pp. 447-450, Mar. 2003
[38] N. Takahashi and T. Nishi, “Rigorous proof of termination of SMO algorithm for support vector machines”, IEEE Trans. on Neural Networks, vol. 16, no. 3, pp. 774-776, May 2005
[39] M.E. Tipping, “The relevance vector machine”, in Advances in Neural Information Processing Systems 12, pp. 652-658, MIT Press, 2000
[40] M.E. Tipping, “Sparse Bayesian learning and the relevance vector machine”, Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001
[41] V. Vapnik, Statistical Learning Theory, New York: Wiley, 1998
[42] V. Vapnik, The Nature of Statistical Learning Theory, New York: Springer-Verlag, 1995