References
[1] B. Widrow and M. Hoff, “Adaptive switching circuits,” in IRE WESCON Conv. Rec., pp. 96-104, 1960.
[2] J. Koford and G. Groner, “The use of an adaptive threshold element to design a linear optimal pattern classifier,” IEEE Transactions on Information Theory, vol. 12, no. 1, pp. 42-50, 1966.
[3] F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Washington, D.C.: Spartan Books, 1961.
[4] D. Gabor, W. P. L. Wilby, and R. Woodcock, “A universal non-linear filter, predictor and simulator which optimizes itself by a learning process,” Proceedings of the IEE - Part B: Electronic and Communication Engineering, vol. 108, no. 40, pp. 422-435, 1961.
[5] B. Widrow, J. R. Glover, J. M. McCool, J. Kaunitz, C. S. Williams, R. H. Hearn, J. R. Zeidler, E. Dong, Jr., and R. C. Goodlin, “Adaptive noise cancelling: Principles and applications,” Proceedings of the IEEE, vol. 63, no. 12, pp. 1692-1716, 1975.
[6] S. Haykin, Adaptive Filter Theory, 4th ed., Prentice-Hall, 2002.
[7] E. A. Wan and R. van der Merwe, “The unscented Kalman filter for nonlinear estimation,” in Proc. AS-SPCC, pp. 153-158, 2000.
[8] F. Daum, “Nonlinear filters: Beyond the Kalman filter,” IEEE Aerospace and Electronic Systems Magazine, vol. 20, no. 8, pp. 57-69, 2005.
[9] L. Tan and J. Jiang, “Adaptive Volterra filters for active control of nonlinear noise processes,” IEEE Transactions on Signal Processing, vol. 49, no. 8, pp. 1667-1676, 2001.
[10] G. Horvath and T. Szabo, “CMAC neural network with improved generalization property for system modeling,” in Proc. IMTC, vol. 2, pp. 1603-1608, 2002.
[11] C. M. Lin, L. Y. Chen, and D. S. Yeung, “Adaptive filter design using recurrent cerebellar model articulation controller,” IEEE Transactions on Neural Networks, vol. 21, no. 7, pp. 1149-1157, 2010.
[12] C. M. Lin and Y. F. Peng, “Adaptive CMAC-based supervisory control for uncertain nonlinear systems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1248-1260, 2004.
[13] C. P. Hung, “Integral variable structure control of nonlinear system using a CMAC neural network learning approach,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 1, pp. 702-709, 2004.
[14] J. S. Albus, “A new approach to manipulator control: The cerebellar model articulation controller (CMAC),” Journal of Dynamic Systems, Measurement, and Control, vol. 97, no. 3, pp. 220-227, 1975.
[15] S. Commuri and F. L. Lewis, “CMAC neural networks for control of nonlinear dynamical systems: Structure, stability and passivity,” Automatica, vol. 33, no. 4, pp. 635-641, 1997.
[16] Y. H. Kim and F. L. Lewis, “Optimal design of CMAC neural-network controller for robot manipulators,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 30, no. 1, pp. 22-31, 2000.
[17] J. Y. Wu, “MIMO CMAC neural network classifier for solving classification problems,” Applied Soft Computing, vol. 11, no. 2, pp. 2326-2333, 2011.
[18] Z. R. Yu, T. C. Yang, and J. G. Juang, “Application of CMAC and FPGA to a twin rotor MIMO system,” in Proc. ICIEA, pp. 264-269, 2010.
[19] C. T. Chiang and C. S. Lin, “CMAC with general basis functions,” Neural Networks, vol. 9, no. 7, pp. 1199-1211, 1996.
[20] J. Sim, W. L. Tung, and C. Quek, “FCMAC-Yager: A novel Yager-inference-scheme-based fuzzy CMAC,” IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1394-1410, 2006.
[21] W. Yu, F. O. Rodriguez, and M. A. Moreno-Armendariz, “Hierarchical fuzzy CMAC for nonlinear systems modeling,” IEEE Transactions on Fuzzy Systems, vol. 16, no. 5, pp. 1302-1314, 2008.
[22] C. M. Lin, L. Y. Chen, and D. S. Yeung, “Adaptive filter design using recurrent cerebellar model articulation controller,” IEEE Transactions on Neural Networks, vol. 21, no. 7, pp. 1149-1157, 2010.
[23] P. E. M. Almeida and M. G. Simoes, “Parametric CMAC networks: Fundamentals and applications of a fast convergence neural structure,” IEEE Transactions on Industry Applications, vol. 39, no. 5, pp. 1551-1557, 2003.
[24] C. M. Lin, L. Y. Chen, and C. H. Chen, “RCMAC hybrid control for MIMO uncertain nonlinear systems using sliding-mode technology,” IEEE Transactions on Neural Networks, vol. 18, no. 3, pp. 708-720, 2007.
[25] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.
[26] R. Collobert and J. Weston, “A unified architecture for natural language processing: Deep neural networks with multitask learning,” in Proc. ICML, pp. 160-167, 2008.
[27] G. E. Dahl, D. Yu, L. Deng, and A. Acero, “Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30-42, 2011.
[28] S. B. Davis and P. Mermelstein, “Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 28, no. 4, pp. 357-366, 1980.
[29] C. Farabet, C. Couprie, L. Najman, and Y. LeCun, “Learning hierarchical features for scene labeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1915-1929, 2013.
[30] H. Lee, C. Ekanadham, and A. Y. Ng, “Sparse deep belief net model for visual area V2,” in Advances in Neural Information Processing Systems, 2007.
[31] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” Journal of Machine Learning Research, vol. 11, pp. 3371-3408, 2010.
[32] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.
[33] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, pp. 436-444, 2015.
[34] S. M. Siniscalchi, T. Svendsen, and C. H. Lee, “An artificial neural network approach to automatic speech processing,” Neurocomputing, vol. 140, pp. 326-338, 2014.
[35] J. J. Shynk, “Adaptive IIR filtering,” IEEE ASSP Magazine, vol. 6, no. 2, pp. 4-21, 1989.
[36] N. Parihar, J. Picone, D. Pearce, and H. G. Hirsch, “Performance analysis of the Aurora large vocabulary baseline system,” in Proc. EUSIPCO, pp. 553-556, 2004.
[37] G. Hirsch, “Experimental framework for the performance evaluation of speech recognition front-ends on a large vocabulary task,” ETSI STQ Aurora DSR Working Group, 2001.
[38] N. Parihar and J. Picone, “Aurora working group: DSR front end LVCSR evaluation AU/384/02,” Institute for Signal and Information Processing Report, 2002.
[39] E. A. Habets, “Room impulse response generator,” Tech. Rep., Technische Universiteit Eindhoven, vol. 2, pp. 1-21, 2006.
[40] Y. Tsao, S. H. Fang, and Y. Shiao, “Acoustic echo cancellation using a vector-space-based adaptive filtering algorithm,” IEEE Signal Processing Letters, vol. 22, no. 3, pp. 351-355, 2015.
[41] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.