[1] R. Cowie, et al., “Emotion recognition in human-computer interaction,” IEEE Signal Processing Magazine, vol. 18, no. 1, Jan. 2001, pp. 32-80.
[2] C.-H. Huang, C.-H. Tsai, and B.-Y. Li, “The Corpus Preparation and Effective Feature Representation of Emotional Speech,” Proceedings of the Fourth International Conference on Innovative Computing, Information and Control, 2009.
[3] N. Brenner and C. Rader, “A New Principle for Fast Fourier Transformation,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 24, 1976, pp. 264-266.
[4] L. Rabiner and B.-H. Juang, Fundamentals of Speech Recognition, Prentice Hall PTR, 1993.
[5] B. Schuller, G. Rigoll, and M. Lang, “Hidden Markov Model-Based Speech Emotion Recognition,” Proceedings of the 28th International Conference on Acoustics, Speech, and Signal Processing, vol. II, 2003, pp. 1-4.
[6] H. Ney, “The use of a one-stage dynamic programming algorithm for connected word recognition,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, Apr. 1984, pp. 263-271.
[7] M. Abe, S. Nakamura, K. Shikano, and H. Kuwabara, “Voice Conversion through Vector Quantization,” Proceedings of the 13th International Conference on Acoustics, Speech, and Signal Processing, 1988, pp. 655-658.
[8] N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete Cosine Transform,” IEEE Trans. Computers, Jan. 1974, pp. 90-93.
[9] S. B. Davis and P. Mermelstein, “Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 28, no. 4, 1980, pp. 357-366.
[10] M. Weintraub, “Keyword-Spotting Using SRI’s Decipher™ Large-Vocabulary Speech Recognition System,” Proceedings of the 18th International Conference on Acoustics, Speech and Signal Processing, Minneapolis, Minnesota, April 1993, pp. 463-466.
[11] 張柏雄, “Automatic Recognition of Emotion in Chinese Speech,” Master's thesis, Department of Engineering Science, National Cheng Kung University, 2002 (in Chinese).
[12] 廖香娟, “Generation of Robust Pronunciation Representation Sets and State-Sharing Decision Trees,” Master's thesis, Institute of Computer Science and Information Engineering, National Cheng Kung University, 2000 (in Chinese).