[1] S. Furui, Digital Speech Processing, Synthesis, and Recognition, Marcel Dekker Inc., 1989.
[2] N. Sebe, I. Cohen, T. Gevers, T.S. Huang, "Multimodal Approaches for Emotion Recognition: A Survey," Proceedings of SPIE, Vol. 5670, pp. 56-67, January 2005.
[3] 馮觀富, Psychology of Emotion (情緒心理學), 心理出版社, 2005. (in Chinese)
[4] Encyclopedia Britannica Online, http://www.britannica.com/
[5] D. Morrison, R. Wang, L.C. De Silva, W.L. Xu, "Real-time Spoken Affect Classification and its Application in Call-Centers," Proceedings of the Third International Conference on Information Technology and Applications, Vol. 1, pp. 483-487, July 2005.
[6] L. Vidrascu, L. Devillers, "Annotation and Detection of Blended Emotions in Real Human-Human Dialogs Recorded in a Call Center," IEEE International Conference on Multimedia and Expo, pp. 944-947, July 2005.
[7] C. Breazeal, "Emotive Qualities in Robot Speech," IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, pp. 1388-1394, 2001.
[8] MIT Humanoid Robotics Group, http://www.ai.mit.edu/projects/humanoid-robotics-group/index.html
[9] B. Schuller, G. Rigoll, M. Lang, "Speech Emotion Recognition Combining Acoustic Features and Linguistic Information in a Hybrid Support Vector Machine-Belief Network Architecture," Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, pp. 577-580, May 2004.
[10] Y.M. Chen, "Investigating and Finding Meaningful Use Scenarios for Emotion-Aware Technologies," 2006, http://www.iis.sinica.edu.tw/~kevinc/
[11] A. Ortony, T.J. Turner, "What's Basic about Basic Emotions," Psychological Review, pp. 315-331, 1990.
[12] D. Canamero, J. Fredslund, "I Show You How I Like You: Human-Robot Interaction through Emotional Expression and Tactile Stimulation," http://www.daimi.au.dk/~chili/feelix/feelix.html, May 30, 2006.
[13] http://changingminds.org/explanations/emotions/basic%20emotions.htm, May 30, 2006.
[14] R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz, J.G. Taylor, "Emotion Recognition in Human-Computer Interaction," IEEE Signal Processing Magazine, Vol. 18, No. 1, pp. 32-80, January 2001.
[15] R. Tato, R. Santos, R. Kompe, J.M. Pardo, "Emotional Space Improves Emotion Recognition," Proceedings of ICSLP, pp. 2029-2032, 2002.
[16] J.H. Yeh, Emotion Recognition from Mandarin Speech Signals, Master Thesis, Tatung University, 2004.
[17] J. Liscombe, J. Venditti, J. Hirschberg, "Classifying Subject Ratings of Emotional Speech Using Acoustic Features," Proceedings of EuroSpeech (8th European Conference on Speech Communication and Technology), Geneva, Switzerland, pp. 725-728, September 2003.
[18] R. Cowie, E. Douglas-Cowie, "Automatic Statistical Analysis of the Signal and Prosodic Signs of Emotion in Speech," Proceedings of the 4th International Conference on Spoken Language Processing, pp. 1989-1992, 1996.
[19] F. Dellaert, T. Polzin, A. Waibel, "Recognizing Emotion in Speech," Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP), Vol. 3, pp. 1970-1973, October 1996.
[20] M.W. Bhatti, Y. Wang, L. Guan, "A Neural Network Approach for Human Emotion Recognition in Speech," Proceedings of the International Symposium on Circuits and Systems, Vol. 2, pp. 181-184, May 2004.
[21] D. Ververidis, C. Kotropoulos, I. Pitas, "Automatic Emotional Speech Classification," IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, pp. 593-596, May 2004.
[22] Z.J. Chuang, C.H. Wu, "Emotion Recognition Using Acoustic Features and Textual Content," IEEE International Conference on Multimedia and Expo, Vol. 1, pp. 53-56, June 2004.
[23] C.M. Lee, S.S. Narayanan, "Toward Detecting Emotions in Spoken Dialogs," IEEE Transactions on Speech and Audio Processing, Vol. 13, pp. 293-303, March 2005.
[24] S. Davis, P. Mermelstein, "Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences," IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 28, pp. 357-366, August 1980.
[25] T.L. Nwe, S.W. Foo, L.C. De Silva, "Detection of Stress and Emotion in Speech Using Traditional and FFT Based Log Energy Features," Proceedings of the Joint Conference of the Fourth International Conference on Information, Communications and Signal Processing, Vol. 3, pp. 1619-1623, December 2003.
[26] D.N. Jiang, L.H. Cai, "Speech Emotion Classification with the Combination of Statistic Features and Temporal Features," IEEE International Conference on Multimedia and Expo, Vol. 3, pp. 1967-1970, June 2004.
[27] J.J. Lu, Construction and Testing of a Mandarin Emotional Speech Database and Its Application, Master Thesis, Tatung University, 2004.
[28] Y.H. Chang, Emotion Recognition and Evaluation of Mandarin Speech Using Weighted D-KNN Classification, Master Thesis, Tatung University, 2005.
[29] O. Segawa, K. Takeda, F. Itakura, "Continuous Speech Recognition without End-Point Detection," IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, pp. 245-248, May 2001.
[30] V.K. Prasad, T. Nagarajan, H.A. Murthy, "Continuous Speech Recognition Using Automatically Segmented Data at Syllabic Units," Proceedings of the 6th International Conference on Signal Processing, Vol. 1, pp. 235-238, August 2002.
[31] V.A. Petrushin, "Emotion in Speech: Recognition and Application to Call Centers," Proceedings of the Conference on Artificial Neural Networks in Engineering, pp. 7-10, November 1999.
[32] L. Lu, D. Liu, H.J. Zhang, "Automatic Mood Detection and Tracking of Music Audio Signals," IEEE Transactions on Audio, Speech and Language Processing, Vol. 14, pp. 5-18, January 2006.
[33] A.M. Kondoz, Digital Speech: Coding for Low Bit Rate Communication Systems, John Wiley & Sons, 1994.
[34] R. Gutierrez-Osuna, "Pattern Analysis for Machine Olfaction: A Review," IEEE Sensors Journal, Vol. 2, pp. 189-202, June 2002.
[35] T.L. Pao, Y.T. Chen, J.J. Lu, J.H. Yeh, "The Construction and Testing of a Mandarin Emotional Speech Database," Proceedings of ROCLING XVI, pp. 355-363, September 2004.
[36] 王小川, Speech Signal Processing (語音訊號處理), 全華科技圖書, 2004. (in Chinese)