[1] A. Hyvärinen, "Fast and Robust Fixed-Point Algorithms for Independent Component Analysis," IEEE Transactions on Neural Networks, vol. 10, no. 3, pp. 626–634, 1999.
[2] B. N. Gover, J. G. Ryan, and M. R. Stinson, "Microphone array measurement system for analysis of directional and spatial variations of sound fields," J. Acoust. Soc. Am., vol. 112, pp. 1980–1991, 2002.
[3] B. N. Gover, J. G. Ryan, and M. R. Stinson, "Measurements of directional properties of reverberant sound fields in rooms using a spherical microphone array," J. Acoust. Soc. Am. (in press).
[4] S. Lefkimmiatis, D. Dimitriadis, and P. Maragos, "An optimum microphone array post-filter for speech applications," in Proc. ICSLP, 2006, pp. 2142–2145.
[5] "Colin Cherry," Wikipedia, http://en.wikipedia.org/wiki/Colin_Cherry
[6] Y. Li, P. Wen, and D. Powers, "Methods for the blind signal separation problem," in Proc. IEEE Int. Conf. Neural Networks and Signal Processing, Nanjing, China, Dec. 2003, pp. 1386–1389.
[7] J. Hérault and C. Jutten, "Space or time adaptive signal processing by neural network models," in J. S. Denker (ed.), Neural Networks for Computing: AIP Conference Proceedings 151, American Institute of Physics, New York, 1986.
[8] G. Burel, "Blind separation of sources – a nonlinear neural algorithm," Neural Networks, vol. 5, no. 6, pp. 937–947, 1992.
[9] A. J. Bell and T. J. Sejnowski, "An information-maximisation approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1004–1034, 1995.
[10] P. Smaragdis, "Information Theoretic Approaches to Source Separation," Master's thesis, MIT, Cambridge, MA, 1997.
[11] J. Lin, D. Grier, and J. Cowan, "Faithful representation of separable distributions," Neural Computation, vol. 9, pp. 1305–1320, 1997.
[12] F. Tordini and F. Piazza, "A semi-blind approach to the separation of real world speech mixtures," in Proc. IJCNN '02, vol. 2, 2002, pp. 1293–1298.
[13] A. Hyvärinen, "Independent Component Analysis," John Wiley & Sons, 2001.
[14] G. Casella and R. L. Berger, "Statistical Inference," 2nd ed., Duxbury, 2002.
[15] T. M. Cover and J. A. Thomas, "Elements of Information Theory," Wiley, 1991.
[16] A. Hyvärinen, "New approximations of differential entropy for independent component analysis and projection pursuit," in Advances in Neural Information Processing Systems 10, MIT Press, pp. 273–279, 1998.
[17] Y. Ephraim and H. L. Van Trees, "A signal subspace approach for speech enhancement," IEEE Transactions on Speech and Audio Processing, vol. 3, no. 4, pp. 251–266, July 1995.
[18] A. Rezayee and S. Gazor, "An adaptive KLT approach for speech enhancement," IEEE Trans. Speech Audio Processing, vol. 9, pp. 87–95, Feb. 2001.
[19] K. Hermus, P. Wambacq, and H. Van hamme, "A review of signal subspace speech enhancement and its application to noise robust speech recognition," EURASIP Journal on Advances in Signal Processing, vol. 2007, Article ID 45821, 15 pages, 2007.
[20] J. Ramírez, J. M. Górriz, and J. C. Segura, "Voice activity detection: Fundamentals and speech recognition system robustness," in M. Grimm and K. Kroschel (eds.), Robust Speech Recognition and Understanding, I-Tech, 2007.
[21] Jaber Marvan, "Voice activity detection method and apparatus for voiced/unvoiced decision and pitch estimation in a noisy speech feature extraction," United States Patent 20070198251, Aug. 23, 2007.
[22] L. R. Rabiner and R. W. Schafer, "Digital Processing of Speech Signals," Prentice Hall, Englewood Cliffs, New Jersey, ISBN-13: 978-0-13-213603-7, 1978.
[23] S. Young et al., "The HTK Book (Version 3.4)," Cambridge University Engineering Dept., 2006.
[24] S. Young, G. Evermann, D. Kershaw, G. Moore, J. Odell, D. Ollason, V. Valtchev, and P. Woodland, "The HTK Book 3.1," Cambridge: Entropic, 2001.
[25] P. Taylor, S. King, S. Isard, and H. Wright, "Intonation and dialog context as constraints for speech recognition," Language and Speech, vol. 41, no. 3–4, pp. 493–512, 1998.
[26] A. Varga, H. J. M. Steeneken, M. Tomlinson, and D. Jones, "The NOISEX-92 study on the effect of additive noise on automatic speech recognition," documentation included in the NOISEX-92 CD-ROMs, 1992.
[27] R. Kuhn, F. Perronnin, P. Nguyen, J.-C. Junqua, and L. Rigazio, "Very fast adaptation with a compact context-dependent eigenvoice model," in Proc. ICASSP, May 2001, vol. 1, pp. 373–376.
[28] C. Y. Tseng, "A phonetically oriented speech database for Mandarin Chinese," in Proc. ICPhS 95, Stockholm, pp. 326–329, 1995.