|
[1] L. C. Parra and C. V. Alvino, “Geometric source separation: Merging convolutive source separation with geometric beamforming,” IEEE Trans. on Audio, Speech, and Language Processing, vol. 10, no.6, Sep. 2002. [2] E. Vincent, “Musical source separation using time-frequencysource priors,”IEEE Trans. on Audio, Speech, and Language Processing, vol. 14, no. 1, Jan. 2006. [3] S. Ukai, H. Saruwatari, T. Takatani, R. Mukai, and H. Sawada., “Multistage simo-model-based blind source separation combining frequency-domain ica and time-domain ica,” in IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings (ICASSP), volume 4, May 2004, pp. iv–109 – iv–112. [4] H.-M. Park, C. S. Dhir, D.-K. Oh, and S.-Y. Lee, “Filterbank-based blind signal separation with estimated sound direction," in IEEE International Symposium on Circuits and Systems (ISCAS), volume 6, May 2005, pp. 5874 – 5877. [5] T. Nishiura, T. Yamada, S. Nakamura, and K. Shikano, “Localization of multiple sound sources based on csp analysis with a microphone array,” in IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings(ICASSP), volume 2, Jun. 2000, pp. 1053–1056. [6] J.-T. Chien, J.-R. Lai, and P.-Y. Lai, “Microphone array signal processing for far-talking speech recognition,” in IEEE Third Workshop on Signal Processing Advances in Wireless Communications (SPAWC)., number 20-23, Mar. 2001, pp.322–325. [7] D. Y., N. T., and K. H. I. T., “A design of audio-visual talker tracking system based on csp analysis and frame difference in real noisy environments,” in IEEE 6th Workshop on Multimedia Signal Processing, Sep-Oct. 2004, pp. 63–66. 29 Sept.-1 Oct. 2004. [8] J.-M. Valin, F. Michaud, J. Rouat, and D. Letourneau, “Robust sound source localization using a microphone array on a mobile robot,” in International Conference on Intelligent Robots and Systems Proceedings (IROS), volume 2, Oct. 2003, pp. 1228–1233. [9] R. . SCHMIDT, “Multiple emitter location and signal parameter estimation,”IEEE Trans. on Audio, Speech, and Language Processing, vol. 34, no. 3, pp. 276–280, Mar. 1986. IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, VOL. AP-34, NO. 3, MARCH 1986. [10] D. Giuliani, M. Matassoni, and M. Omologo, “Hands free continuous speech reconnition in noisy environment using a four microphone array,” in Acoustics, Speech, and Signal Processing (ICASSP), volume 1, May 1995, pp. 860–863. [11] C. Avendano and J.-M. Jot, “Ambience extraction and synthesis from stereo signals for multi-channel audio up-mix,” in IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings (ICASSP)., volume 2, Sep 2002, pp. 1957 – 1960. [12] R. Dressler, “Dolby surround pro logic ii decoder principles of operation,”http://www.dolby.com/assets/pdf/tech library/209 Dolby Surround Pro Logic II Decoder Principles of Operation.pdf. [13] H. F. Silverman, W. R. Patterson, and J. L. Flanagan, “The huge microphone array,” IEEE Concurrency, vol. 6, no. 4, pp. 36–46, Oct-Dec. 1998. [14] C. Knapp and G. Carter, “The generalized correlation method for estimation of time delay,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 24, pp. 320–327, Aug. 1976. [15] S. Haykin, Advances in Specrtum Analysis and Array Processing, volume II. Prentice-Hall, 1991. [16] Y. Tamai, S. Kagami, and H. Mizoguchi, “Circular microphone array for meeting system,” in IEEE Sensors Proceedings, volume 2, Oct. 2003, pp. 1100–1105. [17] C.-M. Chang and C.-H. Peng, “Applying the filtered back-projection method to extract signal at specific position,” in NCS Proceedings, Dec. 2005. [18] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Prentice Hall, 1992. [19] J. H. McClellan, R. W. Schafer, and M. A. Yoder, DSP FIRST: A multimedia approach. Prentice
|