[1] 行政院主計總處 (Directorate-General of Budget, Accounting and Statistics, Executive Yuan), "105 年 9 月底領有身心障礙手冊人數統計 [Statistics on holders of disability certificates as of the end of September 2016]," 2017. Available: https://www.stat.gov.tw/public/Data/7120162454CA7EZUC2.pdf
[2] WHO, "Deafness and hearing loss," 2017. Available: http://www.who.int/mediacentre/factsheets/fs300/en/
[3] M. Bansal, Diseases of ear, nose and throat. JP Medical Ltd, 2012.
[4] J. G. Clark, "Uses and abuses of hearing loss classification," Asha, vol. 23, no. 7, p. 493, 1981.
[5] F. Chen, Y. Hu, and M. Yuan, "Evaluation of noise reduction methods for sentence recognition by Mandarin-speaking cochlear implant listeners," Ear and Hearing, vol. 36, no. 1, pp. 61-71, 2015.
[6] P. Loizou, "Speech processing in vocoder-centric cochlear implants," in Cochlear and Brainstem Implants, vol. 64: Karger Publishers, 2006, pp. 109-143.
[7] K. Nie, G. Stickney, and F.-G. Zeng, "Encoding frequency modulation to improve cochlear implant performance in noise," IEEE Transactions on Biomedical Engineering, vol. 52, no. 1, pp. 64-73, 2005.
[8] M. W. Skinner, P. L. Arndt, and S. J. Staller, "Nucleus® 24 Advanced Encoder conversion study: Performance versus preference," Ear and Hearing, vol. 23, no. 1, pp. 2S-17S, 2002.
[9] S. Kerber and B. U. Seeber, "Sound localization in noise by normal-hearing listeners and cochlear implant users," Ear and Hearing, vol. 33, no. 4, p. 445, 2012.
[10] L. S. Eisenberg et al., "Sentence recognition in quiet and noise by pediatric cochlear implant users: Relationships to spoken language," Otology & Neurotology, vol. 37, no. 2, pp. e75-e81, 2016.
[11] A. Rezayee and S. Gazor, "An adaptive KLT approach for speech enhancement," IEEE Transactions on Speech and Audio Processing, vol. 9, no. 2, pp. 87-95, 2001.
[12] Y. Hu and P. C. Loizou, "A generalized subspace approach for enhancing speech corrupted by colored noise," IEEE Transactions on Speech and Audio Processing, vol. 11, no. 4, pp. 334-341, 2003.
[13] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 33, no. 2, pp. 443-445, 1985.
[14] S. Kamath and P. Loizou, "A multi-band spectral subtraction method for enhancing speech corrupted by colored noise," in ICASSP, 2002, vol. 4, pp. IV-4164: Citeseer.
[15] P. Scalart, "Speech enhancement based on a priori signal to noise estimation," in Acoustics, Speech, and Signal Processing (ICASSP-96), 1996 IEEE International Conference on, 1996, vol. 2, pp. 629-632: IEEE.
[16] G. S. Stickney, F.-G. Zeng, R. Litovsky, and P. Assmann, "Cochlear implant speech recognition with speech maskers," The Journal of the Acoustical Society of America, vol. 116, no. 2, pp. 1081-1091, 2004.
[17] Y. Xu, J. Du, L.-R. Dai, and C.-H. Lee, "A regression approach to speech enhancement based on deep neural networks," IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 23, no. 1, pp. 7-19, 2015.
[18] G. Hinton et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.
[19] P. Y. Simard, D. Steinkraus, and J. C. Platt, "Best practices for convolutional neural networks applied to visual document analysis," in ICDAR, 2003, vol. 3, pp. 958-962: Citeseer.
[20] D. Ciregan, U. Meier, and J. Schmidhuber, "Multi-column deep neural networks for image classification," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, 2012, pp. 3642-3649: IEEE.
[21] Y. Xu, Q. Huang, W. Wang, and M. D. Plumbley, "Hierarchical learning for DNN-based acoustic scene classification," arXiv preprint arXiv:1607.03682, 2016.
[22] X. Lu, Y. Tsao, S. Matsuda, and C. Hori, "Speech enhancement based on deep denoising autoencoder," in Interspeech, 2013, pp. 436-440.
[23] Y.-H. Lai, F. Chen, S.-S. Wang, X. Lu, Y. Tsao, and C.-H. Lee, "A deep denoising autoencoder approach to improving the intelligibility of vocoded speech in cochlear implant simulation," IEEE Transactions on Biomedical Engineering, 2016.
[24] A. Moctezuma and J. Tu, "An overview of cochlear implant systems," BIOE, vol. 414, pp. 1-20, 2011.
[25] P. C. Loizou, "Introduction to cochlear implants," IEEE Engineering in Medicine and Biology Magazine, vol. 18, no. 1, pp. 32-42, 1999.
[26] American Speech-Language-Hearing Association, "Type, degree, and configuration of hearing loss," 2015.
[27] B. C. Papsin and K. A. Gordon, "Cochlear implants for children with severe-to-profound hearing loss," New England Journal of Medicine, vol. 357, no. 23, pp. 2380-2387, 2007.
[28] Cochlear's implant portfolio. Available: http://www.cochlear.com/wps/wcm/connect/au/home/discover/cochlear-implants/the-nucleus-6-system/cochlears-implant-portfolio
[29] I. J. Hochmair-Desoyer, E. S. Hochmair, and K. Burian, "Design and fabrication of multiwire scala tympani electrodes," Annals of the New York Academy of Sciences, vol. 405, no. 1, pp. 173-182, 1983.
[30] M. W. Skinner et al., "Evaluation of a new spectral peak coding strategy for the Nucleus 22 Channel Cochlear Implant System," Otology & Neurotology, vol. 15, pp. 15-27, 1994.
[31] M. Vondrášek, P. Sovka, and T. Tichý, "ACE strategy with virtual channels," Radioengineering, vol. 17, no. 4, 2008.
[32] A. C. S. Kam, I. H. Y. Ng, M. M. Y. Cheng, T. K. C. Wong, and M. C. F. Tong, "Evaluation of the ClearVoice strategy in adults using HiResolution Fidelity 120 sound processing," Clinical and Experimental Otorhinolaryngology, vol. 5, no. Suppl 1, p. S89, 2012.
[33] G. Clark, Cochlear implants: fundamentals and applications. Springer Science & Business Media, 2006.
[34] P. P. Khing, B. A. Swanson, and E. Ambikairajah, "The effect of automatic gain control structure and release time on cochlear implant speech intelligibility," PLoS One, vol. 8, no. 11, p. e82263, 2013.
[35] P. J. Blamey, "Adaptive dynamic range optimization (ADRO): A digital amplification strategy for hearing aids and cochlear implants," Trends in Amplification, vol. 9, no. 2, pp. 77-98, 2005.
[36] F.-G. Zeng and R. V. Shannon, "Psychophysical laws revealed by electric hearing," Neuroreport, vol. 10, no. 9, pp. 1931-1935, 1999.
[37] J. H. Johnson, C. W. Turner, J. J. Zwislocki, and R. H. Margolis, "Just noticeable differences for intensity and their relation to loudness," The Journal of the Acoustical Society of America, vol. 93, no. 2, pp. 983-991, 1993.
[38] F.-G. Zeng and R. V. Shannon, "Loudness balance between electric and acoustic stimulation," Hearing Research, vol. 60, no. 2, pp. 231-235, 1992.
[39] Y.-H. Lai, Y. Tsao, and F. Chen, "Effects of adaptation rate and noise suppression on the intelligibility of compressed-envelope based speech," PLoS One, vol. 10, no. 7, p. e0133519, 2015.
[40] Y. Kodratoff, Introduction to machine learning. Morgan Kaufmann, 2014.
[41] E. Eyob, Social Implications of Data Mining and Information Privacy: Interdisciplinary Frameworks and Solutions. IGI Global, 2009.
[42] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, 2010.
[43] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.
[44] S. B. Kotsiantis, I. Zaharakis, and P. Pintelas, "Supervised machine learning: A review of classification techniques," 2007.
[45] C. E. Rasmussen and C. K. Williams, Gaussian processes for machine learning. MIT Press, Cambridge, 2006.
[46] X. Glorot, A. Bordes, and Y. Bengio, "Deep sparse rectifier neural networks," in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, pp. 315-323.
[47] J. Malik and P. Perona, "Preattentive texture discrimination with early vision mechanisms," JOSA A, vol. 7, no. 5, pp. 923-932, 1990.
[48] K. Fukushima and S. Miyake, "Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition," in Competition and Cooperation in Neural Nets: Springer, 1982, pp. 267-285.
[49] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85-117, 2015.
[50] A. Narayanan and D. Wang, "Ideal ratio mask estimation using deep neural networks for robust speech recognition," in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, 2013, pp. 7092-7096: IEEE.
[51] X. Lu, Y. Tsao, S. Matsuda, and C. Hori, "Ensemble modeling of denoising autoencoder for speech spectrum restoration," in INTERSPEECH, 2014, vol. 14, pp. 885-889.
[52] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
[53] L. Muda, M. Begam, and I. Elamvazuthi, "Voice recognition algorithms using mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques," arXiv preprint arXiv:1003.4083, 2010.
[54] P. C. Loizou, Speech enhancement: theory and practice. CRC Press, 2013.
[55] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 32, no. 6, pp. 1109-1121, 1984.
[56] J. Du and Q. Huo, "A speech enhancement approach using piecewise linear approximation of an explicit model of environmental distortions," in Ninth Annual Conference of the International Speech Communication Association, 2008.
[57] J. Ma, Y. Hu, and P. C. Loizou, "Objective measures for predicting speech intelligibility in noisy conditions based on new band-importance functions," The Journal of the Acoustical Society of America, vol. 125, no. 5, pp. 3387-3405, 2009.
[58] H. Jiang, "Confidence measures for speech recognition: A survey," Speech Communication, vol. 45, no. 4, pp. 455-470, 2005.
[59] Sound Ideas, "Sample CD: XV MP3 Series SI-XV-MP3," 2002.
[60] L. Ma, B. Milner, and D. Smith, "Acoustic environment classification," ACM Transactions on Speech and Language Processing (TSLP), vol. 3, no. 2, pp. 1-22, 2006.
[61] R. Y. Rubinstein, A. Ridder, and R. Vaisman, "Cross-entropy method," in Fast Sequential Monte Carlo Methods for Counting and Optimization, pp. 6-36.
[62] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[63] W. R. Wilson, F. M. Byl, and N. Laird, "The efficacy of steroids in the treatment of idiopathic sudden hearing loss: A double-blind clinical study," Archives of Otolaryngology, vol. 106, no. 12, pp. 772-776, 1980.
[64] L. K. Holden et al., "Factors affecting open-set word recognition in adults with cochlear implants," Ear and Hearing, vol. 34, no. 3, p. 342, 2013.
[65] 黃銘緯, "台灣地區噪音下漢語語音聽辨測試 [Mandarin speech perception test in noise in Taiwan]," 2005.
[66] S. Haykin, Advances in spectrum analysis and array processing (vol. III). Prentice-Hall, Inc., 1995.
[67] R. V. Shannon, F.-G. Zeng, V. Kamath, J. Wygonski, and M. Ekelid, "Speech recognition with primarily temporal cues," Science, vol. 270, no. 5234, p. 303, 1995.
[68] F.-G. Zeng et al., "Speech dynamic range and its effect on cochlear implant performance," The Journal of the Acoustical Society of America, vol. 111, no. 1, pp. 377-386, 2002.