[1] S. F. Carbaugh, "Understanding shaken baby syndrome," Advances in Neonatal Care, 2004. Available: http://www.medscape.com/viewarticle/478153_3
[2] Crying in infancy. Available: http://pennstatehershey.adam.com/content.aspx?productId=112&pid=1&gid=002397
[3] M. Silva, B. Mijovic, B. R. H. Van den Bergh, K. Allegaert, J. M. Aerts, S. Van Huffel, and D. Berckmans, "Decoupling between fundamental frequency and energy envelope of neonate cries," Early Human Development, vol. 86, pp. 35-40, 2010.
[4] 萊恩, Infant and Toddler Development (嬰幼兒發展), 五南圖書出版.
[5] 王小川, Speech Signal Processing (語音訊號處理), Taipei: 全華圖書, 2008.
[6] D. Huron, "The ramp archetype and the maintenance of passive auditory attention," Music Perception, vol. 10, pp. 83-91, 1992.
[7] J.-S. R. Jang, Audio Signal Processing and Recognition. Available: http://mirlab.org/jang/books/audioSignalProcessing/
[8] S. Z. Li, "Content-based audio classification and retrieval using the nearest feature line method," IEEE Transactions on Speech and Audio Processing, vol. 8, no. 5, pp. 619-625, 2000.
[9] M. Liu and C. Wan, "A study on content-based classification and retrieval of audio database," International Symposium on Database Engineering and Applications, pp. 339-345, 2001.
[10] R. J. Mammone, X. Zhang, and R. P. Ramachandran, "Robust speaker recognition: a feature-based approach," IEEE Signal Processing Magazine, vol. 13, no. 5, pp. 58-71, 1996.
[11] J. W. Cooley and J. W. Tukey, "An algorithm for the machine calculation of complex Fourier series," Mathematics of Computation, vol. 19, pp. 297-301, 1965.
[12] 王士元 and 彭剛, Language, Speech and Technology (語言、語音與技術), 香港城市大學出版社, 2007.
[13] 陳用佛, 鄒濬智, and 沈文聖, The Key to Solving Cases: Fingerprints, Hair, Blood, and DNA, the Forensic Science You Must Know at the Crime Scene (破案關鍵：指紋、毛髮、血液、DNA，犯罪現場中不可不知的鑑識科學), 獨立作家, 2013.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[15] 張斐章 and 張麗秋, Artificial Neural Networks (類神經網路), 臺灣東華書局, 2007.
[16] Crying in infancy. Available: http://www.bnext.com.tw/article/view/id/38923
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, pp. 2278-2324, 1998.
[18] Convolutional neural network (卷積神經網絡), Wikiwand. Available: http://www.wikiwand.com/zh-hk/%E5%8D%B7%E7%A7%AF%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C
[19] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[20] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," arXiv preprint arXiv:1207.0580, 2012.
[21] Softmax regression, Deep Learning Tutorial. Available: https://penolove.gitbooks.io/deep-learning-tutourial/content/Supervise%20learning/softmax%20regression.html
[22] M. A. Ruíz Díaz, C. A. Reyes García, L. C. Altamirano Robles, J. E. Xalteno Altamirano, and A. Verduzco Mendoza, "Automatic infant cry analysis for the identification of qualitative features to help opportune diagnosis," Biomedical Signal Processing and Control, vol. 7, pp. 43-49, 2012.
[23] W. M. Campbell, D. E. Sturim, and D. A. Reynolds, "Support vector machines using GMM supervectors for speaker verification," IEEE Signal Processing Letters, vol. 13, no. 5, pp. 308-311, 2006.
[24] W. H. Abdulla, D. Chow, and G. Sin, "Cross-words reference template for DTW-based speech recognition systems," IEEE Conference on Convergent Technologies for the Asia-Pacific Region, vol. 4, 2003.
[25] C. Y. Chang, Y. C. Hsiao, and S. T. Chen, "Application of incremental SVM learning for infant cries recognition," International Conference on Network-Based Information Systems, pp. 607-610, 2015.
[26] P. Dhanalakshmi, S. Palanivel, and V. Ramalingam, "Classification of audio signals using SVM and RBFNN," Expert Systems with Applications, vol. 36, no. 3, pp. 6069-6075, 2009.
[27] Y. H. Yang, Y. C. Lin, Y. F. Su, and H. H. Chen, "A regression approach to music emotion recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 2, pp. 448-457, 2008.
[28] D. C. Park, "Classification of audio signals using fuzzy c-means with divergence-based kernel," Pattern Recognition Letters, vol. 30, no. 9, pp. 794-798, 2009.
[29] T. Li and M. Ogihara, "Toward intelligent music information retrieval," IEEE Transactions on Multimedia, vol. 8, no. 3, pp. 564-574, 2006.
[30] G. Tzanetakis and P. Cook, "Musical genre classification of audio signals," IEEE Transactions on Speech and Audio Processing, vol. 10, no. 5, pp. 293-302, 2002.
[31] H. Lee, P. Pham, Y. Largman, and A. Y. Ng, "Unsupervised feature learning for audio classification using convolutional deep belief networks," in Proc. Neural Information Processing Systems, 2009.
[32] A. Graves and N. Jaitly, "Towards end-to-end speech recognition with recurrent neural networks," Proceedings of the 31st International Conference on Machine Learning, pp. 1764-1772, 2014.
[33] L. Deng et al., "Binary coding of speech spectrograms using a deep auto-encoder," in Proc. Interspeech, pp. 1692-1695, 2010.
[34] C. Y. Chang, C. W. Chang, S. Kathiravan, C. Lin, and S. T. Chen, "DAG-SVM based infant cry classification system using sequential forward floating feature selection," Multidimensional Systems and Signal Processing, pp. 1-16, 2016.
[35] R. Kohavi, "A study of cross-validation and bootstrap for accuracy estimation and model selection," Proceedings of the 14th International Joint Conference on Artificial Intelligence, pp. 1137-1143, 1995.