1. Administration for Community Living, "Profile of Older Americans," https://acl.gov/aging-and-disability-in-america/data-and-research/profile-older-americans, 2017.
2. ANSI, 1997. American National Standard: Methods for Calculation of the Speech Intelligibility Index. Acoustical Society of America.
3. A. Sorvala, E. Alasaarela, H. Sorvoja, and R. Myllylä, "A two-threshold fall detection algorithm for reducing false alarms," 6th International Symposium on Medical Information and Communication Technology (ISMICT), La Jolla, CA, 2012, pp. 1-4.
4. B.G. Steele, L. Holt, B. Belza, S. Ferris, S. Lakshminaryan, and D.M. Buchner, "Quantitating physical activity in COPD using a triaxial accelerometer," Chest, 2000, vol. 117, pp. 1359-1367.
5. C.A. Werner, "The Older Population: 2010," Census Briefs, U.S. Bureau of the Census, 2010, http://www.census.gov/prod/cen2010/briefs/c2010br-09.pdf.
6. Centers for Disease Control and Prevention, "Home and Recreational Safety," https://www.cdc.gov/homeandrecreationalsafety/falls/adultfalls.html, 2018.
7. Chai, L., Du, J., and Wang, Y. N., 2017. Gaussian density guided deep neural network for single-channel speech enhancement. Proc. IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), Tokyo, pp. 1-6.
8. Chen, Z., Watanabe, S., Erdogan, H., and Hershey, J. R., 2015. Speech enhancement and recognition using multi-task learning of long short-term memory recurrent neural networks. Proc. Interspeech, pp. 3274-3278.
9. Donahue, C., Li, B., and Prabhavalkar, R., 2018. Exploring speech enhancement with generative adversarial networks for robust speech recognition. Proc. ICASSP, Calgary, Canada, SP-L10.2, arXiv:1803.10132v2.
10. C.J. Caspersen, K.E. Powell, and G.M. Christenson, "Physical activity, exercise and physical fitness: Definitions and distinctions for health-related research," Public Health Rep., 1985, vol. 110, pp. 126-131.
11. C.M. Cheng, Y.L. Hsu, and C.M. Young, "Development of a Portable System for Physical Activity Assessment in a Home Environment," Telemedicine Journal and E-Health, 2008, vol. 14, pp. 1044-1056.
12. C. Wang et al., "Development of a Fall Detecting System for the Elderly Residents," 2nd International Conference on Bioinformatics and Biomedical Engineering, Shanghai, 2008, pp. 1359-1362.
13. Michelsanti, D., and Tan, Z.-H., 2017. Conditional generative adversarial networks for speech enhancement and noise-robust speaker verification. Proc. Interspeech, pp. 2008-2012.
14. Dekens, T., Verhelst, W., Capman, F., and Beaugendre, F., 2010. Improved speech recognition in noisy environments by using a throat microphone for accurate voicing detection. Proc. EUSIPCO, pp. 1978-1982.
15. D. E. O'Leary, "Artificial Intelligence and Big Data," IEEE Intelligent Systems, vol. 28, no. 2, pp. 96-99, March-April 2013, doi: 10.1109/MIS.2013.39.
16. D. Lim, C. Park, N.H. Kim, S.H. Kim, and Y.S. Yu, "Fall-Detection Algorithm Using 3-Axis Acceleration: Combination with Simple Threshold and Hidden Markov Model," Journal of Applied Mathematics, vol. 2014, Article ID 896030, 8 pages, 2014.
17. Donahue, C., Li, B., and Prabhavalkar, R., 2018. Exploring speech enhancement with generative adversarial networks for robust speech recognition. Proc. ICASSP.
18. E. Casilari, J. A. Santoyo-Ramón, and J. M. Cano-García, "UMAFall: A multisensor dataset for the research on automatic fall detection," Procedia Computer Science, 2017, vol. 110, pp. 32-39.
19. Erdogan, H., Hershey, J. R., Watanabe, S., and Le Roux, J., 2015. Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks. Proc. ICASSP, pp. 708-712.
20. F. Gemperle, C. Kasabach, J. Stivoric, M. Bauer, and R. Martin, "Design for wearability," Proceedings of the 2nd IEEE Symposium on Wearable Computers, Pittsburgh, 1998, pp. 116-122.
21. F. Hossain, M. L. Ali, M. Z. Islam, and H. Mustafa, "A direction-sensitive fall detection system using single 3D accelerometer and learning classifier," 2016 International Conference on Medical Engineering, Health Informatics and Technology (MediTec), Dhaka, 2016, pp. 1-6.
22. Flanagan, J. L., 2013. Speech Analysis Synthesis and Perception, 3rd ed. Springer Science and Business Media, Berlin, Germany.
23. Fu, S. W., Hu, T. Y., Tsao, Y., and Lu, X., 2017. Complex spectrogram enhancement by convolutional neural network with multi-metrics learning. Proc. MLSP.
24. Fu, S. W., Tsao, Y., and Lu, X., 2016. SNR-aware convolutional neural network modeling for speech enhancement. Proc. Interspeech, pp. 3768-3772.
25. Fu, S. W., Wang, T. W., Tsao, Y., Lu, X., and Kawai, H., 2018. End-to-end waveform utterance enhancement for direct evaluation metrics optimization by fully convolutional neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 9, pp. 1570-1584.
26. Glorot, X., Bordes, A., and Bengio, Y., 2011. Deep sparse rectifier neural networks. Proc. AISTATS, pp. 315-323.
27. Google, 2017. "Cloud Speech API," https://cloud.google.com/speech/.
28. Graciarena, M., Franco, H., Sonmez, K., and Bratt, H., 2003. Combining standard and throat microphones for robust speech recognition. IEEE Signal Processing Letters, 10(3), pp. 72-74.
29. G. Vavoulas, M. Pediaditis, E.G. Spanakis, and M. Tsiknakis, "The MobiFall dataset: An initial evaluation of fall detection algorithms using smartphones," Proceedings of the IEEE 13th International Conference on Bioinformatics and Bioengineering (BIBE), Chania, 2013, pp. 1-4.
30. Haykin, S., 1995. Advances in Spectrum Analysis and Array Processing, vol. 3. Prentice-Hall, Upper Saddle River, NJ.
31. Huang, M. W., 2005. Development of Taiwan Mandarin hearing in noise test. Master's thesis, Department of Speech Language Pathology and Audiology, National Taipei University of Nursing and Health Sciences.
32. Hussain, T., Siniscalchi, S. M., Lee, C. C., Wang, S. S., Tsao, Y., and Liao, W. H., 2017. Experimental study on extreme learning machine applications for speech enhancement. IEEE Access, 5, pp. 25542-25554.
33. Huang, Z., Li, J., Siniscalchi, S. M., Chen, I. F., Wu, J., and Lee, C. H., 2015. Rapid adaptation for deep neural networks through multi-task learning. Proc. Interspeech.
34. J. Zakir, T. Seymour, and K. Berg, "Big data analytics," Issues in Information Systems, 2015, vol. 16, iss. 2, pp. 81-90.
35. K.M. Diaz, D.J. Krupka, M.J. Chang, J. Peacock, Y. Ma, J. Goldsmith, et al., "Fitbit®: An accurate and reliable device for wireless physical activity tracking," International Journal of Cardiology, 2015, vol. 185, pp. 138-140.
36. Kolbæk, M., Tan, Z. H., and Jensen, J., 2016. Speech enhancement using long short-term memory based recurrent neural networks for noise robust speaker verification. Proc. SLT, pp. 305-311.
37. Kuo, H. H., Yu, Y. Y., and Yan, J. J., 2015. The bone conduction microphone parameter measurement architecture and its speech recognition performance analysis. Proc. JIMET, pp. 137-140.
38. Lai, Y. H., et al., 2015. Effects of adaptation rate and noise suppression on the intelligibility of compressed-envelope based speech. PLoS ONE, 10, e0133519.
39. Lai, Y. H., et al., 2017. A deep denoising autoencoder approach to improving the intelligibility of vocoded speech in cochlear implant simulation. IEEE Transactions on Biomedical Engineering, 64(7), pp. 1568-1578.
40. Lai, Y. H., et al., 2018. Deep learning-based noise reduction approach to improve speech intelligibility for cochlear implant recipients. Ear and Hearing.
41. L. Hutchison, C. Hawes, and L. Williams, "Access to Quality Health Service in Rural Areas—Long-Term Care," Rural Healthy People 2010: A Companion Document to Healthy People 2010, vol. 3, The Texas A&M University System Health Science Center, School of Rural Public Health, Southwest Rural Health Research Center, College Station, TX, 2010, pp. 1-28.
42. Li, J., Deng, L., Gong, Y., and Haeb-Umbach, R., 2014. An overview of noise-robust automatic speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(4), pp. 745-777.
43. Liu, Z., Zhang, Z., Acero, A., Droppo, J., and Huang, X. D., 2004. Direct filtering for air- and bone-conductive microphones. Proc. MMSP, pp. 363-366.
44. Loizou, P. C., 2007. Speech Enhancement: Theory and Practice. CRC Press, Boca Raton, FL.
45. Lu, X., Tsao, Y., Matsuda, S., and Hori, C., 2013. Speech enhancement based on deep denoising autoencoder. Proc. Interspeech, pp. 436-440.
46. Lu, X., Tsao, Y., Matsuda, S., and Hori, C., 2014. Ensemble modeling of denoising autoencoder for speech spectrum restoration. Proc. Interspeech, pp. 885-889.
47. L.Y. Zhu, P. Zhou, A.L. Pan, J. Guo, W. Sun, X. H. Chen, Z. Liu, and L. Wang, "A Survey of Fall Detection Algorithm for Elderly Health Monitoring," IEEE Fifth International Conference on Big Data and Cloud Computing, Dalian, 2015, pp. 270-274.
48. Martens, J., 2010. Deep learning via Hessian-free optimization. Proc. ICML, pp. 735-742.
49. Meng, Z., Li, J., Gong, Y., and Juang, B.-H., 2018. Adversarial teacher-student learning for unsupervised domain adaptation. Proc. ICASSP.
50. M.E. Rida, F. Liu, Y. Jadi, A.A.A. Algawhari, and A. Askourih, "Indoor Location Position Based on Bluetooth Signal Strength," 2nd International Conference on Information Science and Control Engineering, Shanghai, 2015, pp. 769-773.
51. Mimura, M., Sakai, S., and Kawahara, T., 2017. Cross-domain speech recognition using nonparallel corpora with cycle-consistent adversarial networks. Proc. ASRU.
52. Odelowo, B. O., and Anderson, D. V., 2017. Speech enhancement using extreme learning machines. Proc. WASPAA, pp. 200-204.
53. P. Pierleoni, A. Belli, L. Palma, M. Pellegrini, L. Pernini, and S. Valenti, "A High Reliability Wearable Device for Elderly Fall Detection," IEEE Sensors Journal, 2015, vol. 15, no. 8, pp. 4544-4553.
54. P. Vallabh, R. Malekian, N. Ye, and D. C. Bogatinoska, "Fall detection using machine learning algorithms," 24th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, 2016, pp. 1-9.
55. Pascual, S., Bonafonte, A., and Serrà, J., 2017. SEGAN: Speech enhancement generative adversarial network. Proc. Interspeech, pp. 3642-3646.
56. S.B. Khojasteh, J.R. Villar, C. Chira, V.M. González, and E. de la Cal, "Improving Fall Detection Using an On-Wrist Wearable Accelerometer," Sensors, 2018, 18, 1350, doi: 10.3390/s18051350.
57. Shimamura, T., Mamiya, J., and Tamiya, T., 2006. Improving bone-conducted speech quality via neural network. Proc. ISSPIT, pp. 628-632.
58. Shimamura, T., and Tomikura, T., 2005. Quality improvement of bone-conducted speech. Proc. ECCTD, pp. 1-4.
59. Shivakumar, P. G., and Georgiou, P. G., 2016. Perception optimized deep denoising autoencoders for speech enhancement. Proc. Interspeech, pp. 3743-3747.
60. S. Kajioka, T. Mori, T. Uchiya, I. Takumi, and H. Matsuo, "Experiment of indoor position presumption based on RSSI of Bluetooth LE beacon," IEEE 3rd Global Conference on Consumer Electronics (GCCE), Tokyo, 2014, pp. 337-339.
61. S. F. Hossain, M. Z. Islam, and M. L. Ali, "Real time direction-sensitive fall detection system using accelerometer and learning classifier," 4th International Conference on Advances in Electrical Engineering (ICAEE), Dhaka, 2017, pp. 99-104.
62. S.M. Cheer and A.J. Wagstaff, "Quetiapine. A review of its use in the management of schizophrenia," CNS Drugs, 2004, vol. 18, iss. 3, pp. 173-199.
63. Sun, L., Du, J., Dai, L.-R., and Lee, C.-H., 2017. Multiple-target deep learning for LSTM-RNN based speech enhancement. Proc. HSCMA, pp. 136-140.
64. Taal, C. H., Hendriks, R. C., Heusdens, R., and Jensen, J., 2011. An algorithm for intelligibility prediction of time-frequency weighted noisy speech. IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2125-2136.
65. Tajiri, Y., Kameoka, H., and Toda, T., 2017. A noise suppression method for body-conducted soft speech based on non-negative tensor factorization of air- and body-conducted signals. Proc. ICASSP, pp. 4960-4964.
66. Thang, T. V., Kimura, K., Unoki, M., and Akagi, M., 2006. A study on restoration of bone-conducted speech with MTF-based and LP-based models. Journal of Signal Processing, pp. 407-417.
67. T.R. Mauldin, M.E. Canby, V. Metsis, A.H.H. Ngu, and C.C. Rivera, "SmartFall: A Smartwatch-Based Fall Detection System Using Deep Learning," Sensors, 2018, 18, 3363, doi: 10.3390/s18103363.
68. U. Lindemann, A. Hock, M. Stuber, W. Keck, and C. Becker, "Evaluation of a fall detector based on accelerometers: a pilot study," Medical & Biological Engineering & Computing, 2005, vol. 43, pp. 1146-1154.
69. van Hoesel, R., et al., 2005. Amplitude-mapping effects on speech intelligibility with unilateral and bilateral cochlear implants. Ear and Hearing, 26, pp. 381-388.
70. Wand, M., and Schmidhuber, J., 2017. Improving speaker-independent lipreading with domain-adversarial training. arXiv preprint arXiv:1708.01565.
71. Wang, D., and Chen, J., 2017. Supervised speech separation based on deep learning: an overview. arXiv preprint arXiv:1708.07524.
72. Wang, Q., Rao, W., Sun, S., Xie, L., Chng, E. S., and Li, H., 2018. Unsupervised domain adaptation via domain adversarial training for speaker recognition. Proc. ICASSP.
73. Wang, Y., and Wang, D., 2012. Cocktail party processing via structured prediction. Proc. NIPS, pp. 224-232.
74. Wang, Y., Narayanan, A., and Wang, D., 2014. On training targets for supervised speech separation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(12), pp. 1849-1858.
75. Weninger, F., Erdogan, H., Watanabe, S., et al., 2015. Speech enhancement with LSTM recurrent neural networks and its application to noise-robust ASR. Proc. LVA/ICA, pp. 91-99.
76. Wikipedia, 2019. "Autoencoder," https://en.wikipedia.org/wiki/Autoencoder.
77. Wikipedia, 2019. "Restricted Boltzmann machine," https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine.
78. Wikipedia, 2019. "Wearable technology," https://en.wikipedia.org/wiki/Wearable_technology.
79. Xia, B., and Bao, C., 2014. Wiener filtering based speech enhancement with weighted denoising auto-encoder and noise classification. Speech Communication, 60, pp. 13-29.
80. Xu, Y., Du, J., Dai, L.-R., and Lee, C.-H., 2014. An experimental study on speech enhancement based on deep neural networks. IEEE Signal Processing Letters, 21(1), pp. 65-68.
81. Xu, Y., Du, J., Dai, L.-R., and Lee, C.-H., 2015. A regression approach to speech enhancement based on deep neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23, pp. 7-19.
82. X. Wu, V. Kumar, J. Ross Quinlan, et al., "Top 10 algorithms in data mining," Knowledge and Information Systems, 2008, vol. 14, pp. 1-37, doi: 10.1007/s10115-007-0114-2.
83. Zhang, Z., Liu, Z., Sinclair, M., Acero, A., Deng, L., Droppo, J., Huang, X. D., and Zheng, Y., 2004. Multi-sensory microphones for robust speech detection, enhancement, and recognition. Proc. ICASSP, pp. 781-784.
84. Zheng, Y., Liu, Z., Zhang, Z., Sinclair, M., Droppo, J., Deng, L., Acero, A., and Huang, X. D., 2003. Air- and bone-conductive integrated microphones for robust speech detection and enhancement. Proc. ASRU, pp. 249-254.
85. Z. Jianyong, L. Haiyong, C. Zili, and L. Zhaohui, "RSSI based Bluetooth low energy indoor positioning," International Conference on Indoor Positioning and Indoor Navigation (IPIN), Busan, 2014, pp. 526-533.