[1] S. Han, H. Mao, and W. J. Dally. (2016). Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding [Online].
Available: https://arxiv.org/pdf/1510.00149.pdf
[2] K. Kobayashi and T. Toda, “sprocket: Open-source voice conversion software,” in Proc. Odyssey, 2018.
[3] R. L. MacDonald et al., “Disordered speech data collection: Lessons learned at 1 million utterances from Project Euphonia,” in Proc. INTERSPEECH, 2021.
[4] J. R. Green et al., “Automatic speech recognition of disordered speech: Personalized models outperforming human listeners on short phrases,” in Proc. INTERSPEECH, 2021.
[5] T.-J. Lin et al., “A 40nm CMOS SoC for real-time dysarthric voice conversion of stroke patients,” in Proc. ASP-DAC, 2022, pp. 7–8.
[6] Y. H. Lai et al., “A deep-learning-based voice conversion system for dysarthria speakers,” in Proc. ASHA, 2018.
[7] 吳政憲, “A low-power neural network accelerator for dysarthric voice conversion,” M.S. thesis, Graduate Institute of Electrical Engineering, National Chung Cheng University, Chiayi, Taiwan, 2020.
[8] 王紹宇, “A microcontroller-based SoC for dysarthric voice conversion,” M.S. thesis, Graduate Institute of Electrical Engineering, National Chung Cheng University, Chiayi, Taiwan, 2020.
[9] A. Zermini et al., “Binaural and log-power spectra features with deep neural networks for speech-noise separation,” in Proc. MMSP, 2017.
[10] M. Huang, “Development of Taiwan Mandarin hearing in noise test,” Department of Speech Language Pathology and Audiology, National Taipei University of Nursing and Health Sciences, 2005.
[11] D. Arthur and S. Vassilvitskii, “k-means++: The advantages of careful seeding,” in Proc. Symp. Discrete Algorithms, 2007.
[12] C. H. Taal, R. C. Hendriks, R. Heusdens, and J. Jensen, “An algorithm for intelligibility prediction of time–frequency weighted noisy speech,” IEEE Trans. Audio, Speech, Lang. Process., 2011.
[13] Macronix, MX25U1635E datasheet [Online].
Available: https://datasheetspdf.com/pdffile/792586/MACRONIX/MX25U1635E/1
[14] S. Seo and J. Kim, “Hybrid approach for efficient quantization of weights in convolutional neural networks,” in Proc. BigComp, 2018, pp. 638–641.
[15] W. Lei et al., “Compressing deep convolutional networks using k-means based on weights distribution,” in Proc. IIP, 2017.
[16] Y. Gong, L. Liu, M. Yang, and L. Bourdev, “Compressing deep convolutional networks using vector quantization,” in Proc. ICLR, 2015.
[17] E. Dupuis, D. Novo, I. O’Connor, and A. Bosio, “Sensitivity analysis and compression opportunities in DNNs using weight sharing,” in Proc. DDECS, 2020, pp. 1–6.
[18] S. Han, J. Pool, J. Tran, and W. J. Dally. (2015). Learning both weights and connections for efficient neural networks [Online].
Available: https://arxiv.org/pdf/1506.02626.pdf
[19] 徐瑋程, “A study of zero-value computation skipping methods for deep network hardware,” M.S. thesis, Graduate Institute of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan, 2021.
[20] J. van Leeuwen, “On the construction of Huffman trees,” in Proc. ICALP, 1976, pp. 382–410.
[21] E. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. (2014). Exploiting linear structure within convolutional networks for efficient evaluation [Online].
Available: https://arxiv.org/pdf/1404.0736.pdf