[1] Population Aging: A Major Challenge for Social Welfare (人口高齡化,社會福利大挑戰), http://www.taiwanngo.tw/files/13-1000-9962-1.php?Lang=zh-tw
[2] Paro, http://www.parorobots.com/, 2014.
[3] Kuri, https://www.heykuri.com/, 2017.
[4] Zenbo, https://zenbo.asus.com/, Mar. 2017.
[5] 萬小芳, https://deepq.com/article/WFHLineBot, 2016.
[6] L. Deng and D. Yu, “Deep Learning: Methods and Applications,” Foundations and Trends in Signal Processing, vol. 7, no. 3-4, pp. 197-387, Jun. 2013.
[7] UFLDL Tutorial: Multi-Layer Neural Networks, http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/
[8] Y. Bengio, A. Courville, and P. Vincent, “Representation Learning: A Review and New Perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 1798-1828, 2013.
[9] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[10] C. Szegedy et al., “Going deeper with convolutions,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.
[11] T. Mikolov et al., “Recurrent neural network based language model,” Proceedings of the Eleventh Annual Conference of the International Speech Communication Association (INTERSPEECH), 2010.
[12] Z. C. Lipton, J. Berkowitz, and C. Elkan, “A critical review of recurrent neural networks for sequence learning,” arXiv preprint arXiv:1506.00019, 2015.
[13] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[14] A. Karpathy, “The Unreasonable Effectiveness of Recurrent Neural Networks,” http://karpathy.github.io/2015/05/21/rnn-effectiveness/, May 2015.
[15] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
[16] K. Cho et al., “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.
[17] Voice Command and Synthesis (語音命令與合成), Penpower Technology, http://www.penpower.com.tw/technology-voicecommand.asp, 2017.
[18] B. Zhang, C. Quan, and F. Ren, “Study on CNN in the recognition of emotion in audio and images,” Proceedings of the IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), pp. 1-5, 2016.
[19] A. M. Badshah et al., “Speech Emotion Recognition from Spectrograms with Deep Convolutional Neural Network,” Proceedings of the IEEE International Conference on Platform Technology and Service (PlatCon), pp. 1-5, 2017.
[20] The Three Elements of Sound (聲音的三要素), http://www.phy.ntnu.edu.tw/demolab/html.php?html=modules/sound/section2, Jun. 2018.
[21] Whole Tone (全音), https://zh.wikipedia.org/wiki/全音, Oct. 2016.
[22] Basic Music Theory, 吉他補給, https://www.guitar.com.tw/basic-music-theory/, 2011.
[23] Twelve-Tone Equal Temperament (十二平均律), https://zh.wikipedia.org/wiki/十二平均律, Nov. 2017.
[24] A-Bao's Music Exchange & Guitar Lessons (阿寶的音樂交流&吉他教學), http://maxaindyrdx.pixnet.net/blog/post/32646208, Jun. 2012.
[25] J. Turian, L. Ratinov, and Y. Bengio, “Word representations: a simple and general method for semi-supervised learning,” Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 384-394, Jul. 2010.
[26] T. Mikolov et al., “Efficient estimation of word representations in vector space,” arXiv preprint arXiv:1301.3781, 2013.
[27] Cloud Speech-to-Text, Google Cloud, https://cloud.google.com/speech-to-text/, 2018.
[28] S. R. Livingstone and F. A. Russo, “The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English,” PLoS ONE, vol. 13, no. 5, e0196391, 2018, https://doi.org/10.1371/journal.pone.0196391.
[29] Short-Time Fourier Transform (短時距傅立葉變換), https://zh.wikipedia.org/wiki/短時距傅立葉變換, Dec. 2017.
[30] Jieba Chinese Word Segmentation, Taiwan Traditional Chinese Version (結巴中文斷詞台灣繁體版本), https://github.com/ldkrsi/jieba-zh_TW, Jul. 2016.
[31] Pre-trained Word Vectors, fastText, https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md, May 2017.
[32] Deep Q&A, https://github.com/Conchylicultor/DeepQA, 2017.
[33] AndroidAudioRecorder, https://github.com/adrielcafe/AndroidAudioRecorder, Apr. 2017.
[34] Flask, http://flask.pocoo.org/, 2018.