1. Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5, 135-146.
2. Conneau, A., Schwenk, H., Barrault, L., & LeCun, Y. (2016). Very deep convolutional networks for text classification. arXiv preprint arXiv:1606.01781.
3. Felbo, B., Mislove, A., Søgaard, A., Rahwan, I., & Lehmann, S. (2017). Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. arXiv preprint arXiv:1708.00524.
4. Go, A., Bhayani, R., & Huang, L. (2009). Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(12).
5. Grave, E., Bojanowski, P., Gupta, P., Joulin, A., & Mikolov, T. (2018). Learning word vectors for 157 languages. arXiv preprint arXiv:1802.06893.
6. Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.
7. Joulin, A., Grave, E., Bojanowski, P., Douze, M., Jégou, H., & Mikolov, T. (2016). FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.
8. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
9. Kim, Y. (2014). Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
10. Le, Q., & Mikolov, T. (2014). Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14) (pp. 1188-1196).
11. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
12. Li, Y., Wang, X., & Xu, P. (2018). Chinese text classification model based on deep learning. Future Internet, 10(11), 113.
13. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
14. Mishkin, D., & Matas, J. (2015). All you need is a good init. arXiv preprint arXiv:1511.06422.
15. Majumder, N., Poria, S., Peng, H., Chhaya, N., Cambria, E., & Gelbukh, A. (2019). Sentiment and sarcasm classification with multitask learning. arXiv preprint arXiv:1901.08014.
16. Nigam, K., McCallum, A. K., Thrun, S., & Mitchell, T. (2000). Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2-3), 103-134.
17. Ozsoy, M. G. (2016). From word embeddings to item recommendation. arXiv preprint arXiv:1601.01356.
18. Pang, B., Lee, L., & Vaithyanathan, S. (2002, July). Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10 (pp. 79-86). Association for Computational Linguistics.
19. Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.
20. Reis, J. C., Correia, A., Murai, F., Veloso, A., Benevenuto, F., & Cambria, E. (2019). Supervised learning for fake news detection. IEEE Intelligent Systems, 34(2), 76-81.
21. Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117.
22. Xie, Q., Dai, Z., Hovy, E., Luong, M. T., & Le, Q. V. (2019). Unsupervised data augmentation. arXiv preprint arXiv:1904.12848.
23. Zhang, L., Wang, S., & Liu, B. (2018). Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, e1253.
24. 莊長融 & 黃承龍 (2018). A transfer learning model in deep learning applied to sentiment classification of comments (in Chinese). International Conference on Information Management and Practice.