Chinese References
[1]. 沈志斌, 白清源. 文本分类中特征权重算法的改进. 南京师范大学学报, 2008, 8(4): 95-98.
[2]. 鲁松, 李晓黎, 白硕. 文档中词语权重计算方法的改进. 中文信息学报, 2000, 14(6): 8-13.
[3]. 张保富, 施化吉, 马素琴. 基于TF-IDF文本特征加权方法的改进研究. 计算机应用与软件, 2011, 28(2): 17-20.
[4]. 李平, 戴月明, 王艳. 基于混合卡方统计量与逻辑回归的文本情感分析, 2017: 13-20.
[5]. 李新福, 赵蕾蕾, 何海斌, 李芳. 使用Logistic回归模型进行中文文本分类, 2009: 11-17.
[6]. 庞剑锋, 卜东波, 白硕. 基于向量空间模型的文本自动分类方法的研究与实现. 计算机应用研究, 2001: 5-6.
[7]. 刘勇, 兴艳云. 基于改进随机森林算法的文本分类研究与应用. 计算机系统应用, 2019, 28(5): 220-225.
[8]. 唐明, 朱磊, 邹显春. 基于Word2Vec的一种文档向量表示. 计算机科学, 2016, 43(6): 214-217, 269. DOI: 10.11896/j.issn.1002-137X.2016.06.043.

English References
[1]. Salton, G., Yu, C. T. On the construction of effective vocabularies for information retrieval[J]. ACM SIGPLAN Notices, 1975, 9(3): 48-60.
[2]. Salton, G. Extended Boolean information retrieval[J]. Cornell University, 1983, 11(4): 95-98.
[3]. Lin, J. Using distributional similarity to identify individual verb choice, 2006: 33-40.
[4]. Lan, M., Tan, C., Low, H. A comprehensive comparative study on term weighting schemes for text categorization with support vector machines[C]//Special Interest Tracks and Posters of the 14th International Conference on World Wide Web. ACM, 2005: 1032-1033.
[5]. Vapnik, V. The Nature of Statistical Learning Theory[M]. Springer, 1995: 3-4.
[6]. Ukkonen, E. On-line construction of suffix trees[J]. Algorithmica, 1995, 14(3): 249-260.
[7]. Zaïane, O. R., Antonie, M.-L. Classifying text documents by associating terms with text categories[J]. Australian Computer Science Communications, 2002, 24(2): 215-222.
[8]. Cohen, W. W. Learning Rules that Classify E-Mail[C]//The 1996 AAAI Spring Symposium on Machine Learning in Information Access, 1996.
[9]. Han, J., Kamber, M., Pei, J. Data Mining: Concepts and Techniques[M]. 3rd ed. Morgan Kaufmann, 2011.
[10]. Breiman, L., Friedman, J., Olshen, R., Stone, C. Classification and Regression Trees[M]. Chapman and Hall/Wadsworth, New York, 1984.
[11]. Zheng, X., Chen, H., Xu, T. Deep learning for Chinese word segmentation and POS tagging[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Seattle, WA, USA, 2013: 647-657.
[12]. Socher, R., Bauer, J., Manning, C. D., Ng, A. Y. Parsing with compositional vector grammars[C]//Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Sofia, Bulgaria, 2013: 455-465.
[13]. Lilleberg, J., Zhu, Y., Zhang, Y. Support vector machines and Word2vec for text classification with semantic features[C]//Proceedings of the IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing. Beijing, China, 2015: 136-140.
[14]. Mikolov, T., Sutskever, I., Chen, K., et al. Distributed representations of words and phrases and their compositionality[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems. Lake Tahoe, NV, USA, 2013: 3111-3119.
[15]. Turian, J., Ratinov, L., Bengio, Y. Word representations: A simple and general method for semi-supervised learning[C]//Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Uppsala, Sweden, 2010: 384-394.
[16]. Gershman, S. J., Tenenbaum, J. B. Phrase similarity in humans and machines, 2015: 776-781.
[17]. Cho, K., Van Merriënboer, B., Gulcehre, C., et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]//Proceedings of EMNLP. Doha, Qatar: ACL Press, 2014: 1724-1734.
[18]. Graves, A., Jaitly, N. Towards end-to-end speech recognition with recurrent neural networks[C]//Proceedings of ICML. Beijing, China: [s.n.], 2014: 1764-1772.
[19]. Hochreiter, S., Schmidhuber, J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780. DOI: 10.1162/neco.1997.9.8.1735.
[20]. Ma, X., Hovy, E. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF[C]//Proceedings of ACL. Berlin, Germany: ACL Press, 2016: 1064-1074.
[21]. Tang, D., Qin, B., Liu, T. Document modeling with gated recurrent neural network for sentiment classification[C]//Proceedings of EMNLP. Lisbon, Portugal: ACL Press, 2015: 1422-1432.
[22]. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., et al. Attention Is All You Need, 2017: 9-11.
[23]. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., et al. Deep contextualized word representations, 2018: 8-11.
[24]. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I. Improving Language Understanding by Generative Pre-Training, 2018: 8-10.
[25]. Liu, X., He, P., Chen, W., Gao, J. Multi-Task Deep Neural Networks for Natural Language Understanding, 2019: 5-7.
[26]. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018: 11-15.