|
[1]J. Ramos, "Using tf-idf to determine word relevance in document queries," in Proceedings of the first instructional conference on machine learning, 2003, vol. 242, no. 1: Citeseer, pp. 29-48. [2]J. W. Pennebaker, M. E. Francis, and R. J. Booth, "Linguistic inquiry and word count: LIWC 2001," Mahway: Lawrence Erlbaum Associates, vol. 71, no. 2001, p. 2001, 2001. [3]T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013. [4]B. Mehra, "Chatbot personality preferences in Global South urban English speakers," Social Sciences & Humanities Open, vol. 3, no. 1, p. 100131, 2021. [5]L. Zhou, J. Gao, D. Li, and H.-Y. Shum, "The design and implementation of xiaoice, an empathetic social chatbot," Computational Linguistics, vol. 46, no. 1, pp. 53-93, 2020. [6]T. Lee, K. Park, J. Park, Y. Jeong, J. Chae, and H. Lim, "Korean Q&A Chatbot for COVID-19 News Domains Using Machine Reading Comprehension," in Annual Conference on Human and Language Technology, 2020: Human and Language Technology, pp. 540-542. [7]A. S. Lokman and M. A. Ameedeen, "Modern chatbot systems: A technical review," in Proceedings of the future technologies conference, 2018: Springer, pp. 1012-1023. [8]T. Zhao, X. Lu, and K. Lee, "Sparta: Efficient open-domain question answering via sparse transformer matching retrieval," arXiv preprint arXiv:2009.13013, 2020. [9]R. Nogueira, W. Yang, J. Lin, and K. Cho, "Document expansion by query prediction," arXiv preprint arXiv:1904.08375, 2019. [10]L. Xiong et al., "Approximate nearest neighbor negative contrastive learning for dense text retrieval," arXiv preprint arXiv:2007.00808, 2020. [11]K. C. Pramodh and Y. Vijayalata, "Automatic personality recognition of authors using big five factor model," in 2016 IEEE International Conference on Advances in Computer Applications (ICACA), 2016: IEEE, pp. 32-37. [12]H. Nguyen, D. Morales, and T. Chin, "A neural chatbot with personality," Published at the Semantic Scholar. [13]Y. Zheng, G. Chen, M. Huang, S. Liu, and X. Zhu, "Personalized dialogue generation with diversified traits," arXiv preprint arXiv:1901.09672, 2019. [14]C. Distinguishability, "A Theoretical Analysis of Normalized Discounted Cumulative Gain (NDCG) Ranking Measures," 2013. [15]E. Voorhees et al., "TREC-COVID: constructing a pandemic information retrieval test collection," in ACM SIGIR Forum, 2021, vol. 54, no. 1: ACM New York, NY, USA, pp. 1-12. [16]G. Tsatsaronis et al., "An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition," BMC bioinformatics, vol. 16, no. 1, pp. 1-28, 2015. [17]V. Boteva, D. Gholipour, A. Sokolov, and S. Riezler, "A full-text learning to rank dataset for medical information retrieval," in European Conference on Information Retrieval, 2016: Springer, pp. 716-722. [18]T. Kwiatkowski et al., "Natural questions: a benchmark for question answering research," Transactions of the Association for Computational Linguistics, vol. 7, pp. 453-466, 2019. [19]Z. Yang et al., "HotpotQA: A dataset for diverse, explainable multi-hop question answering," arXiv preprint arXiv:1809.09600, 2018. [20]M. Maia et al., "Www'18 open challenge: financial opinion mining and question answering," in Companion Proceedings of the The Web Conference 2018, 2018, pp. 1941-1942. [21]A. Vaswani et al., "Attention is all you need," Advances in neural information processing systems, vol. 30, 2017. [22]Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. V. Le, and R. Salakhutdinov, "Transformer-xl: Attentive language models beyond a fixed-length context," arXiv preprint arXiv:1901.02860, 2019. [23]W. Fedus, B. Zoph, and N. Shazeer, "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity," ed, 2021. [24]N. Shazeer et al., "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer," arXiv preprint arXiv:1701.06538, 2017. [25]M. Kim, T. Kim, and D. Kim, "Spatio-temporal slowfast self-attention network for action recognition," in 2020 IEEE International Conference on Image Processing (ICIP), 2020: IEEE, pp. 2206-2210. [26]A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020. [27]R. Girdhar, J. Carreira, C. Doersch, and A. Zisserman, "Video action transformer network," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 244-253. [28]M.-H. Ha and O. T.-C. Chen, "Deep Neural Networks Using Residual Fast-Slow Refined Highway and Global Atomic Spatial Attention for Action Recognition and Detection," IEEE Access, vol. 9, pp. 164887-164902, 2021. [29]J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018. [30]Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le, "Xlnet: Generalized autoregressive pretraining for language understanding," Advances in neural information processing systems, vol. 32, 2019. [31]S. Sabour, N. Frosst, and G. E. Hinton, "Dynamic routing between capsules," Advances in neural information processing systems, vol. 30, 2017. [32]D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," nature, vol. 323, no. 6088, pp. 533-536, 1986. [33]P. Costa, R. R. McCrae, and N. Revised, "Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI): Professional Manual," Psychological Assessment Resources, Odessa, FL, 1992. [34]F. Celli, F. Pianesi, D. Stillwell, and M. Kosinski, "Workshop on computational personality recognition: Shared task," in Proceedings of the International AAAI Conference on Web and Social Media, 2013, vol. 7, no. 2, pp. 2-5. [35]N. Majumder, S. Poria, A. Gelbukh, and E. Cambria, "Deep learning-based document modeling for personality detection from text," IEEE Intelligent Systems, vol. 32, no. 2, pp. 74-79, 2017. [36]H. Jiang, X. Zhang, and J. D. Choi, "Automatic text-based personality recognition on monologues and multiparty dialogues using attentive networks and contextual embeddings (student abstract)," in Proceedings of the AAAI Conference on Artificial Intelligence, 2020, vol. 34, no. 10, pp. 13821-13822. [37]D. Ruta and B. Gabrys, "Classifier selection for majority voting," Information fusion, vol. 6, no. 1, pp. 63-81, 2005. [38]E. P. Tighe, J. C. Ureta, B. A. L. Pollo, C. K. Cheng, and R. de Dios Bulos, "Personality Trait Classification of Essays with the Application of Feature Reduction," in SAAIP@ IJCAI, 2016, pp. 22-28. [39]W. Yin, H. Schütze, B. Xiang, and B. Zhou, "Abcnn: Attention-based convolutional neural network for modeling sentence pairs," Transactions of the Association for Computational Linguistics, vol. 4, pp. 259-272, 2016. [40]P. Zhou et al., "Attention-based bidirectional long short-term memory networks for relation classification," in Proceedings of the 54th annual meeting of the association for computational linguistics (volume 2: Short papers), 2016, pp. 207-212. [41]X. Wang et al., "Heterogeneous graph attention network," in The world wide web conference, 2019, pp. 2022-2032. [42]Y. Liu et al., "Roberta: A robustly optimized bert pretraining approach," arXiv preprint arXiv:1907.11692, 2019. [43]Y. Wang, J. Zheng, Q. Li, C. Wang, H. Zhang, and J. Gong, "XLNet-caps: personality classification from textual posts," Electronics, vol. 10, no. 11, p. 1360, 2021. [44]A. Roshchina, J. Cardiff, and P. Rosso, "A comparative evaluation of personality estimation algorithms for the twin recommender system," in Proceedings of the 3rd international workshop on Search and mining user-generated contents, 2011, pp. 11-18. [45]J. A. Qadir, A. K. Al-Talabani, and H. A. Aziz, "Isolated Spoken Word Recognition Using One-Dimensional Convolutional Neural Network," International Journal of Fuzzy Logic and Intelligent Systems, vol. 20, no. 4, pp. 272-277, 2020. [46]X. Jiao et al., "Tinybert: Distilling bert for natural language understanding," arXiv preprint arXiv:1909.10351, 2019. [47]Y. Wang, A. Sun, J. Han, Y. Liu, and X. Zhu, "Sentiment analysis by capsules," in Proceedings of the 2018 world wide web conference, 2018, pp. 1165-1174. [48]X. Zhang, P. Wu, J. Cai, and K. Wang, "A contrastive study of Chinese text segmentation tools in marketing notification texts," in Journal of Physics: Conference Series, 2019, vol. 1302, no. 2: IOP Publishing, p. 022010. [49]https://pypi.org/project/ckip-transformers/ [50]C.-Y. Chen and W.-Y. Ma, "Word embedding evaluation datasets and wikipedia title embedding for Chinese," in Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018. [51]C.-Y. Chen and W.-Y. Ma, "Embedding wikipedia title based on its wikipedia text and categories," in 2017 International Conference on Asian Language Processing (IALP), 2017: IEEE, pp. 146-149. [52]https://dumps.wikimedia.org/zhwiki/ [53]https://term.ptt.cc/ [54]S. Temma, M. Sugii, and H. Matsuno, "The document similarity index based on the Jaccard distance for mail filtering," in 2019 34th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC), 2019: IEEE, pp. 1-4.
|