[1] A. Vaswani et al., "Attention is all you need," in Advances in Neural Information Processing Systems 30, 2017.
[2] J. Devlin et al., "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[3] T. Brown et al., "Language models are few-shot learners," in Advances in Neural Information Processing Systems 33, 2020, pp. 1877-1901.
[4] L. Ouyang et al., "Training language models to follow instructions with human feedback," in Advances in Neural Information Processing Systems 35, 2022, pp. 27730-27744.
[5] P. Lewis et al., "Retrieval-augmented generation for knowledge-intensive NLP tasks," in Advances in Neural Information Processing Systems 33, 2020, pp. 9459-9474.
[6] S. Yao, J. Zhao, D. Yu et al., "ReAct: Synergizing reasoning and acting in language models," in ICLR, 2023.
[7] J. Sun et al., "Think-on-Graph: Deep and responsible reasoning of large language model with knowledge graph," arXiv preprint arXiv:2307.07697, 2023.
[8] T. Pouplin, H. Sun, S. Holt, and M. van der Schaar, "Retrieval-augmented thought process as sequential decision making," arXiv preprint arXiv:2402.07812, 2024.
[9] W. Wang, Y. Wang et al., "RAP-Gen: Retrieval-augmented patch generation with CodeT5 for automatic program repair," in ESEC/FSE, 2023.
[10] K. Sawarkar, A. Mangal et al., "Blended RAG: Improving RAG (retriever-augmented generation) accuracy with semantic search and hybrid query-based retrievers," arXiv preprint arXiv:2404.07220, 2024.
[11] S. Lu, N. Duan, H. Han et al., "ReACC: A retrieval-augmented code completion framework," in ACL, 2022.
[12] W. Shi, S. Min, M. Yasunaga et al., "REPLUG: Retrieval-augmented black-box language models," arXiv preprint arXiv:2301.12652, 2023.
[13] T. Zhang et al., "RAFT: Adapting language model to domain specific RAG," arXiv preprint arXiv:2403.10131, 2024.
[14] J. Li, Y. Li, G. Li et al., "EditSum: A retrieve-and-edit framework for source code summarization," in ASE, 2021.
[15] S. Robertson and H. Zaragoza, "The probabilistic relevance framework: BM25 and beyond," Foundations and Trends in Information Retrieval, vol. 3, no. 4, pp. 333-389, 2009.
[16] T. Mikolov et al., "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
[17] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014.
[18] N. Reimers and I. Gurevych, "Sentence-BERT: Sentence embeddings using Siamese BERT-networks," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, Nov. 2019, pp. 3982-3992.
[19] V. Karpukhin et al., "Dense passage retrieval for open-domain question answering," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, 2020.
[20] L. Wang, N. Yang, X. Huang, L. Yang, R. Majumder, and F. Wei, "Multilingual E5 text embeddings: A technical report," arXiv preprint arXiv:2402.05672, Feb. 2024.
[21] S.-Q. Yan et al., "Corrective retrieval augmented generation," arXiv preprint arXiv:2401.15884, 2024.
[22] W. Huang et al., "Retrieval augmented generation with rich answer encoding," in Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), 2023.
[23] H. Wang et al., "UniMS-RAG: A unified multi-source retrieval-augmented generation for personalized dialogue systems," arXiv preprint arXiv:2401.13256, 2024.
[24] S. Siriwardhana et al., "Fine-tune the entire RAG architecture (including DPR retriever) for question-answering," arXiv preprint arXiv:2106.11517, 2021.
[25] X. Wang et al., "Searching for best practices in retrieval-augmented generation," arXiv preprint arXiv:2407.01219, 2024.
[26] A. Asai et al., "Self-RAG: Learning to retrieve, generate, and critique through self-reflection," arXiv preprint arXiv:2310.11511, 2023.
[27] J. Dodgson et al., "Establishing performance baselines in fine-tuning, retrieval-augmented generation and soft-prompting for non-specialist LLM users," arXiv preprint arXiv:2311.05903, 2023.
[28] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, 2017.