References

1. Kaddour, J., et al. Challenges and Applications of Large Language Models. 2023. arXiv:2307.10169. DOI: 10.48550/arXiv.2307.10169.
2. Vaswani, A., et al. Attention Is All You Need. 2017. arXiv:1706.03762. DOI: 10.48550/arXiv.1706.03762.
3. Devlin, J., et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 2018. arXiv:1810.04805. DOI: 10.48550/arXiv.1810.04805.
4. Yang, Z., et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 2019. arXiv:1906.08237. DOI: 10.48550/arXiv.1906.08237.
5. Brown, T.B., et al. Language Models are Few-Shot Learners. 2020. arXiv:2005.14165. DOI: 10.48550/arXiv.2005.14165.
6. Minaee, S., et al. Large Language Models: A Survey. 2024. arXiv:2402.06196. DOI: 10.48550/arXiv.2402.06196.
7. Huang, L., et al. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. 2023. arXiv:2311.05232. DOI: 10.48550/arXiv.2311.05232.
8. Tian, S., et al. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. Briefings in Bioinformatics, 2024. 25(1).
9. White, J., et al. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. 2023. arXiv:2302.11382. DOI: 10.48550/arXiv.2302.11382.
10. Amatriain, X. Prompt Design and Engineering: Introduction and Advanced Methods. 2024. arXiv:2401.14423. DOI: 10.48550/arXiv.2401.14423.
11. Wei, J., et al. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. 2022. arXiv:2201.11903. DOI: 10.48550/arXiv.2201.11903.
12. Lewis, P., et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. 2020. arXiv:2005.11401. DOI: 10.48550/arXiv.2005.11401.
13. Xu, L., et al. Nanjing Yunjin intelligent question-answering system based on knowledge graphs and retrieval augmented generation technology. Heritage Science, 2024. 12(1): p. 118.
14. Salemi, A. and Zamani, H. Evaluating Retrieval Quality in Retrieval-Augmented Generation. 2024. arXiv:2404.13781. DOI: 10.48550/arXiv.2404.13781.
15. Chen, J., et al. Benchmarking Large Language Models in Retrieval-Augmented Generation. 2023. arXiv:2309.01431. DOI: 10.48550/arXiv.2309.01431.
16. Es, S., et al. RAGAS: Automated Evaluation of Retrieval Augmented Generation. 2023. arXiv:2309.15217. DOI: 10.48550/arXiv.2309.15217.
17. Yu, H., et al. Evaluation of Retrieval-Augmented Generation: A Survey. 2024. arXiv:2405.07437. DOI: 10.48550/arXiv.2405.07437.
18. Saad-Falcon, J., et al. ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems. 2023. arXiv:2311.09476. DOI: 10.48550/arXiv.2311.09476.
19. Hoshi, Y., et al. RaLLe: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models. 2023. arXiv:2308.10633. DOI: 10.48550/arXiv.2308.10633.
20. Pandya, K. and Holia, M. Automating Customer Service using LangChain: Building custom open-source GPT Chatbot for organizations. 2023. arXiv:2310.05421. DOI: 10.48550/arXiv.2310.05421.
21. Reimers, N. and Gurevych, I. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. 2019. arXiv:1908.10084. DOI: 10.48550/arXiv.1908.10084.