[1] 闫悦, 郭晓然, 王铁君, et al. Survey on Question Answering Systems[J]. 计算机系统应用, 2023, 32(8): 1-18.
[2] 姚元杰, 龚毅光, 刘佳, et al. Survey on Intelligent Question Answering Systems Based on Deep Learning[J]. 计算机系统应用, 2023, 32(4): 1-15.
[3] 田云龙, 王统帅, 牛丽. A System and Method for Improving the Intelligent Interaction Experience of Washing Machines in the Smart Home Domain with an AIGC Vertical Large Model for Home Appliances[J]. 家电科技, 2023(zk): 126-130.
[4] Yunfan Gao, Yun Xiong, Xinyu Gao, et al. Retrieval-Augmented Generation for Large Language Models: A Survey[J]. arXiv preprint arXiv:2312.10997, 2023.
[5] Penghao Zhao, Hailin Zhang, Qinhan Yu, et al. Retrieval-Augmented Generation for AI-Generated Content: A Survey[J]. arXiv preprint arXiv:2402.19473, 2024.
[6] Qingxiu Dong, Lei Li, Damai Dai, et al. A Survey on In-context Learning[J]. arXiv preprint arXiv:2301.00234, 2023.
[7] Ashish Vaswani, Noam Shazeer, Niki Parmar, et al. Attention Is All You Need[J]. arXiv preprint arXiv:1706.03762, 2017.
[8] Yunpeng Huang, Jingwei Xu, Junyu Lai, et al. Advancing Transformer Architecture in Long-Context Large Language Models: A Comprehensive Survey[J]. arXiv preprint, 2024.
[9] Yusen Zhang, Ruoxi Sun, Yanfei Chen, et al. Chain of Agents: Large Language Models Collaborating on Long-Context Tasks[J]. arXiv preprint arXiv:2406.02818, 2024.
[10] Nelson F. Liu, Kevin Lin, John Hewitt, et al. Lost in the Middle: How Language Models Use Long Contexts[J]. Transactions of the Association for Computational Linguistics, 2024, 12: 157-173.
[11] An Yang, Baosong Yang, Binyuan Hui, et al. Qwen2 Technical Report[J]. arXiv preprint arXiv:2407.10671, 2024.
[12] Edward J. Hu, Yelong Shen, Phillip Wallis, et al. LoRA: Low-Rank Adaptation of Large Language Models[C]. International Conference on Learning Representations, 2022.
[13] Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, et al. DoRA: Weight-Decomposed Low-Rank Adaptation[J]. arXiv preprint arXiv:2402.09353, 2024.
[14] Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, et al. Gemma 2: Improving Open Language Models at a Practical Size[J]. arXiv preprint arXiv:2408.00118, 2024.
[15] Xavier Amatriain. Prompt Design and Engineering: Introduction and Advanced Methods[J]. arXiv preprint arXiv:2401.14423, 2024.