Enhancing Chatbot Memory: Recursive Summarization in Large Language Models #802
Labels
- ai-platform: model hosts and APIs
- chat-templates: llm prompt templates for chat models
- dataset: public datasets and embeddings
- llm: Large Language Models
- llm-experiments: experiments with large language models
- New-Label: choose this option if the existing labels are insufficient to describe the content accurately
Snippet
"Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models
Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, Dacheng Tao, Li Guo

Recently, large language models (LLMs), such as GPT-4, have exhibited remarkable conversational abilities, enabling them to engage in dynamic and contextually relevant dialogues across a wide range of topics. However, in a long conversation, these chatbots fail to recall past information and tend to generate inconsistent responses. To address this, we propose recursively generating summaries/memory with LLMs to enhance their long-term memory ability. Specifically, our method first prompts the LLM to memorize small dialogue contexts and then recursively produces new memory from the previous memory and the following contexts. Finally, the chatbot can easily generate a highly consistent response with the help of the latest memory. We evaluate our method on both open and closed LLMs, and experiments on a widely used public dataset show that it generates more consistent responses in long-context conversations. We also show that our strategy nicely complements both long-context (e.g., 8K and 16K) and retrieval-enhanced LLMs, further improving long-term dialogue performance. Notably, our method is a potential solution for enabling LLMs to model extremely long contexts. The code and scripts will be released later."
Read the full paper here
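The loop described in the abstract (summarize a small window of turns, fold it into the running memory, then answer from the latest memory) can be sketched roughly as below. This is a minimal illustration assuming an OpenAI-style chat-completions client; the `update_memory` and `respond` helpers, the model name, and the prompt wording are placeholders, not the authors' released code.

```python
# Minimal sketch of recursive dialogue summarization.
# Assumes the `openai` Python package (>=1.0) and OPENAI_API_KEY in the environment;
# all helper names and prompts here are illustrative, not the paper's code.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # any chat-capable model works for this sketch


def update_memory(previous_memory: str, new_turns: list[str]) -> str:
    """Recursively fold the latest dialogue turns into the running summary."""
    prompt = (
        "You maintain a running memory of a long conversation.\n"
        f"Current memory:\n{previous_memory or '(empty)'}\n\n"
        "New dialogue turns:\n" + "\n".join(new_turns) + "\n\n"
        "Rewrite the memory so it stays short but keeps every fact needed "
        "to answer future questions consistently."
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def respond(memory: str, user_message: str) -> str:
    """Answer the user with the latest memory prepended as context."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": f"Conversation memory:\n{memory}"},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content


# Usage: chunk the dialogue into small windows, summarize recursively,
# then generate the next response from the most recent memory.
memory = ""
for window in [
    ["User: Hi, I'm Alice.", "Bot: Hello Alice!"],
    ["User: I live in Oslo.", "Bot: Oslo is lovely."],
]:
    memory = update_memory(memory, window)
print(respond(memory, "Where do I live?"))
```

The point of this structure is that the memory stays roughly constant in size however long the dialogue grows, because each update rewrites the summary instead of appending raw history, which is what lets the approach complement both long-context and retrieval-enhanced models.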
Suggested labels
{'label-name': 'long-term-memory', 'label-description': 'Enhancing dialogue memory in large language models for long-context conversations.', 'confidence': 57.43}