Chang Yang†, Chuang Zhou†, Yilin Xiao†, Su Dong, Luyao Zhuang, Yujing Zhang, Zhu Wang, Zijin Hong, Zheng Yuan, Zhishang Xiang, Shengyuan Chen‡, Ninghao Liu, Jinsong Su, Xinrun Wang, Yi Chang, Xiao Huang

arXiv:2602.05665v1 [cs.AI] 5 Feb 2026

Abstract—Memory emerges as the core module in Large Language Model (LLM)-based agents for long-horizon complex tasks (e.g., multi-turn dialogue, game playing, scientific discovery), where memory enables knowledge accumulation, iterative reasoning, and self-evolution. Among diverse paradigms, the graph stands out as a powerful structure for agent memory due to its intrinsic capabilities to model relational dependencies, organize hierarchical information, and support efficient retrieval. This survey presents a comprehensive review of agent memory from the graph-based perspective. First, we introduce a taxonomy of agent memory, including short-term vs. long-term memory, knowledge vs. experience memory, and non-structural vs. structural memory, with an implementation view of graph-based memory. Second, following the life cycle of agent memory, we systematically analyze the key techniques in graph-based agent memory, covering memory extraction for transforming raw data into memory contents, [...]

Index Terms—Agent, Multi-Agent System, Agent Memory, Knowledge Graph, Self-Evolving, Graph-based Memory

I. INTRODUCTION

The past few years have witnessed the rapid development of Large Language Model (LLM)-based agents, which have demonstrated remarkable success in complex, long-horizon tasks, ranging from mathematical reasoning [2] to multi-agent tasks [3] and open-world exploration [4]. The inherent language understanding, generation, and inference capabilities of LLMs enable LLM-based agents to autonomously perceive environments and make decisions.

Despite notable advancements, LLM-based agents are still constrained by the intrinsic limitations of LLMs. (i) Knowledge cutoff: LLMs are trained on static datasets with fixed time boundaries, resulting in knowledge-cutoff issues that prevent them from incorporating real-time information (e.g., current financial data) or domain-specific knowledge beyond their pre-training corpora. This limitation undermines their ability to adapt to dynamic environments and open-ended scenarios. (ii) Tool incompetence: Although tool use represents a core capability of LLM-based agents [6], [7], existing LLMs demonstrate limited capacity for efficiently learning and applying novel tools, which substantially constrains agent performance. (iii) [...]

To address these challenges, memory [8] has emerged as a critical component for advancing LLM agents towards four key objectives. i) Personalization and specification [9]: Memory enables agents to capture user preferences, interaction histories, and task-specific contexts for tailored responses, such as remembering workflow habits in software engineering or communication styles in conversational scenarios. Memory bridges general knowledge with specific context, storing both universal facts and particular histories to ground responses in personalized, context-aware information [10]. ii) Long-term reasoning beyond the context window: While LLMs operate within finite context windows with static parametric knowledge, memory systems provide unbounded external storage that enables continuous learning and adaptation. Memory allows agents to retain information across extended temporal horizons, [...] the tasks without parameter updating. iv) Hallucination mitigation [12]: Grounding outputs in structured, verifiable memory content reduces reliance on potentially unreliable parametric knowledge. In essence, memory transforms stateless reactive models into stateful adaptive entities capable of rela-

Traditional implementations of agent memory primarily adopt linear, unstructured, or simple key-value storage paradigms, such as fixed-length token sequences, vec-

• We introduce a taxonomy of agent memory, including short-term vs. long-term memory, knowledge vs. experience memory, and non-structural vs. structural memory, with an implementation view of graph-based memory (Section III).
• We systematically analyze the critical memory management techniques, covering memory extraction (Section IV), memory storage (Section V), memory retrieval (Section VI), and memory evolution (Section VII).
• We summarize open-sourced libraries and benchmarks (Section VIII) that support the development and evaluation [...]

†Equal contribution. ‡Corresponding author: Shengyuan Chen.
Chang Yang, Chuang Zhou, Yilin Xiao, Su Dong, Luyao Zhuang, Yujing Zhang, Zhu Wang, Zijin Hong, Zheng Yuan, Shengyuan Chen, Huachi Zhou, Qinggang Zhang, Ninghao Liu, and Xiao Huang are with The Hong Kong Polytechnic University (e-mail: {qqzj.zhou, yilin.xiao, su.dong, luyao.zhuang, yujing.zhang, juliazhu.wang, zijin.hong, yzheng.yuan, huachi.zhou}@connect.polyu.hk, {shengyuan.chen, qinggang.zhang, ninghao-prof.liu, xiao.huang}@polyu.edu.hk).
Jinsong Su and Zhishang Xiang are with the School of Information, Xiamen University, China (e-mail: xiangzhishang@stu.xmu.edu.cn, jssu@xmu.edu.cn).
Xinrun Wang is with the School of Computing and Information Systems, Singapore Management University, Singapore (e-mail: xrwang@smu.edu.sg).
Yi Chang is with the School of Artificial Intelligence, Jilin University, [...]
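To make the graph-based memory life cycle concrete, the sketch below stores extracted (subject, relation, object) triples in an adjacency list and retrieves an entity's multi-hop neighborhood, mirroring the storage and retrieval stages discussed in this survey. This is a minimal illustration, not an implementation from any surveyed system; the class name `GraphMemory`, its methods, and the sample triples are hypothetical.

```python
from collections import defaultdict

class GraphMemory:
    """Minimal sketch of a graph-based agent memory (illustrative only)."""

    def __init__(self):
        # Adjacency list: entity -> list of (relation, neighbor) edges.
        self.edges = defaultdict(list)

    def store(self, subject, relation, obj):
        """Memory storage: insert one extracted triple into the graph."""
        self.edges[subject].append((relation, obj))

    def retrieve(self, entity, hops=1):
        """Memory retrieval: collect all triples within `hops` of an entity."""
        frontier, seen, results = {entity}, {entity}, []
        for _ in range(hops):
            nxt = set()
            for node in frontier:
                for relation, neighbor in self.edges[node]:
                    results.append((node, relation, neighbor))
                    if neighbor not in seen:
                        seen.add(neighbor)
                        nxt.add(neighbor)
            frontier = nxt  # expand outward one hop at a time
        return results

# Hypothetical triples that an extraction step might produce from dialogue.
mem = GraphMemory()
mem.store("user", "prefers", "dark_mode")
mem.store("user", "works_on", "project_x")
mem.store("project_x", "written_in", "python")

# A 2-hop query from "user" also surfaces facts about "project_x",
# illustrating how relational structure supports multi-hop recall.
print(mem.retrieve("user", hops=2))
```

A key-value store could answer only the direct lookups; the graph traversal is what recovers the indirect fact that the user's project is written in Python, which is the relational advantage the survey highlights.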