As LLM agents become more deeply embedded into applications, understanding memory is key to designing systems that are trustworthy, scalable, and user-friendly. Memory is more than a technical feature—it’s the foundation of how adaptive systems learn, retain, and respond. But what kind of memory do LLM agents really have, and how do their strengths and limits compare to our own?
In this session, we’ll explore agentic memory from first principles, with no prerequisites beyond curiosity. By grounding the discussion in everyday human experiences—like recalling a friend’s name (long-term memory), keeping a phone number in mind long enough to dial (working memory), or writing a note to yourself (external aid)—we’ll create an intuitive map of how memory translates into agentic systems.
We’ll look at how a model’s trained weights act as fixed long-term memory, how context windows function as working memory, and how retrieval techniques (like RAG) serve as external aids. From there, we’ll dive into why LLM agents excel at stable, long-standing knowledge but falter when handling fresh information, why conversation continuity can feel fragile, and which strategies can extend memory’s usefulness.
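The three-way mapping above can be sketched as a toy, purely illustrative Python class (every name here is hypothetical, not any library's API): fixed facts stand in for trained weights, a bounded list stands in for the context window, and a key-value store stands in for a retrieval index.

```python
# Toy sketch of the memory mapping (all names hypothetical):
# FACTS  ~ frozen training weights (fixed long-term memory)
# context ~ the context window (bounded working memory)
# notes   ~ an external store consulted on demand (RAG-like aid)

FACTS = {"capital of France": "Paris"}

class ToyAgent:
    def __init__(self, context_limit=3):
        self.context = []               # working memory: recency-ordered
        self.context_limit = context_limit
        self.notes = {}                 # external "notes to self"

    def observe(self, message):
        self.context.append(message)
        # oldest messages fall out once the window is full
        self.context = self.context[-self.context_limit:]

    def remember(self, key, value):
        self.notes[key] = value         # write an external note

    def recall(self, key):
        # check working memory first, then notes, then fixed knowledge
        for m in reversed(self.context):
            if key in m:
                return m
        return self.notes.get(key) or FACTS.get(key)
```

The point of the sketch is the failure mode it reproduces: once `observe` pushes a message past `context_limit`, only an explicit `remember` call keeps it recallable, which is exactly why conversation continuity feels fragile without external aids.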
Finally, we’ll consider practical approaches to designing agentic memory—from user profiles and episodic traces to shared semantic lessons—helping anyone build AI that feels more contextual, reliable, and human-like.
Why memory is universal: if it adapts, it remembers
Mapping human memory to LLM agents: intuitive parallels from daily life
Stable vs. fresh knowledge: why facts age differently in agent systems
Conversation continuity: context limits and how to extend them
Memory types: profiles, episodic traces, and semantic sharing
Senior Data Engineer at Data Science Dojo