soul.py: Your AI Remembers Nothing. This Fixes It in 10 Lines.
Every AI conversation starts the same way: “Hi, I’m Claude/GPT/Llama, how can I help you today?”
You’ve talked to this model a hundred times. You’ve told it your name, your projects, your preferences. It doesn’t matter. The moment the session ends, it forgets everything. Tomorrow, you start from zero.
This is the most basic failure mode in AI agents, and somehow we’ve normalized it.
The 10-Line Fix
```python
from soul import Agent

agent = Agent()
agent.ask("My name is Prahlad and I'm building an AI research lab.")
# → "That's exciting — what are you working on first?"

# Later. New process. New session. Memory persists.
agent = Agent()
agent.ask("What do you know about me?")
# → "You're Prahlad, building an AI research lab."
```
That’s soul.py. Memory survives across processes—no database, no server, nothing running in the background.
How It Actually Works
soul.py uses two markdown files as the agent’s persistent state:
| File | Purpose |
|---|---|
| `SOUL.md` | Identity — who the agent is, how it behaves |
| `MEMORY.md` | Memory — timestamped log of past exchanges |
Every `agent.ask()` call:

- Reads `SOUL.md` + `MEMORY.md` into the system prompt
- Calls the LLM
- Appends the exchange to `MEMORY.md` with a timestamp
That’s the entire architecture. 150 lines of Python.
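The loop is simple enough to sketch from scratch. This is not soul.py's actual source, just a minimal reconstruction of the three steps above; the `llm` callable is a stand-in for whatever provider client you use:

```python
from datetime import datetime
from pathlib import Path

SOUL = Path("SOUL.md")
MEMORY = Path("MEMORY.md")

def ask(question: str, llm) -> str:
    """One turn: load state, call the model, persist the exchange."""
    # 1. Read SOUL.md + MEMORY.md into the system prompt
    system = SOUL.read_text() + "\n\n" + MEMORY.read_text()
    # 2. Call the LLM (any callable taking system + user text)
    answer = llm(system, question)
    # 3. Append the exchange to MEMORY.md with a timestamp
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with MEMORY.open("a") as f:
        f.write(f"\n## {stamp}\nQ: {question}\nA: {answer}\n")
    return answer
```

Because state lives in two files read at call time, a fresh process picks up exactly where the last one left off.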
What MEMORY.md Looks Like
After a few conversations:
```markdown
# MEMORY.md

## 2026-03-01 08:00
Q: My name is Prahlad and I'm building an AI research lab.
A: That's exciting — what are you working on first?

## 2026-03-01 09:15
Q: What should I focus on today?
A: Based on your AI lab work, you mentioned the memory paper
was the priority...
```
Human-readable. Version-controllable. Editable by hand. `git diff` your agent’s memories if you want.
The Setup
```shell
pip install soul-agent
soul init
```
The wizard asks two questions:
- What’s your agent’s name?
- Which provider? (anthropic / openai / openai-compatible)
It creates `SOUL.md` and `MEMORY.md` in your current directory. You’re done.
Works With Everything
```python
# Anthropic (default)
agent = Agent(provider="anthropic")

# OpenAI
agent = Agent(provider="openai")

# Local Ollama — no API key needed
agent = Agent(
    provider="openai-compatible",
    base_url="http://localhost:11434/v1",
    model="llama3.2",
    api_key="ollama",
)
```
Why Not LangChain / MemGPT / Clawdbot?
Those are frameworks. soul.py is a primitive.
- LangChain — orchestration layer, requires significant setup
- LlamaIndex — document indexing, needs vector store infrastructure
- MemGPT — impressive but opinionated about the full agent stack
- Clawdbot / OpenClaw — full agent runtime with tools, channels, scheduling, approval gates
The last category is worth expanding on. Tools like Clawdbot give you a complete agent infrastructure: Telegram/Discord/Slack integration, browser automation, cron jobs, exec sandboxing, the works. If you’re building a production agent that needs to do things in the world, that’s the right choice.
But what if you just want your Python script to remember who it’s talking to?
soul.py is the answer when:
- You’re building something custom and don’t want a framework
- You want memory without buying into an entire agent architecture
- You need to drop persistent identity into an existing codebase
- You want files you can read, edit, and `git diff`
It’s the difference between “I need a car” and “I need wheels.” Sometimes you just need wheels.
What v0.1 Doesn’t Do (Yet)
Once MEMORY.md gets very large (thousands of entries), it’ll overflow the context window. That’s the v1.0 problem—solved with local RAG (ChromaDB/FAISS).
For most use cases, v0.1 runs indefinitely. A typical daily exchange is ~200 words. You’d hit the context limit after roughly 6 months of daily use. Plenty of runway.
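The back-of-envelope math behind that runway estimate, with illustrative numbers (tokens per word, exchanges per day, and the context budget are assumptions, not measurements):

```python
WORDS_PER_EXCHANGE = 200   # the article's typical daily exchange
TOKENS_PER_WORD = 1.3      # rough average for English text (assumption)
EXCHANGES_PER_DAY = 2      # assumption: a couple of turns per day
CONTEXT_BUDGET = 100_000   # tokens available for memory (assumption)

tokens_per_day = WORDS_PER_EXCHANGE * TOKENS_PER_WORD * EXCHANGES_PER_DAY
days_of_runway = CONTEXT_BUDGET / tokens_per_day
print(f"{days_of_runway / 30:.1f} months")  # → "6.4 months"
```

Tweak the constants for your own usage; heavier daily traffic shortens the runway proportionally.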
The roadmap:
- v0.1 (now): Markdown-native, zero infrastructure
- v1.0: Local vector store for large memory files
- v2.0: RAG + RLM hybrid with query routing
The Philosophy
The best infrastructure is no infrastructure.
Vector databases are powerful. They’re also another service to run, another thing to break, another dependency to manage. For most agent use cases—personal assistants, research companions, project copilots—you don’t need them. You need a text file that persists.
soul.py starts there. When you outgrow it, the upgrade path exists. But most people won’t need it for months.
Try It Now — No Install Required
Live demo: soul.themenonlab.com
Chat with a soul.py agent and watch MEMORY.md fill up in real time. Ask it something, then try “What do you know about me so far?” — you’ll see exactly how the memory injection works under the hood.
No API key needed. No signup. Just try it.
(Demo source is also open: soul.py-demo — ~150 lines of FastAPI if you want to self-host)
Get Started Locally
```shell
pip install soul-agent
soul init
```
Star the repo: github.com/menonpg/soul.py
Your AI shouldn’t have amnesia. Fix it in 10 lines.