soul.py Gives AI Agents Memory. MolTrust Gives Them Identity. Here's Why You Need Both.

By Prahlad Menon

The distinction between internal identity and external identity in agentic AI is worth unpacking, because it reveals something important about where the field is still missing infrastructure.

This post builds on our paper Persistent Identity in AI Agents: A Multi-Anchor Architecture for Resilient Memory and Continuity (arXiv:2604.09588, cs.AI), which introduces soul.py and formalizes the concept of identity anchors for AI systems. If you want the full architecture, that’s where to start.

The Problem: AI Agents Have an Identity Crisis

Today’s AI agents are stateless and anonymous by default. Every session, they wake up fresh. They don’t know what they did yesterday. They can’t prove who they are to another agent. And other agents have no way to verify whether they’re trustworthy — or even real.

These are two separate problems, and most discussions conflate them.

Problem 1: Memory and continuity — How does an agent remember what it has done, learned, and decided? How does it maintain a consistent persona and behavior across sessions?

Problem 2: Verifiable identity — How does an agent prove to the outside world — to other agents, services, or humans — that it is who it says it is? How do you verify that a counterparty agent hasn’t been tampered with, compromised, or impersonated?

soul.py addresses the first. MolTrust addresses the second. Neither covers both.

What soul.py Does

soul.py is a Python library for persistent agent identity and memory. Install it with pip install soul-agent.

At its core, soul.py gives an agent a SOUL.md (identity, values, persona) and a MEMORY.md (curated long-term memory), injected at session start via a hybrid RAG + RLM (Reinforcement Learning Memory) architecture. The agent remembers decisions made in prior sessions, user preferences, past mistakes, and learned lessons — not because of any session state, but because all of it is written to persistent files and re-indexed into a vector store.

The three architectural layers:

  • RAG (Retrieval-Augmented Generation) — semantic search over prior memories, retrieving relevant context per query
  • RLM (Reinforcement Learning Memory) — recursive summarization of large memory sets, distilling hundreds of notes into structured insight
  • Flat-file injection — SOUL.md and MEMORY.md loaded directly into system prompt on startup for zero-latency access to core identity
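The flat-file injection layer is the simplest of the three to picture. A minimal sketch, assuming only that SOUL.md and MEMORY.md are plain Markdown files concatenated into the system prompt at session start (the helper name and section labels here are illustrative, not soul.py's actual API):

```python
from pathlib import Path

def build_system_prompt(soul_path: str = "SOUL.md", memory_path: str = "MEMORY.md") -> str:
    """Concatenate identity and memory files into a session-start system prompt.

    Illustrative sketch of flat-file injection; not the real soul.py loader.
    """
    sections = []
    for label, path in (("Identity", soul_path), ("Long-term memory", memory_path)):
        p = Path(path)
        if p.exists():
            # Each file becomes a labeled section of the prompt.
            sections.append(f"## {label}\n\n{p.read_text()}")
    return "\n\n".join(sections)
```

Because the files are loaded directly rather than retrieved, the agent's core identity is available before the first token is generated, with no vector-store round trip.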

The result: an agent that behaves consistently, accumulates knowledge, and maintains character across every session — without requiring a persistent server process.

What MolTrust Does

MolTrust is a trust infrastructure layer for the agent economy. It provides:

  • W3C DID:web — every agent gets a cryptographically verifiable decentralized identifier anchored on Base (Ethereum L2)
  • W3C Verifiable Credentials — Ed25519-signed credentials that agents carry as portable proof of trustworthiness, issuer identity, and capability claims
  • Reputation scoring — 0–100 trust scores built from on-chain behavior, verified interactions, and anti-Sybil analysis
  • Sybil detection — traces wallet funding sources via Blockscout, detects bot patterns and coordinated wallet clusters
  • Agent-native payments — USDC on Base and Bitcoin Lightning for micropayment-based trust service access

The key primitive is the agent card: a .well-known/agent.json file (Google A2A compatible) containing the agent’s DID, reputation score, and capability declarations — signed and verifiable by any third party.
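To make the shape of that primitive concrete, here is a hypothetical agent card as a Python dict. The A2A basics (name, url, skills) follow Google's published format; the trust-extension field names (did, reputation_score, credentials) are assumptions in the spirit of MolTrust, not a published schema:

```python
import json

# Hypothetical agent card; trust-extension field names are illustrative.
agent_card = {
    "name": "procurement-agent",
    "description": "Negotiates supplier contracts on behalf of ACME Corp",
    "url": "https://myagent.example.com",
    "skills": [{"id": "negotiate", "description": "Negotiate purchase terms"}],
    # Trust extensions (assumed schema):
    "did": "did:web:myagent.example.com",
    "reputation_score": 87,
    "credentials": ["https://api.moltrust.ch/credentials/abc123"],
}

# Served at https://myagent.example.com/.well-known/agent.json
print(json.dumps(agent_card, indent=2))
```

Any counterparty that fetches this file can resolve the DID, check the credentials, and decide whether to transact, all without contacting the agent itself.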

When two agents meet, MolTrust answers: Can I trust this agent before we transact? It doesn’t care what the agent remembers or what it believes about itself. It cares about cryptographically provable external facts: who registered it, what its on-chain history looks like, whether it’s been flagged.

Why They’re Complementary

Lars’s framing is precise: soul.py is about internal identity continuity. MolTrust is about external cryptographic verifiability.

Here’s the failure mode each addresses:

| Failure | Without soul.py | Without MolTrust |
| --- | --- | --- |
| Agent forgets past decisions | ✗ Every session starts blank | |
| Agent behaves inconsistently | ✗ Persona drifts with context | |
| Agent impersonated by another | | ✗ No way to verify |
| Compromised agent acts maliciously | | ✗ No reputation signal |
| Agent-to-agent trust at first contact | | ✗ No credential exchange |

A production agentic system that uses both would work like this: soul.py loads the agent’s memory and identity at boot, giving it continuity. MolTrust issues a VC and DID, giving it external verifiability. When it connects to another agent, it can prove its identity (MolTrust) and also has the context to behave consistently and remember the prior relationship (soul.py).
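The first-contact step of that flow can be sketched as a small handshake function. Everything here is illustrative: verify_vc stands in for a MolTrust-style credential check, and memory stands in for soul.py-style recall of prior relationships; neither is a real API call:

```python
def first_contact(peer_card, verify_vc, memory):
    """Two-layer handshake sketch (all names illustrative, not real APIs).

    - verify_vc: MolTrust-style credential check (external identity)
    - memory:    soul.py-style recall of prior relationships (internal identity)
    """
    # External layer: refuse to transact with an unverifiable counterparty.
    if not verify_vc(peer_card.get("credential")):
        return {"trusted": False, "history": None}
    # Internal layer: recall what we remember about this specific peer.
    return {"trusted": True, "history": memory.get(peer_card["did"])}
```

The point of the sketch is the ordering: verification gates the interaction, and memory then shapes how the interaction proceeds.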

This is what human professionals do: they have both a consistent internal identity (their experience, values, memory) and external credentials (a passport, a license, a reputation). Agents need the same two-layer architecture.

The Research Foundation

This isn’t just engineering intuition. A December 2025 paper accepted at ICAART 2026 — “AI Agents with Decentralized Identifiers and Verifiable Credentials” (arXiv:2511.02841) — presents a prototypical multi-agent system where each agent is endowed with a self-sovereign DID and third-party-issued VCs. The authors’ key finding: while the technical architecture is feasible, limitations appear when the agent’s LLM is left in sole control of security procedures. That’s exactly why external infrastructure like MolTrust matters — the trust layer shouldn’t depend on the agent’s own judgment.

Where Google A2A Fits In

MolTrust’s agent card format is explicitly compatible with Google’s A2A (Agent-to-Agent) protocol — and this is where all three pieces connect.

Google A2A defines a standard way for agents to discover and negotiate with each other via a .well-known/agent.json file. That file advertises what the agent can do, what skills it has, and how to authenticate. But it says nothing about whether the agent is trustworthy — it’s just a capability declaration.

MolTrust extends the A2A agent card with trust extensions: a DID, a reputation score, and verifiable credentials that third parties have issued about the agent. So instead of just knowing what an agent claims to do, a counterparty can verify what it has proven to be.
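A counterparty acting on those extensions needs a policy, not just data. A minimal sketch of such a policy check, assuming the same hypothetical field names as above (did, reputation_score are not a published MolTrust schema):

```python
def accept_counterparty(card: dict, min_score: int = 70) -> bool:
    """Policy sketch: require trust extensions plus a minimum reputation.

    Field names ("did", "reputation_score") are assumptions for illustration.
    """
    # A plain A2A card is only a capability declaration; without trust
    # extensions there is nothing to verify, so reject by default.
    if "did" not in card or "reputation_score" not in card:
        return False
    return card["reputation_score"] >= min_score
```

The threshold would in practice depend on the stakes of the transaction; the structure (reject unverifiable cards, then apply a score floor) is the part the stack makes possible.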

soul.py then closes the loop on the agent’s own side: it loads the agent’s history, preferences, and identity on startup so the agent behaves consistently with its declared persona — not just what it claims in a card, but what it actually remembers about prior relationships, decisions, and context.

The three-layer stack:

| Layer | Tool | What it provides |
| --- | --- | --- |
| Capability declaration | Google A2A (.well-known/agent.json) | What the agent can do |
| Cryptographic identity + reputation | MolTrust (DID + VC) | Who the agent is, verified externally |
| Memory + continuity | soul.py (SOUL.md + MEMORY.md) | What the agent knows and remembers |

A Working Example: Agent Validator

Agent Validator is an open-source tool that implements all three layers. It validates any GitHub agent repo against 33 compliance rules before deployment — and is itself:

  • A2A compatible — serves a .well-known/agent.json agent card and accepts tasks/send JSON-RPC calls from orchestrators
  • soul.py powered — loads SOUL.md and MEMORY.md on startup so the governance auditor persona stays consistent across validations
  • MolTrust-ready — the rule engine can be extended with a custom rule to verify that an agent carries a valid MolTrust VC before it’s allowed to deploy
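What such a custom rule might look like, assuming a rule interface that takes the repo's parsed files and returns a pass/fail plus a message (the interface, function name, and field names are all hypothetical, not Agent Validator's actual rule API):

```python
def rule_has_valid_vc(repo_files: dict) -> tuple:
    """Hypothetical compliance rule: the agent card must reference at
    least one verifiable credential before deployment is allowed.

    `repo_files` maps file paths to parsed contents (assumed interface).
    """
    card = repo_files.get(".well-known/agent.json")
    if card is None:
        return False, "missing .well-known/agent.json"
    if not card.get("credentials"):
        return False, "agent card carries no verifiable credentials"
    return True, "ok"
```

A production version would additionally resolve each credential and verify its signature rather than just checking for its presence.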

It’s a small but complete demonstration of what the three-layer stack looks like in practice.

What This Looks Like in Practice

```python
# soul.py — agent loads memory and identity
from soul_agent import SoulAgent

QDRANT_URL = "http://localhost:6333"  # your Qdrant vector store endpoint

agent = SoulAgent(
    soul_path="SOUL.md",
    memory_path="MEMORY.md",
    qdrant_url=QDRANT_URL,
)

# Agent has continuity — it knows who it is and what it has done
```

```python
# MolTrust — agent proves identity to counterparty
import requests

vc = requests.post("https://api.moltrust.ch/credentials/issue", json={
    "subject_did": "did:web:myagent.example.com",
    "claims": {"role": "procurement-agent", "authorized_by": "acme-corp"},
}).json()

# Counterparty can verify this credential independently — no trust in the agent itself required
```

The two libraries operate at different layers and don’t interfere. soul.py is internal infrastructure. MolTrust is external infrastructure. Both are Python, both are open, and both are built for the same underlying problem: making AI agents reliable enough to actually deploy in production.

Frequently Asked Questions

What is soul.py? soul.py is an open-source Python library (pip install soul-agent) that gives AI agents persistent memory and identity continuity across sessions. It uses RAG, RLM, and flat-file injection to load prior context at session start — so agents remember what they’ve learned, who they’ve talked to, and what decisions they’ve made.

What is MolTrust? MolTrust is a trust infrastructure layer for AI agents. It provides W3C DID:web identities, Verifiable Credentials, reputation scoring, and Sybil detection — so agents can prove their identity cryptographically and establish trust with counterparties before any transaction.

How are soul.py and MolTrust different? soul.py handles internal identity continuity — what the agent knows about itself and its history. MolTrust handles external verifiability — what the outside world can prove about the agent. They solve different problems and are designed to work together.

What are W3C Verifiable Credentials? W3C VCs are tamper-proof digital credentials signed with Ed25519 cryptography. They allow any issuer to attest to any claim about a subject (e.g., “this agent was authorized by ACME Corp on this date”) in a way that any verifier can check without trusting the issuer’s server. MolTrust issues VCs that carry agent trust scores, authorization claims, and identity attestations.
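The signing mechanism itself is standard Ed25519, which can be demonstrated with the widely used cryptography package. This illustrates the primitive VCs rely on, not MolTrust's actual issuance code:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer signs a canonicalized claim payload with its Ed25519 key.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps(
    {"subject": "did:web:myagent.example.com", "authorized_by": "acme-corp"},
    sort_keys=True,
).encode()
signature = issuer_key.sign(claim)

# Any verifier holding only the issuer's public key can check the claim
# offline — no call to the issuer's server is needed.
public_key = issuer_key.public_key()
public_key.verify(signature, claim)  # raises InvalidSignature on failure

# Tampering with even one byte of the claim invalidates the signature.
tampered = claim.replace(b"acme-corp", b"evil-corp")
try:
    public_key.verify(signature, tampered)
    tamper_detected = False
except InvalidSignature:
    tamper_detected = True
```

This is why the claim is tamper-proof: the signature binds the issuer's key to the exact bytes of the payload.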

What is a DID? A DID (Decentralized Identifier) is a W3C standard for self-sovereign identity. Unlike a username, a DID is cryptographically bound to a keypair — so possession of the private key proves control of the identity. MolTrust anchors DIDs on Base (Ethereum L2) for immutable, publicly verifiable proof of existence.
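For did:web specifically, resolution is just a deterministic mapping from the identifier to an HTTPS URL where the DID document lives, as defined by the did:web method specification:

```python
def did_web_to_url(did: str) -> str:
    """Map a did:web identifier to its DID document URL (did:web method spec).

    did:web:example.com            -> https://example.com/.well-known/did.json
    did:web:example.com:agents:a1  -> https://example.com/agents/a1/did.json
    """
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    parts = did[len(prefix):].split(":")
    # Ports are percent-encoded in the DID (e.g. example.com%3A8443).
    domain = parts[0].replace("%3A", ":")
    if len(parts) == 1:
        return f"https://{domain}/.well-known/did.json"
    return f"https://{domain}/{'/'.join(parts[1:])}/did.json"
```

Fetching that document yields the DID's public keys, which is what lets anyone verify signatures made by the identity's controller.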

Can soul.py and MolTrust be used independently? Yes. soul.py works with any AI agent that uses Python — no blockchain or external infrastructure required. MolTrust works with any agent regardless of how it manages internal state. You don’t need one to use the other. But used together, they address the full spectrum of identity problems in agentic AI.

Is MolTrust open source? MolTrust provides a Python SDK on PyPI (pip install moltrust) and an MCP server for AI assistants. The core API is free during early access with 100 credits on registration. Enterprise plans are available for volume and custom credential types.

What is the agent economy? The agent economy refers to the emerging landscape where AI agents transact, negotiate, and collaborate autonomously — purchasing services, executing contracts, and exchanging data without human involvement at each step. Trust infrastructure like MolTrust becomes critical when agents represent organizations in high-stakes automated workflows.