# soul-agent-validator: pip install Your AI Agent Governance Pipeline
Every AI agent that ships to production today is a black box. You have no idea if it's hardcoding API keys, calling unauthorized external services, skipping PII redaction, or missing the governance documentation that makes audits survivable. And with Google A2A emerging as a standard for agent interoperability, there's a whole new category of compliance gaps: agents that claim A2A compatibility but don't actually serve a valid agent card.
soul-agent-validator is the pre-deployment governance pipeline for AI agents. One command, 33 rules, structured report card.
```bash
pip install soul-agent-validator

# Validate from the CLI
soul-agent-validator validate https://github.com/your-org/your-agent

# Or start the web UI
soul-agent-validator serve  # → http://localhost:8080
```
## What It Checks
Rules are organized into three tiers: HARD failures reject the agent outright, SOFT failures warn but allow deployment, and QUALITY checks are advisory.
### HARD Gates: Reject on Failure
- SEC-001 (hardcoded secrets): scans for API keys, tokens, and AWS/GCP/GitHub credentials embedded in source
- SEC-002 (banned imports): flags `subprocess`, `os.system`, and `eval` used with external input
- SEC-003 (SSRF risk): detects unvalidated URL construction that could allow server-side request forgery
- A2A-001 (agent card): checks for `.well-known/agent.json` with required fields
- A2A-002 (A2A endpoint): verifies a `tasks/send` JSON-RPC 2.0 endpoint exists
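A secret gate like SEC-001 is, at its core, a regex scan over source text. A minimal sketch of the idea, with an illustrative pattern set (not the tool's actual rule list or function names):

```python
import re

# Illustrative patterns for common credential formats (not the tool's real rule set)
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                             # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                          # GitHub personal access token
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),  # generic api_key = "..."
]

def scan_for_secrets(source: str) -> list[str]:
    """Return substrings that look like embedded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(source))
    return hits

code = 'client = Client(api_key="sk-live-0123456789abcdef0123")'
print(scan_for_secrets(code))  # non-empty: a SEC-001-style HARD failure
```

Real scanners also check entropy and known key prefixes to cut false positives; a pure regex pass is the simplest useful baseline.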
### SOFT Gates: Warn
- Rate limiting on API endpoints
- PII redaction before logging
- Error handling and graceful degradation
- Input validation on tool calls
- Data residency controls
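On the agent side, the PII-redaction gate is satisfied by scrubbing log messages before they reach a sink. A hedged sketch with two illustrative patterns (a real redactor would cover many more formats):

```python
import re

# Illustrative PII patterns; real redactors cover phone numbers, cards, addresses, etc.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(message: str) -> str:
    """Replace PII with typed placeholders before the message is logged."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[REDACTED-{label.upper()}]", message)
    return message

print(redact("User jane@example.com reported SSN 123-45-6789"))
# → User [REDACTED-EMAIL] reported SSN [REDACTED-SSN]
```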
### QUALITY: Advisory
- README presence and minimum length
- CHANGELOG or version history
- SOUL.md for agent identity (soul.py integration)
- Test coverage indicators
- Dependency version pinning
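A dependency-pinning check can be as simple as a line scan of `requirements.txt`. A minimal sketch of the heuristic (illustrative only, not the validator's actual logic):

```python
def unpinned_requirements(requirements_text: str) -> list[str]:
    """Return requirement lines that lack an exact version pin (==)."""
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            unpinned.append(line)
    return unpinned

reqs = """\
requests==2.31.0
flask>=2.0
# a comment
pydantic
"""
print(unpinned_requirements(reqs))  # → ['flask>=2.0', 'pydantic']
```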
## The soul.py Connection
soul-agent-validator is powered by soul.py (arXiv:2604.09588). It loads a governance auditor persona from SOUL.md on startup, so the validator doesn't just run mechanical checks: it maintains a consistent set of values and reasoning patterns across all validations.
This is the same architecture described in Persistent Identity in AI Agents: the validator itself is an AI agent with persistent identity, validating whether other AI agents meet the bar for deployment.
One of the QUALITY checks specifically looks for SOUL.md in the repo being validated. An agent that ships with a defined identity and memory architecture, one that has thought through who it is and how it remembers, is a more trustworthy agent than one that doesn't.
## Google A2A Compliance Built In
As Google's A2A protocol gains adoption, validators need to check for A2A compliance in the repos they scan and be A2A-compatible themselves. soul-agent-validator does both.
It checks:
- `GET /.well-known/agent.json` with the required fields `name`, `description`, `skills`, and `capabilities`
- `POST /a2a` accepting JSON-RPC 2.0 with the `tasks/send` method
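The agent-card check (A2A-001) boils down to fetching the well-known path and validating required fields. A sketch of the field validation, using the required-field list stated above (the function name is illustrative):

```python
import json

# Required A2A agent-card fields, per the check described above
REQUIRED_FIELDS = ["name", "description", "skills", "capabilities"]

def validate_agent_card(card_json: str) -> list[str]:
    """Return the required agent-card fields that are missing or unreadable."""
    try:
        card = json.loads(card_json)
    except json.JSONDecodeError:
        return list(REQUIRED_FIELDS)  # an unparseable card fails every field
    return [field for field in REQUIRED_FIELDS if field not in card]

card = '{"name": "my-agent", "description": "demo", "skills": []}'
print(validate_agent_card(card))  # → ['capabilities']
```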
It serves:
```bash
curl https://your-validator/.well-known/agent.json
# → agent card with skills: [validate-agent]

curl -X POST https://your-validator/a2a \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tasks/send","params":{"message":{"parts":[{"text":"validate https://github.com/org/repo"}]}}}'
```
Orchestrators using LangGraph, CrewAI, or Google ADK can call soul-agent-validator as a tool: pass a GitHub URL, get back a report card with a pass/warn/fail breakdown.
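From an orchestrator, the curl call above is just a JSON-RPC 2.0 request. A minimal client-side sketch that builds the payload; the endpoint URL is a placeholder and the actual send is left commented out:

```python
import json
import urllib.request

def build_validate_request(repo_url: str, request_id: int = 1) -> dict:
    """Build a tasks/send JSON-RPC 2.0 request asking the validator to check a repo."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tasks/send",
        "params": {"message": {"parts": [{"text": f"validate {repo_url}"}]}},
    }

payload = build_validate_request("https://github.com/org/repo")
req = urllib.request.Request(
    "https://your-validator/a2a",  # placeholder endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment against a real deployment
print(payload["method"])  # → tasks/send
```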
## Custom Rules in Markdown
The entire rule engine is driven by Markdown files. No DSL, no YAML schema, no proprietary format. Each rule looks like this:
## RULE: CUSTOM-001 – No Direct DB Writes
**Tier:** HARD (fail = reject)
**Check type:** regex_scan
**Tags:** data-integrity

### Description
Agents must not write directly to the database.

### Parameters
```yaml
patterns:
  - "cursor\\.execute\\s*\\(.*INSERT"
file_glob: "**/*.py"
```

### Failure Message
❌ Direct DB write detected. Use the data access layer.
Drop it in `rules/custom.md`, restart, and the rule is live. No code changes.
## CI/CD Integration
```yaml
# .github/workflows/agent-check.yml
- name: Validate agent before deploy
  run: |
    curl -X POST ${{ vars.VALIDATOR_URL }}/validate \
      -H "Content-Type: application/json" \
      -d '{"repo_url": "${{ github.server_url }}/${{ github.repository }}", "submitter": "${{ github.actor }}"}' \
      | tee result.json
    # Fail if any HARD gate failed
    python3 -c "
    import json, sys
    r = json.load(open('result.json'))
    hard_fails = [f for f in r.get('report', {}).get('failures', []) if f.get('tier') == 'HARD']
    if hard_fails:
        print('❌ Hard gate failures:', [f['rule_id'] for f in hard_fails])
        sys.exit(1)
    print('✅ All hard gates passed')
    "
```
## Deploy It Yourself
Docker:

```bash
docker build -t soul-agent-validator .
docker run -p 8080:8080 soul-agent-validator
```
Railway / Cloud Run / Fly.io: A Dockerfile is included. Point your platform at the repo and it deploys in minutes.
## Links
- PyPI: pypi.org/project/soul-agent-validator
- GitHub: github.com/menonpg/agent-validator-oss
- soul.py paper: arXiv:2604.09588
- Related post: soul.py + MolTrust: Memory Meets Cryptographic Identity
## Community
Discussion thread: "soul-agent-validator: I built a pre-deployment governance pipeline for AI agents (33 rules, A2A compatible, soul.py powered)" by u/the_ai_scientist in r/ollama