Every conversation starts from zero. Users repeat themselves. Context is lost. GraphMem gives your AI agents persistent, structured memory — sign up, get an API key, and your agents remember everything.
Not a vector database. A hosted knowledge graph with human review, provenance tracking, and nothing to deploy.
Vector databases give you fuzzy similarity. GraphMem gives you facts.
Sign up, generate a key from the dashboard. No credit card required. Takes 30 seconds.
Paste the MCP config into Claude/Cursor, or install the SDK (npm or pip) for custom agents. One line of config.
Facts are extracted from conversations, structured into a knowledge graph, and recalled on future queries. Automatically.
MCP for Claude & Cursor. TypeScript & Python SDKs for custom agents. Same API underneath.
```json
{
  "mcpServers": {
    "graphmem": {
      "url": "https://graphmem.com/api/mcp",
      "headers": {
        "X-API-KEY": "gm_your_key_here"
      }
    }
  }
}
```

Your AI gets `remember`, `search`, and `get_context` tools. It learns from every conversation.
```typescript
import { GraphMem } from 'anura-graph';

const mem = new GraphMem({
  apiKey: 'gm_your_key_here',
});

// Your agent learns
await mem.remember("Alice is VP of Eng at Acme");

// Your agent recalls
const ctx = await mem.getContext("Alice");
// => alice --works_at--> acme, alice --has_role--> vp of eng
```

```python
from graphmem import GraphMem

mem = GraphMem(api_key="gm_your_key_here")

# Your agent learns
mem.remember("Alice is VP of Eng at Acme")

# Your agent recalls
ctx = mem.get_context("Alice")
# => alice --works_at--> acme, alice --has_role--> vp of eng
```

GraphMem is a memory layer, not a storage engine. You get the full pipeline out of the box.
Call remember() with raw text. An LLM extracts structured facts, deduplicates entities, and builds the graph. You write zero extraction code.
Graph traversal returns exact relationships, not "top-k similar chunks." Your AI knows Alice works at Acme — not that some paragraph mentions both names.
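The difference is easiest to see in miniature. Here is a minimal sketch in plain Python (a local stand-in, not the GraphMem API) of what an exact-relationship lookup returns compared to "top-k similar chunks":

```python
# A tiny adjacency-list graph of extracted facts.
# In GraphMem this lives server-side; here it's a plain dict for illustration.
graph = {
    "alice": [("works_at", "acme"), ("has_role", "vp of eng")],
    "acme": [("industry", "robotics")],
}

def get_context(entity: str) -> list[str]:
    """Return the exact edges for an entity -- facts, not similar text."""
    return [f"{entity} --{rel}--> {obj}" for rel, obj in graph.get(entity, [])]

print(get_context("alice"))
# ['alice --works_at--> acme', 'alice --has_role--> vp of eng']
```

A vector store answering the same query would return whichever stored paragraphs embed closest to "Alice"; the graph lookup returns the relationships themselves.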
Extracted facts land in a pending queue. You approve what enters the graph. No hallucination drift, no garbage accumulation.
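Conceptually, the review gate works like the sketch below. The record shapes and `review` helper are illustrative, not the GraphMem API; the point is that nothing reaches the graph without an explicit approval:

```python
# Hypothetical queue of extracted facts awaiting review (shapes are illustrative).
pending = [
    {"fact": ("alice", "works_at", "acme"), "status": "pending"},
    {"fact": ("alice", "ceo_of", "acme"), "status": "pending"},  # bad extraction
]
approved_graph = []  # only approved facts land here

def review(item: dict, approve: bool) -> None:
    """Approve or reject a pending fact; only approvals enter the graph."""
    item["status"] = "approved" if approve else "rejected"
    if approve:
        approved_graph.append(item["fact"])

review(pending[0], approve=True)   # correct fact enters the graph
review(pending[1], approve=False)  # hallucinated fact never lands
```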
Every fact tracks its source — which tool, which conversation, which URI. Audit any claim back to where it came from.
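In other words, each fact carries its origin with it. A sketch of what an auditable fact record might look like (field names and the URI are made up for illustration):

```python
# Hypothetical fact record with provenance metadata attached.
fact = {
    "subject": "alice",
    "relation": "works_at",
    "object": "acme",
    "source": {
        "tool": "mcp:remember",        # which tool produced it
        "conversation_id": "conv_123",  # which conversation it came from
        "uri": "session/2024-06-01",    # where to audit the original claim
    },
}

def audit(f: dict) -> str:
    """Trace a fact back to its source in one line."""
    s = f["source"]
    return f"{f['subject']} --{f['relation']}--> {f['object']} (via {s['tool']}, {s['uri']})"
```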
Community detection, LLM summaries, hybrid search (graph + vector + communities). Three retrieval lanes, one API call.
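The "three lanes, one call" idea can be sketched as a simple score merge. Everything here is a toy: the lane names match the feature list above, but the scores, weights, and merging logic are invented for illustration, not how GraphMem ranks internally:

```python
# Toy hybrid retrieval: merge candidates from three lanes into one ranked list.
def hybrid_search(lane_hits: dict[str, dict[str, float]],
                  weights: dict[str, float]) -> list[tuple[str, float]]:
    """Weighted sum of per-lane scores, sorted best-first."""
    scores: dict[str, float] = {}
    for lane, hits in lane_hits.items():
        for result, score in hits.items():
            scores[result] = scores.get(result, 0.0) + weights[lane] * score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

hits = {
    "graph":     {"alice --works_at--> acme": 1.0},          # exact traversal
    "vector":    {"alice --works_at--> acme": 0.8,           # fuzzy similarity
                  "unrelated note": 0.6},
    "community": {"eng leadership summary": 0.7},            # LLM summaries
}
ranked = hybrid_search(hits, {"graph": 0.5, "vector": 0.3, "community": 0.2})
```

A result confirmed by both the graph and the vector lane outranks one that only looks textually similar.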
We host the database, the API, and the graph engine. You get an API key and start building. No servers, no Docker, no ops.
|  | GraphMem | Vector DB | Build It Yourself |
|---|---|---|---|
| LLM extraction | ✓ | ✗ | you build |
| Entity deduplication | ✓ | ✗ | you build |
| Human review queue | ✓ | ✗ | you build |
| Provenance tracking | ✓ | ✗ | you build |
| Graph + vector + community search | ✓ | vector only | you build |
| MCP server | ✓ | ✗ | you build |
| TypeScript + Python SDK | ✓ | ✗ | you build |
| Setup time | 60 seconds | minutes | weeks |
| Infrastructure | hosted for you | managed service | varies |
Start free. Upgrade when your agents need more.