Your AI Agents
Forget Everything

Every conversation starts from zero. Users repeat themselves. Context is lost. GraphMem gives your AI agents persistent, structured memory — sign up, get an API key, and your agents remember everything.

Not a vector database. A hosted knowledge graph with human review, provenance tracking, and nothing to deploy.

Start Free
See How It Works

The Problem with AI Memory

Vector databases give you fuzzy similarity. GraphMem gives you facts.

Your AI forgets users between sessions → Every conversation builds on the last
RAG returns fuzzy, irrelevant chunks → Precise facts: "Alice works_at Acme"
Hallucinations silently become "memory" → Human review before anything is stored

60 Seconds to Persistent Memory

1

Get an API key

Sign up, generate a key from the dashboard. No credit card required. Takes 30 seconds.

2

Connect your AI

Paste the MCP config into Claude/Cursor, or install the SDK (npm or pip) for custom agents. One line of config.

3

Your AI remembers

Facts are extracted from conversations, structured into a knowledge graph, and recalled on future queries. Automatically.

Two Lines to Connect

MCP for Claude & Cursor. TypeScript & Python SDKs for custom agents. Same API underneath.

MCP · Claude Desktop, Cursor, Windsurf
{
  "mcpServers": {
    "graphmem": {
      "url": "https://graphmem.com/api/mcp",
      "headers": {
        "X-API-KEY": "gm_your_key_here"
      }
    }
  }
}

Your AI gets remember, search, and get_context tools. It learns from every conversation.
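For the curious, a tool invocation on the wire is a standard MCP tools/call request. The shape below follows the MCP JSON-RPC spec; the exact argument schema for the remember tool is an assumption for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "text": "Alice is VP of Eng at Acme"
    }
  }
}
```

Your MCP client builds these requests for you; you never write this JSON by hand.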

TypeScript · npm install anura-graph
import { GraphMem } from 'anura-graph';

const mem = new GraphMem({
  apiKey: 'gm_your_key_here',
});

// Your agent learns
await mem.remember("Alice is VP of Eng at Acme");

// Your agent recalls
const ctx = await mem.getContext("Alice");
// => alice --works_at--> acme, alice --has_role--> vp of eng

Python · pip install anura-graph
from graphmem import GraphMem

mem = GraphMem(api_key="gm_your_key_here")

# Your agent learns
mem.remember("Alice is VP of Eng at Acme")

# Your agent recalls
ctx = mem.get_context("Alice")
# => alice --works_at--> acme, alice --has_role--> vp of eng

Not Another Vector Database

GraphMem is a memory layer, not a storage engine. You get the full pipeline out of the box.

Text In, Knowledge Out

Call remember() with raw text. An LLM extracts structured facts, deduplicates entities, and builds the graph. You write zero extraction code.

Precise Retrieval

Graph traversal returns exact relationships, not "top-k similar chunks." Your AI knows Alice works at Acme — not that some paragraph mentions both names.
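To make the contrast concrete, here is a toy in-memory version of the idea. This is not the GraphMem engine, just a sketch of why traversal returns exact facts instead of similar chunks:

```python
# Toy adjacency-list graph: entity -> list of (relation, target) edges.
graph = {
    "alice": [("works_at", "acme"), ("has_role", "vp of eng")],
    "acme": [("located_in", "sf")],
}

def get_context(entity: str, depth: int = 1):
    """Return exact (subject, relation, object) triples reachable from entity."""
    triples, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, target in graph.get(node, []):
                triples.append((node, rel, target))
                next_frontier.append(target)
        frontier = next_frontier
    return triples

print(get_context("alice"))
# [('alice', 'works_at', 'acme'), ('alice', 'has_role', 'vp of eng')]
```

Every result is a precise edge; there is no similarity score to second-guess.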

Human-in-the-Loop

Extracted facts land in a pending queue. You approve what enters the graph. No hallucination drift, no garbage accumulation.
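A minimal in-memory sketch of that review flow. The class and method names here are invented for illustration; the real SDK surface may differ:

```python
class ReviewQueue:
    """Extracted facts wait here until a human approves them."""

    def __init__(self):
        self.pending = []  # facts awaiting review
        self.graph = []    # approved facts only

    def submit(self, fact: tuple) -> None:
        self.pending.append(fact)

    def approve(self, fact: tuple) -> None:
        self.pending.remove(fact)
        self.graph.append(fact)

    def reject(self, fact: tuple) -> None:
        self.pending.remove(fact)  # discarded, never enters the graph

q = ReviewQueue()
q.submit(("alice", "works_at", "acme"))
q.submit(("alice", "is", "a wizard"))   # hallucinated extraction
q.approve(("alice", "works_at", "acme"))
q.reject(("alice", "is", "a wizard"))
print(q.graph)  # [('alice', 'works_at', 'acme')]
```

Only approved facts ever become memory; rejected extractions leave no trace in the graph.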

Full Provenance

Every fact tracks its source — which tool, which conversation, which URI. Audit any claim back to where it came from.
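In miniature, provenance just means every fact carries its origin. The field names and URI scheme below are illustrative assumptions, not the documented schema:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str
    # Provenance: where this fact came from.
    source_tool: str = "unknown"
    source_uri: str = ""

facts = [
    Fact("alice", "works_at", "acme",
         source_tool="mcp.remember",
         source_uri="conversation://2024-05-01/claude"),
]

def audit(subject: str, relation: str):
    """Trace a claim back to its sources."""
    return [(f.source_tool, f.source_uri)
            for f in facts
            if f.subject == subject and f.relation == relation]

print(audit("alice", "works_at"))
# [('mcp.remember', 'conversation://2024-05-01/claude')]
```

When a fact looks wrong, you can follow it back to the conversation that produced it.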

GraphRAG Built In

Community detection, LLM summaries, hybrid search (graph + vector + communities). Three retrieval lanes, one API call.
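The three lanes can be pictured like this. Each lane below is a stand-in stub (the vector lane uses substring match instead of embeddings), sketching how one call fans out and merges:

```python
def graph_lane(query: str):
    # Lane 1: exact edges touching the query entity.
    return [("alice", "works_at", "acme")] if query == "alice" else []

def vector_lane(query: str):
    # Lane 2: stub for embedding similarity (naive substring match here).
    chunks = ["Alice presented the Q3 roadmap", "Acme hired ten engineers"]
    return [c for c in chunks if query.lower() in c.lower()]

def community_lane(query: str):
    # Lane 3: stub for LLM summaries of the community around the entity.
    return ["Summary: Alice's team at Acme ships the platform"] if query == "alice" else []

def hybrid_search(query: str):
    """One call, three retrieval lanes, one merged result set."""
    return {
        "graph": graph_lane(query),
        "vector": vector_lane(query),
        "communities": community_lane(query),
    }

results = hybrid_search("alice")
```

Graph hits give precision, vector hits give recall over raw text, and community summaries give the big picture.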

Zero Infrastructure

We host the database, the API, and the graph engine. You get an API key and start building. No servers, no Docker, no ops.

How It Compares

| | GraphMem | Vector DB | Build It Yourself |
|---|---|---|---|
| LLM extraction | ✓ | — | you build |
| Entity deduplication | ✓ | — | you build |
| Human review queue | ✓ | — | you build |
| Provenance tracking | ✓ | — | you build |
| Graph + vector + community search | ✓ | vector only | you build |
| MCP server | ✓ | — | you build |
| TypeScript + Python SDK | ✓ | — | you build |
| Setup time | 60 seconds | minutes | weeks |
| Infrastructure | hosted for you | managed service | varies |

Simple Pricing

Start free. Upgrade when your agents need more.

Free · forever
  • 100 facts, 2 projects
  • Full MCP + REST + SDK access
  • Human review queue
  • Dashboard & 3D Brain viewer
  • Community detection
Get Started
POPULAR
Pro · per month
  • 5,000 facts, 10 projects
  • 5 team members
  • Hybrid search (graph + vector + communities)
  • Higher rate limits
  • LLM community summaries
Upgrade to Pro
Max · per month
  • Unlimited facts & projects
  • Unlimited team members
  • Highest rate limits
  • Priority support
  • All Pro features included
Upgrade to Max