DocBrain captures knowledge from PRs, Slack threads, CI pipelines, and your IDE the moment it's created — connects it into a living graph, quality-scores it, and publishes it automatically.
The problem isn't missing docs. It's that knowledge is created in one place and documented in another — if it's documented at all.
A senior engineer explains a critical architecture decision in a thread. Two months later, nobody can find it.
Every merged PR contains decisions, caveats, and procedures — but PR descriptions are write-once, read-never.
Code has linters, tests, and review workflows. Documentation has nothing. No quality scores, no style enforcement, no SLAs.
How It Works
Knowledge captured at the source, quality-scored automatically, published through configurable review workflows.
Knowledge fragments auto-extracted from merged PRs, Slack conversations, CI deployments, and IDE annotations.
Three-layer quality pipeline: structural analysis, team-defined style rules, and LLM-assessed semantic scoring.
Fragments cluster by semantic similarity. When enough accumulate, DocBrain auto-composes them into documentation.
Space ownership, SLA policies, multi-stage review workflows, and breach detection.
Other tools index existing docs. DocBrain captures the knowledge that was never written down and turns it into quality documentation.
Result: Documentation grounded in real engineering work, quality-scored, and governed — without asking anyone to write docs.
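To make the three-layer pipeline concrete, here is a minimal sketch of how structural checks, style rules, and a semantic score could blend into one composite number. The specific checks, the 20-point style penalty, and the 0.3/0.3/0.4 weights are illustrative assumptions, not DocBrain's actual implementation:

```python
def structural_score(doc: dict) -> float:
    """Layer 1: mechanical checks -- does the doc have a title, sections, links?"""
    checks = [
        bool(doc.get("title")),
        len(doc.get("sections", [])) >= 2,
        bool(doc.get("links")),
    ]
    return 100.0 * sum(checks) / len(checks)

def style_score(doc: dict, rules) -> float:
    """Layer 2: team-defined style rules; each violation costs points (20 here)."""
    violations = sum(1 for rule in rules if not rule(doc))
    return max(0.0, 100.0 - 20.0 * violations)

def composite_score(structural: float, style: float, semantic: float,
                    weights=(0.3, 0.3, 0.4)) -> int:
    """Blend the three layers into a single 0-100 score."""
    return round(weights[0] * structural + weights[1] * style + weights[2] * semantic)
```

In this sketch `semantic` would come from an LLM judgment of accuracy and completeness; only the blend of the three layers is shown here.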
Access Everywhere
One knowledge layer, five interfaces. Ask from Slack during an incident, from your IDE while coding, or from the terminal at 2am.
git log --oneline production..main
kubectl rollout undo deployment/payment-processor
curl -sf https://api.example.com/health/payments
4-Layer Memory System
Four memory layers work together so every question makes the system sharper. The 100th query about Kubernetes is faster and better-answered than the first.
Key stats: cache hit rate, average response time, binary size (Rust).
Multi-turn conversation context. Resolves "it", "that service", "the same thing."
Every Q&A ever asked. Semantic caching, feedback learning. Gets cheaper as it learns.
Entity relationships via graph traversal. "Who owns the payment service?" — resolved, not searched.
Adapts retrieval from feedback. Discovers: "deployment questions → search DevOps first."
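As an illustration of the semantic-caching idea in the long-term layer, here is a minimal sketch assuming questions arrive as precomputed embedding vectors. The similarity threshold and the linear scan are hypothetical simplifications (a real system would use an ANN index):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    """Return a cached answer when a new question is close enough to an old one."""
    def __init__(self, threshold: float = 0.9):
        self.entries = []  # list of (embedding, answer) pairs
        self.threshold = threshold

    def get(self, embedding):
        best = max(self.entries, key=lambda e: cosine(e[0], embedding), default=None)
        if best and cosine(best[0], embedding) >= self.threshold:
            return best[1]
        return None  # cache miss: fall through to full retrieval

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))
```

A paraphrased question lands near the original in embedding space, so the 100th Kubernetes query can be served from cache instead of re-running retrieval and the LLM.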
Features
First-class knowledge units with provenance. Auto-captured from PRs, conversations, and CI pipelines.
Merged PRs and deployments auto-analyzed by LLM. Decisions, caveats, procedures extracted as fragments.
Three-layer scoring: structural, style rules, and LLM-assessed semantic quality. Composite score 0-100.
Hybrid search, intent classification, and cited answers with freshness indicators. On low confidence it asks clarifying questions instead of hallucinating.
Entity relationships with BFS/DFS traversal. Blast radius analysis, expertise routing.
Clusters unanswered questions, detects gaps, auto-drafts missing documentation.
Anthropic, OpenAI, Bedrock, Ollama (local), Gemini, Azure, and 8 more. Swap via config.
GitHub/GitLab/OIDC SSO. Four roles. Space-level isolation. API keys Argon2-hashed.
Mirrors real Confluence / Slack / GitHub / Jira permissions at query time. Restricted page in source = filtered out for users who can't read it. Three modes, side-channel-safe, audit-logged.
25MB binary. <500ms cold start. Under 100MB memory. Self-hosted, your data stays yours.
Plug in any knowledge source in any language. 3 endpoints, DocBrain handles the rest.
10 tools for Claude Code, Cursor, and any MCP editor. Capture decisions at commit time.
Confluence, Slack, Teams, GitHub, GitLab, Jira, PagerDuty, OpsGenie, Zendesk, and more.
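The graph features above (blast radius analysis via BFS traversal) can be sketched in a few lines. The service names and dependency graph here are invented for illustration:

```python
from collections import deque

def blast_radius(graph: dict, start: str, max_depth: int = 2) -> list:
    """BFS over a dependency graph: which entities are affected if `start` changes,
    up to max_depth hops away."""
    seen, frontier = {start}, deque([(start, 0)])
    affected = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # stop expanding past the depth limit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                affected.append(neighbor)
                frontier.append((neighbor, depth + 1))
    return affected

# Hypothetical dependency graph: payments -> billing, ledger; billing -> invoices
deps = {"payments": ["billing", "ledger"], "billing": ["invoices"]}
```

Calling `blast_radius(deps, "payments")` walks outward breadth-first, so nearer dependents are listed before transitive ones.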
Shift-Left In Practice
Knowledge captured where the work happens. Not after. Not in a doc sprint. At the source, in real-time.
9:15 AM — PR Merged
DocBrain's CI capture extracts 3 knowledge fragments: a decision about retry logic, a caveat about idempotency keys, and a procedure for manual refund overrides. All auto-indexed with 92% confidence.
11:30 AM — Slack Thread
DocBrain answers with fragments captured 2 hours ago — plus links to the original PR. The support engineer types /docbrain capture to save additional context.
2:00 PM — Auto-Composition
DocBrain detects semantic similarity across PR fragments, Slack capture, and older fragments. Composes "Payment Refund Procedures," scores at 78/100, routes for SME review.
3:30 PM — Published
Quality rules catch 2 violations, auto-fixed. Final score: 91/100. Published to "Payments" space. Total engineer effort: zero.
Traditional: schedule a doc sprint, assign writers, review in 2 weeks, stale in 2 months.
Shift-left: captured at 9:15 AM. Published by 3:30 PM. Stays accurate forever.
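A knowledge fragment with provenance, as in the 9:15 AM step of the timeline above, might be modeled roughly like this. The field names and example fragments are assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Fragment:
    """One auto-extracted knowledge unit with provenance."""
    kind: str          # "decision" | "caveat" | "procedure"
    text: str
    source: str        # provenance: a PR or Slack permalink
    confidence: float  # extractor's confidence, 0.0-1.0
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# The PR-merge example might yield fragments shaped like these (placeholder source):
fragments = [
    Fragment("decision", "Retry logic uses exponential backoff", "<PR permalink>", 0.92),
    Fragment("caveat", "Idempotency keys must not be reused across refunds", "<PR permalink>", 0.92),
]
```

Because each fragment carries its source and timestamp, later auto-composition can cite the original PR and flag stale material.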
FAQ
bash scripts/setup.sh — PostgreSQL, OpenSearch, Redis, migrations, sample docs. Add your LLM API key and go.
Docker, an API key, and three commands. That's it.