MCP Integration

Let your AI assistant read findings, fix code, and verify fixes without leaving your editor.

What Is MCP?

MCP (Model Context Protocol) is a standard that lets AI tools call functions on external servers. Think of it like an API, but designed for AI assistants instead of browsers. When brakit is running, your AI assistant can ask “what security issues exist?” and get back structured data it can act on.
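Under the hood, that "ask" is a JSON-RPC 2.0 message: MCP tool calls travel as `tools/call` requests. A sketch of what the assistant sends when it invokes a brakit tool (the `id` and the `state` filter argument are illustrative):

```typescript
// An MCP tool call on the wire: JSON-RPC 2.0 with method "tools/call".
// The tool name comes from brakit; id and arguments are example values.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_findings",
    arguments: { state: "open" }, // assumed filter argument
  },
};
```

The server replies with structured JSON the assistant can parse directly, rather than text it has to scrape.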

Setup

Add this to your editor's MCP configuration. The exact file depends on your editor:

Claude Code and Cursor both use .mcp.json in your project root:

.mcp.json
{
  "mcpServers": {
    "brakit": {
      "command": "npx",
      "args": ["brakit", "mcp"]
    }
  }
}

Auto-configured: Running npx brakit install creates this .mcp.json file automatically, so you may not need to set it up by hand.

How It Works

The MCP server runs as a separate process alongside your app. It discovers your running brakit instance by reading the port file at .brakit/port, then calls the same dashboard API that powers the browser UI. The AI assistant gets structured data back: not HTML, not screenshots, but actual findings and metrics it can reason about.

Your app (with brakit) → writes .brakit/port
MCP server → reads port → calls dashboard API
AI assistant → calls MCP tool → gets structured findings
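The discovery step above can be sketched in a few lines. This is an illustration of the mechanism, not brakit's actual implementation; the API path in the comment is an assumption:

```typescript
import { readFileSync } from "node:fs";
import { join } from "node:path";

// Turn the contents of .brakit/port into a dashboard base URL.
function baseUrlFromPortFile(contents: string): string {
  const port = parseInt(contents.trim(), 10);
  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
    throw new Error(`Invalid port: ${contents}`);
  }
  return `http://localhost:${port}`;
}

// The MCP server reads the port file the running app wrote...
function discoverDashboard(projectRoot: string): string {
  const contents = readFileSync(join(projectRoot, ".brakit", "port"), "utf8");
  return baseUrlFromPortFile(contents);
}

// ...and then calls the same API the browser UI uses, e.g. (hypothetical path):
// const res = await fetch(`${discoverDashboard(".")}/api/findings`);
```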

Available Tools

The MCP server exposes 6 tools. Each is a function your AI assistant can call:

get_findings
Lists all security issues and performance problems brakit has detected. Filter by severity or state.
get_endpoints
Shows every API endpoint your app has served, with p95 latency, error rate, and query count.
get_request_detail
Deep-dives into a specific request: every query, fetch call, and timeline event.
verify_fix
Checks whether a previously reported issue is resolved. The AI calls this after making a fix.
get_report
Full status report: open vs. resolved counts, top issues, and endpoint health at a glance.
clear_findings
Resets all stored findings for a fresh start. Useful when you want to re-baseline.
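To make the tool results concrete, here is a hypothetical shape for a finding and the severity/state filtering that get_findings describes. The field names and enum values are assumptions for illustration; brakit's real schema may differ:

```typescript
// Assumed finding shape, mirroring get_findings' severity/state filters.
interface Finding {
  id: string;        // stable SHA-256-derived ID
  rule: string;      // e.g. the detector that fired
  endpoint: string;  // the route the issue was observed on
  severity: "low" | "medium" | "high";
  state: "open" | "fixing" | "resolved";
}

// Client-side sketch of the filtering get_findings offers.
function filterFindings(
  findings: Finding[],
  opts: { severity?: Finding["severity"]; state?: Finding["state"] } = {}
): Finding[] {
  return findings.filter(
    (f) =>
      (opts.severity === undefined || f.severity === opts.severity) &&
      (opts.state === undefined || f.state === opts.state)
  );
}
```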

The Fix Loop

Here's what a typical AI-assisted fix looks like:

  1. AI calls get_findings, sees 3 open issues
  2. AI calls get_request_detail on the worst one, sees the exact SQL query
  3. AI reads your source code and fixes it
  4. You re-trigger the endpoint (reload the page, resubmit the form)
  5. AI calls verify_fix, confirms the issue is gone

Brakit tracks findings across the open → fixing → resolved lifecycle automatically. If a fix doesn't stick and the issue reappears, it moves back to open, so neither you nor the AI misses a regression.
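Step 5 of the loop above can be sketched as a small polling helper. The `callTool` signature and the `resolved` field are assumptions standing in for a real MCP client call:

```typescript
// Stand-in for an MCP client's tool invocation; shape is assumed.
type ToolCall = (
  name: string,
  args: Record<string, unknown>
) => Promise<{ resolved: boolean }>;

// After re-triggering the endpoint, ask verify_fix whether the finding
// is gone, retrying a few times in case traffic hasn't arrived yet.
async function confirmFix(
  callTool: ToolCall,
  findingId: string,
  attempts = 3
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    const result = await callTool("verify_fix", { finding_id: findingId });
    if (result.resolved) return true; // fix confirmed
  }
  return false; // still detected: the finding stays (or moves back to) open
}
```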

Finding Lifecycle

Every finding brakit detects goes through a lifecycle:

Open
Issue detected in live traffic
Fixing
Someone (or an AI) is working on it
Resolved
No longer detected; fix confirmed
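
The transitions above, including the regression path back to open, amount to a tiny state machine. A minimal sketch (brakit's internal event names are not documented here, so these are illustrative):

```typescript
type State = "open" | "fixing" | "resolved";
type Event = "start_fix" | "verified" | "redetected"; // assumed event names

// Lifecycle transitions: open → fixing → resolved, with any state
// dropping back to open when the issue is seen again in live traffic.
function nextState(current: State, event: Event): State {
  switch (event) {
    case "start_fix":
      return current === "open" ? "fixing" : current;
    case "verified":
      return "resolved";
    case "redetected":
      return "open"; // a regression reopens the finding
  }
}
```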

Findings persist across app restarts in .brakit/findings.json. Each finding gets a stable ID (SHA-256 hash of the rule, endpoint, and description) so the AI can refer to it consistently.
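
A stable, content-derived ID is what lets the AI refer to the same finding across calls and restarts. A sketch of the idea, assuming the three inputs are simply joined before hashing (brakit's exact serialization isn't documented here):

```typescript
import { createHash } from "node:crypto";

// Deterministic finding ID: the same rule + endpoint + description
// always hashes to the same 64-character hex string.
function findingId(rule: string, endpoint: string, description: string): string {
  return createHash("sha256")
    .update([rule, endpoint, description].join("\n"))
    .digest("hex");
}
```

Because the ID depends only on the finding's content, it survives restarts and stays identical no matter when or how often the issue is re-detected.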