
Three Jiras, three GrowthBooks, one MCP pattern

One MCP script + one .env per client + auto-detected auth. The pattern that keeps QLF, Toom, and LAB strictly isolated without ever mixing tenants.

Per-client MCP pattern: same server.py with different per-client .env files produces three scoped tools

The Mintminds GrowthBook Toolbox runs across six clients — three of them (QLF, Toom, LAB) on Jira and GrowthBook, each with its own credentials, its own auth model, and its own custom-field IDs. Three different Jira instances, three different GrowthBook tenants. One agentic dev session has to talk to all six without ever, even once, applying the wrong client’s credentials to the wrong client’s ticket.

The shape that emerged is dumber than my first sketch and safer for it: one script, one MCP entry per client, one .env per client. No branching logic inside the server, no per-tenant conditionals to maintain — every entry is the same script with a different --env-file.

The shape

// .mcp.json
{
  "mcpServers": {
    "jira-client-a": {
      "command": "uvx",
      "args": ["--from", "mcp[cli]", "--with", "httpx", "mcp", "run",
               "scripts/mcp-jira/server.py",
               "--env-file", "src/clients/client-a/.env"]
    },
    "jira-client-b": { /* same script, different --env-file */ },
    "jira-client-c": { /* same script, different --env-file */ }
  }
}

Same script, three entries, three .env files. The MCP client sees three distinct tools — mcp__jira-client-a__*, mcp__jira-client-b__*, mcp__jira-client-c__* — and there’s literally no path inside the running server that can mix them up. The same shape repeats for the three GrowthBook MCPs.
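The statelessness is the whole trick: the script learns which client it serves only from the environment loaded via --env-file. A minimal sketch of that idea (the function name and the exact config keys are mine, not the real server.py):

```python
def load_client_config(env: dict) -> dict:
    """Build the per-client config purely from env vars -- no client names,
    no tenant branching anywhere in the script. Illustrative sketch only."""
    return {
        "project_key": env["JIRA_PROJECT_KEY"],
        # Cloud .envs set ATLASSIAN_SUBDOMAIN, on-prem ones set ATLASSIAN_HOST
        "base_url": (
            f"https://{env['ATLASSIAN_SUBDOMAIN']}.atlassian.net"
            if "ATLASSIAN_SUBDOMAIN" in env
            else f"https://{env['ATLASSIAN_HOST']}"
        ),
    }

cfg = load_client_config({"JIRA_PROJECT_KEY": "ABC",
                          "ATLASSIAN_SUBDOMAIN": "acme"})
```

Swap the env dict and the same function yields a different tenant — which is exactly why three .mcp.json entries pointed at three .env files can never collide.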

Auth auto-detection

Atlassian Cloud uses email + API token (Basic) and API v3 with Atlassian Document Format for description fields. Atlassian Server / Data Center uses a Bearer PAT and API v2 with plain-text descriptions. Inside the server, the auth flavour is detected by which env keys are present, not by a per-client conditional:

import base64
import os

env = os.environ  # populated from the per-client --env-file

# Bearer PAT (Jira Server / Data Center, API v2)
if env.get("ATLASSIAN_PERSONAL_TOKEN"):
    auth = ("Bearer", env["ATLASSIAN_PERSONAL_TOKEN"])
    api_version = 2

# Basic Auth (Atlassian Cloud, API v3)
elif env.get("ATLASSIAN_API_TOKEN") and env.get("ATLASSIAN_USERNAME"):
    credentials = f"{env['ATLASSIAN_USERNAME']}:{env['ATLASSIAN_API_TOKEN']}"
    auth = ("Basic", base64.b64encode(credentials.encode()).decode())
    api_version = 3

Drop in a new client .env with the right variable names, and the server picks the right auth model. No new code path, no new conditional.
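Downstream, the detected (scheme, token) pair and API version feed straight into the request. A sketch of how that consumption might look (the helper name and shapes are my assumptions, not the real server's internals):

```python
def build_request_parts(auth, api_version, base_url, issue_key):
    """Turn the detected (scheme, token) pair into concrete request parts.
    Illustrative only -- function name and return shape are assumptions."""
    scheme, token = auth
    headers = {
        "Authorization": f"{scheme} {token}",
        "Accept": "application/json",
    }
    # v2 (Server/DC) and v3 (Cloud) differ only in this path segment;
    # the description format (ADF vs plain text) is handled at payload level.
    url = f"{base_url}/rest/api/{api_version}/issue/{issue_key}"
    return url, headers

url, headers = build_request_parts(("Bearer", "pat-123"), 2,
                                   "https://jira.example.com", "XYZ-42")
```

The rest of the request code is identical for both flavours, which is what keeps the server free of per-tenant conditionals.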

Per-client .env shape

# src/clients/client-a/.env (Atlassian Cloud)
ATLASSIAN_SUBDOMAIN=your-cloud-subdomain
ATLASSIAN_USERNAME=[email protected]
ATLASSIAN_API_TOKEN=
JIRA_PROJECT_KEY=ABC
HYPOTHESIS_FIELD=customfield_XXXXX

# src/clients/client-b/.env (Atlassian Server)
ATLASSIAN_HOST=jira.example.com
ATLASSIAN_PERSONAL_TOKEN=
JIRA_PROJECT_KEY=XYZ

Custom-field IDs (e.g. the hypothesis field) get pulled from env at request time so the shared server never hardcodes per-instance IDs. Cloud .envs carry the API token; on-prem .envs carry the PAT. The script doesn’t need to know — it just inspects what’s present.
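Resolving the field ID at request time might look like this — a sketch, with a hypothetical helper name and a made-up field ID:

```python
def hypothesis_from_issue(fields, env):
    """Resolve the per-instance custom-field ID from env at request time,
    so the shared server never hardcodes customfield_* IDs. Sketch only."""
    field_id = env.get("HYPOTHESIS_FIELD")  # e.g. "customfield_12345"
    if not field_id:
        return None  # this client has no hypothesis field configured
    return fields.get(field_id)

hyp = hypothesis_from_issue(
    {"summary": "New CTA test", "customfield_12345": "Bigger CTA lifts CR"},
    {"HYPOTHESIS_FIELD": "customfield_12345"},
)
```

A client without the field configured simply yields nothing — no special-casing required in shared code.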

Generator + Jira-driven scaffolding

.mcp.json entries get tedious fast. There’s a small generator at scripts/generate-mcp.js that walks src/clients/*/.env and rebuilds .mcp.json from scratch. Drop a new client folder, run npm run generate:mcp, the entry appears. Keeps the config in sync with reality without any manual edits.
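The real generator is Node (scripts/generate-mcp.js), but the walk-and-rebuild logic is roughly this — shown as a Python sketch, with the folder layout from the .mcp.json example above:

```python
from pathlib import Path

def build_mcp_config(clients_dir="src/clients"):
    """Rebuild the mcpServers map from whatever client folders carry a .env.
    A Python sketch of the Node generator's logic; the real
    scripts/generate-mcp.js writes .mcp.json directly."""
    servers = {}
    for env_file in sorted(Path(clients_dir).glob("*/.env")):
        client = env_file.parent.name  # folder name doubles as the tool name
        servers[f"jira-{client}"] = {
            "command": "uvx",
            "args": ["--from", "mcp[cli]", "--with", "httpx", "mcp", "run",
                     "scripts/mcp-jira/server.py",
                     "--env-file", str(env_file)],
        }
    return {"mcpServers": servers}
```

Because the config is rebuilt from scratch on every run, a deleted client folder also disappears from .mcp.json — the filesystem is the single source of truth.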

The payoff lands at experiment-scaffold time. A per-client npm run new:experiment:jira:<client> uses these MCPs plus Jira’s REST API to pull the ticket content — title, description, hypothesis field — straight into the experiment’s about.md. So the agent, mid-experiment, has the full Jira context without me ever pasting anything in.
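The rendering step is simple once the issue JSON is in hand. A sketch of turning a fetched issue into about.md content (the helper name is mine; the field names follow Jira's REST response shape):

```python
def render_about_md(issue, hypothesis_field):
    """Render a fetched Jira issue into about.md content. A sketch of the
    scaffold step behind npm run new:experiment:jira:<client>."""
    fields = issue["fields"]
    lines = [
        f"# {issue['key']}: {fields['summary']}",
        "",
        "## Description",
        fields.get("description") or "_(none)_",
    ]
    hypothesis = fields.get(hypothesis_field)
    if hypothesis:
        lines += ["", "## Hypothesis", hypothesis]
    return "\n".join(lines)

about = render_about_md(
    {"key": "ABC-7", "fields": {"summary": "Bigger CTA",
                                "description": "Make it pop",
                                "customfield_12345": "Bigger CTA lifts CR"}},
    hypothesis_field="customfield_12345",
)
```

The hypothesis field ID comes from the same per-client .env, so the scaffold stays as tenant-agnostic as the server.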


The whole pattern is downstream of one design choice: the script is stateless, and the .env is the only source of per-client config. Once the script can’t know anything about which client it’s serving except by reading the env, every other property falls out of that — no cross-client mixing is possible because there’s no shared state to mix.

Open question I’m sitting with: the cost of this is six MCP servers always running per session — fine when I’m in one client repo at a time, but it does mean idle background connections. There’s probably a way to reduce that cost (a thin router process, or some kind of lazy-load) without losing the per-tenant isolation that makes this safe. I haven’t built it yet, and the trade-off isn’t obvious. This post is a snapshot of the current pattern; if I find a cleaner shape, there’ll be a follow-up.

Happy scaffolding 🙂