Migrating from OpenRouter to AI Stats Gateway
Move from OpenRouter to AI Stats Gateway with a safe rollout plan, code-level before/after examples, model-ID checks, and validation commands for non-streaming and streaming traffic.
Prerequisites
- Access to your current OpenRouter integration code and deployment config.
- An AI Stats API key stored as `AI_STATS_API_KEY` in each target environment.
- A list of production model IDs currently used by your application.
1) Inventory current OpenRouter usage
Start by identifying every place OpenRouter is referenced: endpoint URLs, keys, model IDs, and provider-specific headers.
Keeping the migration boundary small lowers risk; in most codebases, updating a single gateway client module is enough.
Checklist
- Find `openrouter.ai` endpoint references.
- Find `OPENROUTER_API_KEY` usage in code, CI, and hosting env vars.
- Find OpenRouter-only headers such as `HTTP-Referer` and `X-Title`.
- Document currently used model IDs and any fallback chain logic.
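The single-module boundary described above can be sketched as a small config resolver that every caller imports, so a provider switch touches only one file. The module shape and function names here are hypothetical, not part of either gateway's API:

```typescript
// Hypothetical single gateway config module: all callers import from here,
// so switching providers means changing only this file (or its env vars).
export interface GatewayConfig {
  baseURL: string;
  apiKey: string;
}

export function gatewayConfig(
  env: Record<string, string | undefined>
): GatewayConfig {
  // Prefer AI Stats when its key is present; fall back to OpenRouter
  // so the two can coexist during the migration window.
  if (env.AI_STATS_API_KEY) {
    return { baseURL: "https://api.phaseo.app/v1", apiKey: env.AI_STATS_API_KEY };
  }
  if (env.OPENROUTER_API_KEY) {
    return { baseURL: "https://openrouter.ai/api/v1", apiKey: env.OPENROUTER_API_KEY };
  }
  throw new Error("No gateway API key configured");
}
```

With this in place, rollback is an environment-variable change rather than a code change.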
Screenshot checkpoints
Repository search results
Capture your code search showing all OpenRouter references before changes.
Suggested asset path: /migrate/openrouter/01-openrouter-search.png
2) Switch base URL and credentials
Replace the OpenRouter base URL with AI Stats and move authentication to `AI_STATS_API_KEY`.
Keep the request payload shape unchanged at first: establish behavior parity before optimizing.
```typescript
// Before (OpenRouter)
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENROUTER_API_KEY,
  baseURL: "https://openrouter.ai/api/v1",
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4.1-mini",
  messages: [{ role: "user", content: "Summarize our migration plan." }],
});
```

```typescript
// After (AI Stats Gateway)
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.AI_STATS_API_KEY,
  baseURL: "https://api.phaseo.app/v1",
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4.1-mini",
  messages: [{ role: "user", content: "Summarize our migration plan." }],
});
```

```shell
# Before
OPENROUTER_API_KEY=...

# After
AI_STATS_API_KEY=...
```

Screenshot checkpoints
Environment variable update
Capture your hosting dashboard after adding `AI_STATS_API_KEY` and removing `OPENROUTER_API_KEY`.
Suggested asset path: /migrate/openrouter/02-env-vars.png
3) Validate model IDs and remove OpenRouter-specific behavior
Do not assume every old model alias is valid. Query `/v1/models` and verify each production model ID.
If your app consumed OpenRouter-specific response fields, adapt at one compatibility layer instead of changing every caller.
Checklist
- Keep `Authorization: Bearer` format unchanged.
- Retain attribution headers only if you still need them for analytics.
- Test both non-streaming and streaming chat paths before rollout.
```shell
curl -s "https://api.phaseo.app/v1/models" \
  -H "Authorization: Bearer $AI_STATS_API_KEY" | jq '.data[0:10] | map(.id)'
```

```typescript
const MODEL_ALIASES: Record<string, string> = {
  "openai/gpt-4.1-mini": "openai/gpt-4.1-mini",
  "anthropic/claude-3.5-sonnet": "anthropic/claude-3.5-sonnet",
};

export function resolveModelId(input: string): string {
  return MODEL_ALIASES[input] ?? input;
}
```

4) Roll out safely
Use a staged rollout: dev first, then a small production slice, then full traffic once metrics are stable.
Track latency, error rate, and token/cost drift. If needed, roll back by reverting only the endpoint and key configuration.
Checklist
- Start with internal traffic only.
- Move to 5-10% production traffic and compare quality/cost metrics.
- Promote to 100% after parity is confirmed.
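One way to implement the staged percentages above is a deterministic canary router: hash a stable user ID so each user consistently lands on the same gateway for the whole rollout. This is a sketch, not part of either gateway; the function names are hypothetical:

```typescript
// Map a stable ID to a value in [0, 1) using a simple string hash.
// Deterministic, so a given user always gets the same routing decision.
function hashToUnitInterval(id: string): number {
  let h = 0;
  for (const ch of id) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return (h % 10000) / 10000;
}

// True when this user falls inside the current rollout slice.
export function useAiStats(userId: string, rolloutPercent: number): boolean {
  return hashToUnitInterval(userId) < rolloutPercent / 100;
}
```

Start with `rolloutPercent` at 5-10, compare metrics against the OpenRouter cohort, then raise it to 100 once parity is confirmed.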
Screenshot checkpoints
Production canary metrics
Capture dashboard or logs showing error rate and latency during canary rollout.
Suggested asset path: /migrate/openrouter/03-canary-metrics.png
Validation Steps
```shell
# 1. Health check
curl -s "https://api.phaseo.app/v1/health"

# 2. List models with your key
curl -s "https://api.phaseo.app/v1/models" \
  -H "Authorization: Bearer $AI_STATS_API_KEY"

# 3. Non-streaming chat completion
curl -s "https://api.phaseo.app/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $AI_STATS_API_KEY" \
  -d '{"model":"openai/gpt-4.1-mini","messages":[{"role":"user","content":"Say hello"}]}'
```

- Run one streaming request through your app-level integration test.
- Run one negative test (invalid key or invalid model) to verify failure handling.
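For the streaming check, an app-level test can assert on the concatenated deltas rather than eyeballing raw output. The helper below is a minimal sketch that assumes an OpenAI-style SSE body (`data: {...}` lines terminated by `data: [DONE]`); it is not part of the gateway SDK:

```typescript
// Extract the text deltas from a captured SSE response body (a sketch,
// assuming OpenAI-style "data:" lines ending with "data: [DONE]").
export function extractDeltas(sseBody: string): string[] {
  const deltas: string[] = [];
  for (const line of sseBody.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const parsed = JSON.parse(payload);
    const delta = parsed.choices?.[0]?.delta?.content;
    if (typeof delta === "string") deltas.push(delta);
  }
  return deltas;
}
```

In the integration test, capture the streamed body, then assert that `extractDeltas(body).join("")` is non-empty and well-formed.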
FAQ
Need a custom migration diff?
Use the interactive assistant for before/after snippets tailored to your current SDK and language.