Migrating from LLM Gateway to AI Stats Gateway
Migrate from LLM Gateway to AI Stats Gateway using an OpenAI-compatible flow, covering endpoint and key migration, model compatibility checks, and staged rollout validation.
Prerequisites
- Your existing LLM Gateway endpoint and API key configuration.
- `AI_STATS_API_KEY` added to dev, staging, and production.
- A baseline sample of output quality, latency, and error rate.
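The latency part of that baseline can be captured with a small percentile helper. This is a sketch, not a required tool; the nearest-rank method shown is one of several common conventions.

```typescript
// Nearest-rank percentile over a set of latency samples (in ms).
// Record p50/p95 before migrating so you can compare after cutover.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}

// Example: latencies collected from the current LLM Gateway deployment.
const latencies = [300, 100, 400, 200];
console.log(percentile(latencies, 50)); // median
console.log(percentile(latencies, 95)); // tail latency
```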
1) Inventory integration points
Identify the exact files that create and configure your LLM Gateway client.
Map current model IDs, retries, and timeout behavior so you can keep parity after cutover.
Checklist
- Locate all `LLM_GATEWAY_*` env var usage.
- Find all base URL references in runtime config.
- Capture currently active model IDs and fallback chains.
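The env-var portion of this checklist can be automated with a small scan. A minimal sketch; `findLegacyEnvKeys` is a hypothetical helper name, and in practice you would also grep your codebase, not just the running environment.

```typescript
// Enumerate legacy LLM Gateway variables in an environment object
// so none are missed at cutover.
function findLegacyEnvKeys(env: Record<string, string | undefined>): string[] {
  return Object.keys(env).filter((k) => k.startsWith("LLM_GATEWAY_"));
}

// Example: scan the current process environment.
console.log(findLegacyEnvKeys(process.env));
```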
2) Switch endpoint and credentials
Keep request payloads unchanged at first; a pure endpoint and key migration minimizes risk.
Rename the key variable to `AI_STATS_API_KEY` across local and deployed environments.
Before:

```ts
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
  baseURL: "https://api.llmgateway.io/v1",
});
```

After:

```ts
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.AI_STATS_API_KEY,
  baseURL: "https://api.phaseo.app/v1",
});
```

Screenshot checkpoints
Environment key migration
Capture your environment config after replacing `LLM_GATEWAY_API_KEY` with `AI_STATS_API_KEY`.
Suggested asset path: /migrate/llmgateway/01-env-migration.png
3) Validate model compatibility
Query the AI Stats model catalog and verify each model used in production.
If aliases differ, implement a single mapping function at the gateway boundary. LLM Gateway examples often use unprefixed IDs like `gpt-4o`, while AI Stats uses provider-prefixed IDs.
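A minimal sketch of that boundary mapping, plus a helper that diffs your production IDs against the catalog returned by `/v1/models`. The prefixed IDs in the alias table (`openai/gpt-4o`, `anthropic/claude-3-5-sonnet`) are illustrative assumptions; verify them against the actual catalog.

```typescript
// Hypothetical alias table: LLM Gateway-style unprefixed IDs
// mapped to assumed AI Stats provider-prefixed IDs.
const MODEL_ALIASES: Record<string, string> = {
  "gpt-4o": "openai/gpt-4o",
  "claude-3-5-sonnet": "anthropic/claude-3-5-sonnet",
};

// Resolve once at the gateway boundary so call sites keep their existing IDs.
function resolveModel(id: string): string {
  return MODEL_ALIASES[id] ?? id;
}

// Given the IDs from the /v1/models response, report any required
// production models that are missing from the catalog.
function missingModels(required: string[], catalogIds: string[]): string[] {
  const catalog = new Set(catalogIds);
  return required.filter((id) => !catalog.has(id));
}
```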
```sh
curl -s "https://api.phaseo.app/v1/models" \
  -H "Authorization: Bearer $AI_STATS_API_KEY" | jq '.data | length'
```

4) Validate and roll out
Run your golden prompt suite and compare quality, latency, and cost metrics to baseline.
Release behind a canary flag and move from low percentage traffic to full traffic once stable.
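One way to sketch the canary flag, assuming a simple percentage-based random split; production rollouts typically use a sticky per-user or per-session hash instead so a given caller stays on one gateway.

```typescript
// Route roughly `canaryPercent` of requests to the new gateway;
// the remainder stay on the old one.
function useAiStats(canaryPercent: number): boolean {
  return Math.random() * 100 < canaryPercent;
}

// Pick the base URL per request; start low (e.g. 5%) and raise it as metrics hold.
const baseURL = useAiStats(5)
  ? "https://api.phaseo.app/v1"
  : "https://api.llmgateway.io/v1";
console.log(baseURL);
```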
Checklist
- Pass non-streaming and streaming tests.
- Verify error handling for invalid key/model scenarios.
- Observe production metrics for one release cycle before removing old config.
Screenshot checkpoints
Canary rollout metrics
Capture your canary dashboard showing latency and error rate before full rollout.
Suggested asset path: /migrate/llmgateway/02-canary-metrics.png
Validation Steps
```sh
curl -s "https://api.phaseo.app/v1/health"
```

```sh
curl -s "https://api.phaseo.app/v1/models" -H "Authorization: Bearer $AI_STATS_API_KEY"
```

- Run one non-streaming and one streaming request in staging.
- Replay your golden prompts and compare to baseline.
FAQ
Need a custom migration diff?
Use the interactive assistant for before/after snippets tailored to your current SDK and language.