World's largest unified AI gateway. Fully open source.
One API for chat, vision, audio, and embeddings with live telemetry, compliance controls, and provider failover built in.
Supported providers
Working to lower pricing + increase model offering
I'm a solo developer, and lowering pricing is a priority: I will cut fees at every opportunity, and I'm also working to expand the gateway's model offering substantially as soon as possible.
Why teams choose the AI Stats Gateway
We deliver an open-source routing, telemetry, and compliance stack so teams can move faster without rebuilding plumbing.
- Health-data gates and failover logic shift traffic between providers in seconds.
- Community contributions ship adapters faster, and every change is public and open for review.
- Full open-source transparency builds trust: audit our security, compliance, and routing logic yourself.
How we compare
Optimised routing and observability are built in - no homegrown adapters, no hidden markups.
| Capability | AI Stats Gateway | OpenRouter | Vercel AI SDK |
|---|---|---|---|
| Model coverage | Largest verified catalogue across text, image, video, and audio, updated nightly. | Large, varies by provider availability. | Bring your own adapters. |
| Routing intelligence | Latency and error-aware routing with deterministic fallbacks and breaker states. | Priority ordering per request. | Minimal routing across a few providers. |
| Observability | Live dashboards, exportable logs, anomaly alerts, and cost tracking built in. | Minimal analytics (requests and spend). | Requires self-managed telemetry. |
| Pricing model | Automatic sliding scale from 10% down to 7.5% depending on usage, with no per-request add-ons. | Flat 5.5% gateway fee. | 0% gateway fee but no multi-provider routing. |
Every endpoint, one schema
Swap endpoints and models without touching your integration. Use the TypeScript SDK (`@ai-stats/ts-sdk`) or Python SDK (`ai-stats-py-sdk`), or call the Gateway directly from cURL/fetch.
Chat, reasoning, and tool-calling with instant provider failover.
OpenAI options (Beta) may change as adapters mature.
Endpoint: https://api.ai-stats.phaseo.app/v1/chat/completions
Every model uses the same base request payload—swap the `model` value.
curl -s -X POST "https://api.ai-stats.phaseo.app/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_GATEWAY_KEY" \
-d '{
"model": "openai/gpt-4-1-2025-04-14",
"messages": [
{
"role": "system",
"content": "You are an AI operations assistant helping summarise live gateway telemetry."
},
{
"role": "user",
"content": "Summarise the last 24 hours of latency and throughput for our release notes."
}
]
}'
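The same request works from any HTTP client. The sketch below uses plain `fetch` from TypeScript against the endpoint above; it assumes a runtime with a global `fetch` (Node 18+ or a browser) and an OpenAI-style response shape, so treat it as a starting point rather than the SDK's API.

```typescript
// Minimal sketch: endpoint, header, and payload mirror the cURL example above.
// Assumes a runtime with a global fetch (Node 18+, Deno, or the browser).
const GATEWAY_URL = "https://api.ai-stats.phaseo.app/v1/chat/completions";

async function summariseTelemetry(apiKey: string): Promise<string> {
  const response = await fetch(GATEWAY_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": apiKey,
    },
    body: JSON.stringify({
      // Swap the model value to target a different provider; the payload shape stays the same.
      model: "openai/gpt-4-1-2025-04-14",
      messages: [
        {
          role: "system",
          content: "You are an AI operations assistant helping summarise live gateway telemetry.",
        },
        {
          role: "user",
          content: "Summarise the last 24 hours of latency and throughput for our release notes.",
        },
      ],
    }),
  });

  if (!response.ok) {
    throw new Error(`Gateway request failed: ${response.status}`);
  }

  // Assumes an OpenAI-style chat completion response (choices[0].message.content).
  const data = await response.json();
  return data.choices[0].message.content;
}
```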
Predictable pricing, with an automatic sliding scale
Every top-up covers the raw provider bill plus our gateway fee, which starts at 10% and steps down toward 7.5% as usage grows. The scale adjusts automatically, so higher-volume programs pay a lower rate and see deeper savings.
- Transparent fees: deposit once, route freely.
- Provider-aligned bills with no hidden multipliers.
- Usage exports for finance and RevOps teams - Coming Soon.
- Per-key spend limits keep experiments safe - Beta.
| Scenario | Gateway fee |
|---|---|
| Entry usage | 10% |
| Highest-volume tier | 7.5% |
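As a rough sketch of how the sliding scale plays out, the snippet below computes the total top-up for an illustrative $100 provider bill at the two published ends of the scale; the helper and the dollar figure are examples only, and the usage thresholds between 10% and 7.5% are not assumed here.

```typescript
// Illustrative only: total top-up = raw provider bill + gateway fee.
// Uses the published entry rate (10%) and lowest high-volume rate (7.5%);
// the usage thresholds between those two rates are not modelled here.
function totalTopUp(providerBill: number, feeRate: number): number {
  return providerBill * (1 + feeRate);
}

console.log(totalTopUp(100, 0.1));   // 110   -> $100 bill + $10.00 fee at 10%
console.log(totalTopUp(100, 0.075)); // 107.5 -> $100 bill + $7.50 fee at 7.5%
```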
Reliability you can trust. Openness you can verify.
Latency, uptime, and throughput telemetry feed directly into routing. Every adapter, health probe, and ingestion script lives under an open source licence.
Healthy token volume and stability driven by community contributions and enterprise adoption.
Ready for the gateway?
Get started in minutes or help shape the code.
Spin up routing, reuse providers, and see the code that keeps everything observable and open.