Common questions about AI Stats Gateway. Can't find an answer? Reach out to our team.
AI Stats gives you one OpenAI-compatible surface for model routing, provider failover, pricing context, and observability. The point is not just access to more models. It is being able to swap providers, compare real costs, and keep production traffic stable without rebuilding your integration every time the market moves.
Create an account, add credits if you want managed billing, generate an API key, and point your existing OpenAI-compatible client at AI Stats Gateway. If you already have provider keys, you can also bring your own keys and keep your provider billing directly under your control.
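As a minimal sketch of what "point your existing OpenAI-compatible client at the gateway" means: the example below builds a standard chat-completions request against a placeholder base URL using only the Python standard library. The URL and key format here are illustrative assumptions, not the real gateway endpoint; get the actual values from your account dashboard.

```python
import json
import urllib.request

# Hypothetical placeholders -- substitute the real gateway URL and your API key.
GATEWAY_BASE_URL = "https://gateway.example.com/v1"
API_KEY = "your-api-key"

def build_chat_request(model: str, messages: list[dict]) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completions request aimed at the gateway."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{GATEWAY_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("example-model", [{"role": "user", "content": "Hello"}])
```

Because the request shape is the standard OpenAI one, an existing OpenAI-style SDK can usually be repointed by changing only its base URL and API key.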
Managed usage is billed from your AI Stats credits using the model and provider pricing shown in the catalog. If you bring your own provider keys, the upstream inference cost stays with that provider and AI Stats only applies the documented gateway fee where relevant. The goal is that pricing stays inspectable rather than hidden behind blended markups.
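To make "inspectable pricing" concrete, here is an illustrative cost estimate from catalog-style per-million-token prices. The function and the example prices are assumptions for demonstration, not AI Stats' actual billing code or rates.

```python
def managed_cost_usd(
    prompt_tokens: int,
    completion_tokens: int,
    input_usd_per_mtok: float,
    output_usd_per_mtok: float,
) -> float:
    """Estimate cost of one request from per-million-token catalog prices."""
    return (
        prompt_tokens * input_usd_per_mtok
        + completion_tokens * output_usd_per_mtok
    ) / 1_000_000

# Hypothetical prices: $0.50/M input tokens, $1.50/M output tokens.
cost = managed_cost_usd(1_000, 500, 0.50, 1.50)
```

Because the per-model, per-provider prices are published in the catalog, the same arithmetic can be reproduced against your usage logs.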
Yes. BYOK is a first-class path. You can attach your own provider credentials, keep provider-side billing under your control, and still use the same routing, health, and policy layer on top.
The database and gateway cover chat, embeddings, image, audio, video, moderation, and related model capabilities across a large provider set. Support is surfaced per provider and per model, so you can see exactly what is available before routing production traffic.
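Per-model capability data can be filtered before routing; the sketch below shows the idea with a made-up in-memory catalog. The rows and the `models_with` helper are hypothetical, standing in for whatever the real database exposes.

```python
# Illustrative catalog rows -- model names and capabilities are invented.
CATALOG = [
    {"model": "model-a", "provider": "provider-1", "capabilities": {"chat", "embeddings"}},
    {"model": "model-b", "provider": "provider-2", "capabilities": {"chat", "image"}},
    {"model": "model-c", "provider": "provider-1", "capabilities": {"embeddings"}},
]

def models_with(capability: str, catalog: list[dict] = CATALOG) -> list[str]:
    """Return models advertising a given capability, in catalog order."""
    return [row["model"] for row in catalog if capability in row["capabilities"]]

embedding_models = models_with("embeddings")
```

Checking capability support this way, before sending production traffic, avoids discovering a gap via a runtime error.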
New models are added on a rolling basis as providers release them and as we verify pricing, capabilities, and metadata. High-interest frontier releases are usually prioritised quickly, but accuracy beats rushing incomplete catalog rows into production.
Routing decisions are made request by request using provider health, latency, cost, capability, and your policies. If a provider degrades or errors, AI Stats can fail over to the next eligible provider without requiring a new client integration or manual operational intervention.
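The failover behaviour described above can be sketched as a simple ordered-attempt loop. This is a minimal illustration of the pattern, not the gateway's actual routing engine; the provider callables and names are invented.

```python
from typing import Callable

def route_with_failover(providers: list[tuple[str, Callable[[], str]]]) -> tuple[str, str]:
    """Try providers in policy order; on error, fail over to the next eligible one."""
    last_error: Exception | None = None
    for name, call in providers:
        try:
            return name, call()          # first healthy provider wins
        except Exception as exc:
            last_error = exc             # record the failure, try the next provider
    raise RuntimeError("all providers failed") from last_error

# Hypothetical providers: the primary is degraded, the secondary is healthy.
def primary() -> str:
    raise TimeoutError("provider degraded")

def secondary() -> str:
    return "completion text"

chosen, result = route_with_failover([("primary", primary), ("secondary", secondary)])
```

In the real gateway the ordering itself is dynamic, re-ranked per request by health, latency, cost, and policy, but the client-visible contract is the same: one request in, one response out.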
AI Stats is designed around an OpenAI-compatible request shape, so existing OpenAI-style SDKs and tools can usually be moved across with minimal changes. On top of that, we publish our own SDKs and provider adapters where teams want stronger typing or more direct gateway features.
We log the operational metadata needed to route, audit, and bill requests correctly. Prompt or completion logging should never be treated as implicit. Where logging behaviour is configurable, it should be explicit and visible in the product rather than assumed.
For product help, implementation questions, or data issues, use the docs, GitHub issues, or contact links in the footer. If something in the model database looks wrong, reporting it directly is the fastest way to get it reviewed.