Capabilities, modalities, and lifecycle fields pulled from the model database.
Comparative results across benchmarks shared by the selected models.
| Benchmark | Claude Opus 4 |
|---|---|
| Elimination Game | 4.914 |
| Confabulations | 15.92% |
| EQ-Bench 3 | 1290.0 |
| ARC-AGI-1 | 22.5% |
| LiveBench | 65.9% |
| GPQA Diamond | 83.3% |
| NYT Connections | 34.8% |
| Thematic Generalisation | 1.69 |
| LMArena WebDev | 1405.51 |
| ARC-AGI-2 | 0.0% |
| Ai2 SciArena | 1080.45 |
| Aider-Polyglot | 70.7% |
| LMArena Text | 1414.0 |
| SimpleBench | 58.8% |
Observed provider pricing per million tokens.
All unique meters observed across the selected models.
| Meter | Claude Opus 4 |
|---|---|
| Input Text Tokens | $15.00 |
| Output Text Tokens | $75.00 |
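The per-million-token rates above translate directly into a per-request cost. The sketch below applies the $15.00 input and $75.00 output rates from this document; `request_cost` is a hypothetical helper (not part of any provider SDK), and the token counts in the example are illustrative.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 15.00, output_rate: float = 75.00) -> float:
    """Return the USD cost of one request, given per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: 12,000 input tokens and 800 output tokens.
print(f"${request_cost(12_000, 800):.4f}")  # → $0.2400
```

Output tokens dominate the bill at a 5:1 rate ratio, so long completions cost far more than long prompts of the same size.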
Providers that expose each model based on observed pricing data.
Plans that include each selected model, grouped by organisation (3 plans).
Maximum input and output token capacity.
Usage and distribution terms.
Model release chronology.
Most recent training data date (when available).
A deeper field-by-field view (including benchmarks, pricing, and links).
| General Information | |
|---|---|
| Context Window | Input: 200,000 Output: 32,000 |
| Modalities | In: Text, Vision Out: Text |
| Reasoning | - |
| Web access | - |
| Parameters | - |
| Training Tokens | - |
| License | Proprietary |
| Knowledge Cutoff | Mar 2025 |
| Status | Available |
| Release | May 2025 |
| Announced | May 2025 |
| Deprecation | - |
| Retirement | - |
| Links | |
| Operational Metrics | |
| Cost per 1M Tokens | Input: $15.00 Output: $75.00 |
| Latency | - |
| Throughput | - |
| Benchmarks | |
| ARC-AGI-1 | 22.5% |
| ARC-AGI-2 | 0.0% |
| Ai2 SciArena | 1080.45 |
| Aider-Polyglot | 70.7% |
| Confabulations | 15.92% |
| EQ-Bench 3 | 1290.0 |
| Elimination Game | 4.914 |
| GPQA Diamond | 83.3% |
| LMArena Text | 1414.0 |
| LMArena WebDev | 1405.51 |
| LiveBench | 65.9% |
| NYT Connections | 34.8% |
| SimpleBench | 58.8% |
| Thematic Generalisation | 1.69 |
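The 200,000-token input and 32,000-token output limits listed under Context Window can be turned into a simple pre-flight check before sending a request. This is a minimal sketch under those limits; `fits_context` is a hypothetical helper, and a real client would count tokens with the provider's own tokenizer rather than the illustrative counts used here.

```python
# Limits taken from the Context Window row above.
MAX_INPUT_TOKENS = 200_000
MAX_OUTPUT_TOKENS = 32_000

def fits_context(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """True if a request stays within both the input and output limits."""
    return (prompt_tokens <= MAX_INPUT_TOKENS
            and requested_output_tokens <= MAX_OUTPUT_TOKENS)

print(fits_context(150_000, 8_000))   # within both limits
print(fits_context(150_000, 40_000))  # requested output exceeds the 32k cap
```

Note that the input and output caps are independent here: a prompt well under 200k tokens can still fail the check if the requested completion exceeds 32k tokens.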