Capabilities, modalities, and lifecycle fields pulled from the model database.
Comparative results across benchmarks shared by the selected models.
| Benchmark | Deepseek R1 (2025-05-28) |
|---|---|
| ARC-AGI-2 | 1.1% |
| Confabulations | 14.56 |
| Elimination Game | 6.354 |
| LMArena Text | 1411.0 |
| LMArena WebDev | 1408.84 |
| LiveBench | 69.4% |
| NYT Connections | 49.8% |
| Humanity's Last Exam | 17.7% |
| Ai2 SciArena | 1061.94 |
| AIME 2024 | 91.4% |
| Thematic Generalisation | 1.74 |
| AIME 2025 | 85.5% |
| ARC-AGI-1 | 21.2% |
| GPQA Diamond | 81.0% |
| SimpleBench | 40.8% |
Observed provider pricing per million tokens.
All unique meters observed across the selected models.
| Meter | Deepseek R1 (2025-05-28) |
|---|---|
| Input Text Tokens | $0.70 |
| Output Text Tokens | $2.50 |
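The per-million-token rates above translate directly into per-request cost. A minimal sketch, using the observed Deepseek R1 (2025-05-28) prices ($0.70 input, $2.50 output); the token counts in the example are illustrative placeholders, not measured usage:

```python
# Estimate the USD cost of a single request from per-million-token rates.
# Rates are the observed Deepseek R1 (2025-05-28) prices listed above.

INPUT_RATE_PER_M = 0.70   # USD per 1M input text tokens
OUTPUT_RATE_PER_M = 2.50  # USD per 1M output text tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the observed rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 12,000-token prompt producing a 2,000-token completion.
print(f"${request_cost(12_000, 2_000):.4f}")  # → $0.0134
```

Note that output tokens cost roughly 3.6x input tokens at these rates, so long completions dominate the bill for short prompts.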
A deeper field-by-field view (including benchmarks, pricing, and links).
| Field | Deepseek R1 (2025-05-28) |
|---|---|
| General Information | |
| Context Window | Input: 128,000 Output: 64,000 |
| Modalities | In: Text Out: Text |
| Reasoning | - |
| Web access | - |
| Parameters | 671.0B |
| Training Tokens | 14.8T |
| License | MIT |
| Knowledge Cutoff | - |
| Status | Available |
| Release | May 2025 |
| Announced | May 2025 |
| Deprecation | - |
| Retirement | - |
| Links | |
| Operational Metrics | |
| Cost per 1M Tokens | Input: $0.70 Output: $2.50 |
| Latency | - |
| Throughput | - |
| Benchmarks | |
| AIME 2024 | 91.4% |
| AIME 2025 | 85.5% |
| ARC-AGI-1 | 21.2% |
| ARC-AGI-2 | 1.1% |
| Ai2 SciArena | 1061.94 |
| Confabulations | 14.56 |
| Elimination Game | 6.354 |
| GPQA Diamond | 81.0% |
| Humanity's Last Exam | 17.7% |
| LMArena Text | 1411.0 |
| LMArena WebDev | 1408.84 |
| LiveBench | 69.4% |
| NYT Connections | 49.8% |
| SimpleBench | 40.8% |
| Thematic Generalisation | 1.74 |
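The context-window limits in the table (128,000 input tokens, 64,000 output tokens) are the hard bounds a request must respect. A minimal pre-flight check, assuming a rough 4-characters-per-token heuristic in place of the model's actual tokenizer:

```python
# Check whether a request fits Deepseek R1's listed context limits
# (128,000 input tokens, 64,000 output tokens, per the table above).
# The 4-chars-per-token estimate is a crude heuristic for English text,
# not the model's real tokenizer.

MAX_INPUT_TOKENS = 128_000
MAX_OUTPUT_TOKENS = 64_000

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, requested_output_tokens: int) -> bool:
    """True if the prompt and requested completion fit the listed limits."""
    return (estimate_tokens(prompt) <= MAX_INPUT_TOKENS
            and requested_output_tokens <= MAX_OUTPUT_TOKENS)

print(fits_context("Summarise the benchmark table.", 1_000))  # → True
print(fits_context("x" * 1_000_000, 1_000))                   # → False
```

For production use, swap the heuristic for a real tokenizer count; the limits themselves come straight from the table above.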