Capabilities, modalities, and lifecycle fields pulled from the model database.
Comparative results across benchmarks shared by the selected models.
| Benchmark | GPT 5.2 |
|---|---|
| HMMT 2025 | 99.4% |
| AIME 2025 | 100.0% |
| Frontier Math | 14.6% |
| ARC-AGI-1 | 12.3% |
| ARC-AGI-2 | 0.8% |
| SWE Bench Pro | 55.6% |
| SWE-Bench | 80.0% |
| SWE-Lancer | 74.6% |
| Humanity's Last Exam | 34.5% |
| MMMLU | 89.6% |
| GPQA Diamond | 92.4% |
Observed provider pricing per million tokens.
All unique meters observed across the selected models.
| Meter | GPT 5.2 |
|---|---|
| Input Text Tokens | $1.75 |
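The observed prices (input at $1.75 per million tokens, and output at $14.00 per million as listed under Operational Metrics below) make per-request cost estimation simple linear arithmetic. A minimal sketch, assuming these two observed rates and a hypothetical `request_cost` helper:

```python
# Hypothetical per-request cost estimate for GPT 5.2, based on the
# observed prices: $1.75 per 1M input tokens, $14.00 per 1M output tokens.

INPUT_PRICE_PER_M = 1.75    # USD per 1M input text tokens
OUTPUT_PRICE_PER_M = 14.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token completion:
print(round(request_cost(10_000, 2_000), 4))  # → 0.0455
```

Note the asymmetry: at these rates, output tokens cost eight times as much as input tokens, so completion length dominates the bill for long generations.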
Providers that expose each model based on observed pricing data.
Usage and distribution terms.
Model release chronology.
A deeper field-by-field view (including benchmarks, pricing, and links).
| Field | GPT 5.2 |
|---|---|
| General Information | |
| Context Window | Input: - Output: - |
| Modalities | In: Text, Vision Out: Text |
| Reasoning | - |
| Web access | - |
| Parameters | - |
| Training Tokens | - |
| License | Proprietary |
| Knowledge Cutoff | - |
| Status | Available |
| Release | Dec 2025 |
| Announced | Dec 2025 |
| Deprecation | - |
| Retirement | - |
| Links | |
| Operational Metrics | |
| Cost per 1M Tokens | Input: $1.75 Output: $14.00 |
| Latency | - |
| Throughput | - |
| Benchmarks | |
| AIME 2025 | 100.0% |
| ARC-AGI-1 | 12.3% |
| ARC-AGI-2 | 0.8% |
| Frontier Math | 14.6% |
| GPQA Diamond | 92.4% |
| HMMT 2025 | 99.4% |
| Humanity's Last Exam | 34.5% |
| MMMLU | 89.6% |
| SWE Bench Pro | 55.6% |
| SWE-Bench | 80.0% |
| SWE-Lancer | 74.6% |