Capabilities, modalities, and lifecycle fields pulled from the model database.
Comparative results across benchmarks shared by the selected models.
| Benchmark | Score |
| --- | --- |
| MMLU | 80.5% |
| Mathvista | 67.1% |
| MMLU-Pro | 69.1% |
| SimpleQA | 12.1% |
| AI2D | 92.9% |
| ChartQA | 87.4% |
| DocVQA | 94.9% |
| GPQA | 44.2% |
| MMMU | 62.5% |
| GPQA Diamond | 46.1% |
| HumanEval | 92.9% |
| MATH | 69.4% |
| EQ-Bench 3 | 1126.5 |
Observed provider pricing per million tokens.
No providers found yet.
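Once provider rates are recorded here, a per-request estimate follows directly from the per-million-token convention used throughout this page. A minimal sketch (the rates and token counts below are hypothetical placeholders, not observed pricing):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Estimate one request's cost in USD, given rates quoted per 1M tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Hypothetical rates: $0.10 per 1M input tokens, $0.30 per 1M output tokens.
print(round(request_cost(12_000, 1_500, 0.10, 0.30), 6))  # prints 0.00165
```

Input and output tokens are priced separately because most providers quote distinct rates for each direction.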
A deeper field-by-field view (including benchmarks, pricing, and links).
| Field | Value |
| --- | --- |
| General Information | |
| Context Window | Input: 128,000 tokens; Output: 128,000 tokens |
| Modalities | Input: Text, Vision; Output: Text |
| Reasoning | - |
| Web access | - |
| Parameters | 24.0B |
| Training Tokens | - |
| License | Apache 2.0 |
| Knowledge Cutoff | - |
| Status | Available |
| Release | Jun 2025 |
| Announced | Jun 2025 |
| Deprecation | - |
| Retirement | - |
| Links | - |
| Operational Metrics | |
| Cost per 1M Tokens | Input: -; Output: - |
| Latency | - |
| Throughput | - |
| Benchmarks | |
| AI2D | 92.9% |
| ChartQA | 87.4% |
| DocVQA | 94.9% |
| EQ-Bench 3 | 1126.5 |
| GPQA | 44.2% |
| GPQA Diamond | 46.1% |
| HumanEval | 92.9% |
| MATH | 69.4% |
| MMLU | 80.5% |
| MMLU-Pro | 69.1% |
| MMMU | 62.5% |
| Mathvista | 67.1% |
| SimpleQA | 12.1% |