Capabilities, modalities, and lifecycle fields pulled from the model database.
Comparative results across benchmarks shared by the selected models.
| Benchmark | Gemini 3.1 Flash Lite Preview |
|---|---|
| MMMLU | 88.9% |
| Humanity's Last Exam | 16.0% |
| GPQA Diamond | 86.9% |
| LiveCodeBench | 72.0% |
| MMMU Pro | 76.8% |
| CharXiv-Reasoning | 73.2% |
| OpenAI MRCR 8 Needle 128k | 60.1% |
| Video MMMU | 84.8% |
| SimpleQA | 43.3% |
| FACTS Benchmark Suite | 40.6% |
| OpenAI MRCR 8 Needle 1m | 12.3% |
Observed provider pricing per million tokens.
All unique meters observed across the selected models.
| Meter | Gemini 3.1 Flash Lite Preview |
|---|---|
| Input Text Tokens | $0.25 |
| Output Text Tokens | $1.50 |
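At these rates, a request's cost is simply tokens ÷ 1,000,000 × the per-meter rate. A minimal sketch, using the observed $0.25/1M input and $1.50/1M output prices from this sheet (the helper name `request_cost` is illustrative, not an API):

```python
# Estimate request cost from per-million-token rates.
# Rates are the observed Gemini 3.1 Flash Lite Preview prices:
# $0.25 per 1M input tokens, $1.50 per 1M output tokens.
INPUT_RATE_PER_M = 0.25
OUTPUT_RATE_PER_M = 1.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# e.g. a 10,000-token prompt with a 2,000-token reply:
# 10_000 * 0.25/1e6 + 2_000 * 1.50/1e6 = $0.0025 + $0.0030 = $0.0055
print(f"${request_cost(10_000, 2_000):.4f}")
```

Keeping the rates as module-level constants makes it easy to swap in updated pricing as providers revise their meters.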
Providers that expose each model, based on observed pricing data.
Maximum input and output token capacity.
Usage and distribution terms.
Model release chronology.
Most recent training data date (when available).
A deeper field-by-field view (including benchmarks, pricing, and links).
| Field | Gemini 3.1 Flash Lite Preview |
|---|---|
| General Information | |
| Context Window | Input: 1,048,576 Output: 65,536 |
| Modalities | In: Text, Vision, Video, Audio Out: Text |
| Reasoning | - |
| Web access | - |
| Parameters | - |
| Training Tokens | - |
| License | Proprietary |
| Knowledge Cutoff | Jan 2025 |
| Status | Available |
| Release | Mar 2026 |
| Announced | Mar 2026 |
| Deprecation | - |
| Retirement | - |
| Links | - |
| Operational Metrics | |
| Cost per 1M Tokens | Input: $0.25 Output: $1.50 |
| Latency | - |
| Throughput | - |
| Benchmarks | |
| CharXiv-Reasoning | 73.2% |
| FACTS Benchmark Suite | 40.6% |
| GPQA Diamond | 86.9% |
| Humanity's Last Exam | 16.0% |
| LiveCodeBench | 72.0% |
| MMMLU | 88.9% |
| MMMU Pro | 76.8% |
| OpenAI MRCR 8 Needle 128k | 60.1% |
| OpenAI MRCR 8 Needle 1m | 12.3% |
| SimpleQA | 43.3% |
| Video MMMU | 84.8% |
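The context-window figures above (1,048,576 input / 65,536 output tokens) lend themselves to a simple pre-flight check before dispatching a request. A sketch under the assumption that token counts are already known; the name `fits_context` is illustrative, not a real API:

```python
# Context limits from the spec sheet:
# 1,048,576 input tokens, 65,536 output tokens.
MAX_INPUT_TOKENS = 1_048_576
MAX_OUTPUT_TOKENS = 65_536

def fits_context(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """True if the request stays within both published limits."""
    return (prompt_tokens <= MAX_INPUT_TOKENS
            and requested_output_tokens <= MAX_OUTPUT_TOKENS)

print(fits_context(900_000, 32_000))    # within both limits
print(fits_context(1_200_000, 32_000))  # prompt exceeds the input cap
```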