Individual benchmark scores by model, sorted by top score (highest first).
| Model | Reported | Top Score | Info | Self Reported | Source |
|---|---|---|---|---|---|
| Qwen 2.5 Math PRM 72B | - | 96.40% | inferred family alias from qwen2.5-vl-72b (score=0.4050; benches=30) | Yes | Source |
| Qwen 2.5 Math 72B | - | 96.40% | inferred high-confidence family alias from qwen2.5-vl-72b (score=0.4700; benches=30) | Yes | Source |
| Qwen 2.5 Math RM 72B | - | 96.40% | inferred family alias from qwen2.5-vl-72b (score=0.4092; benches=30) | Yes | Source |
| Qwen 2.5 VL 72B Instruct | - | 96.40% | - | Yes | Source |
| Qwen 2.5 VL 7B Instruct | - | 95.70% | - | Yes | Source |
| Qwen 2.5 3B | - | 95.70% | inferred family alias from qwen2.5-vl-7b (score=0.3531; benches=32) | Yes | Source |
| Qwen 2.5 0.5B | - | 95.70% | inferred family alias from qwen2.5-vl-7b (score=0.3131; benches=32) | Yes | Source |
| Qwen 2.5 1.5B | - | 95.70% | inferred family alias from qwen2.5-vl-7b (score=0.3131; benches=32) | Yes | Source |
| Qwen 2.5 Math 7B PRM800K | - | 95.20% | inferred family alias from qwen2.5-omni-7b (score=0.3696; benches=45) | Yes | Source |
| Qwen 2.5 Omni 7B | - | 95.20% | - | Yes | Source |
| Qwen 2.5 Coder 7B | - | 95.20% | inferred high-confidence family alias from qwen2.5-omni-7b (score=0.4700; benches=45) | Yes | Source |
| Qwen 2.5 Math PRM 7B | - | 95.20% | inferred family alias from qwen2.5-omni-7b (score=0.4092; benches=45) | Yes | Source |
| Qwen 2.5 Coder 3B | - | 95.20% | inferred family alias from qwen2.5-omni-7b (score=0.3000; benches=45) | Yes | Source |
| Qwen 2.5 Omni 3B | - | 95.20% | inferred high-confidence family alias from qwen2.5-omni-7b (score=0.4933; benches=45) | Yes | Source |
| Qwen 2.5 Math 7B | - | 95.20% | inferred high-confidence family alias from qwen2.5-omni-7b (score=0.4767; benches=45) | Yes | Source |
| Mistral Small 3.2 | 20 Jun 2025 | 94.86% | - | Yes | Source |
| Qwen 2.5 VL 3B Instruct | - | 94.80% | inferred high-confidence family alias from qwen2.5-vl-32b (score=0.4914; benches=28) | Yes | Source |
| Qwen 2.5 Coder 32B Instruct | - | 94.80% | inferred high-confidence family alias from qwen2.5-vl-32b (score=0.4641; benches=28) | Yes | Source |
| Qwen 2.5 VL 32B Instruct | - | 94.80% | - | Yes | Source |
| Llama 4 Maverick | 05 Apr 2025 | 94.40% | - | Yes | Source |
| Llama 4 Scout | 05 Apr 2025 | 94.40% | - | Yes | Source |
| Grok 2 | 13 Aug 2024 | 93.60% | - | Yes | Source |
| Pixtral Large | 18 Nov 2024 | 93.30% | - | Yes | Source |
| DeepSeek VL2 | 13 Dec 2024 | 93.30% | - | Yes | Source |
| Grok 2 Mini | 13 Aug 2024 | 93.20% | - | Yes | Source |
| Phi 4 multimodal instruct | 01 Feb 2025 | 93.20% | - | Yes | Source |
| GPT 4o Search Preview | 11 Mar 2025 | 92.80% | inferred modality/version alias from gpt-4o-2024-08-06 | Yes | Source |
| GPT 4o Transcribe | 20 Mar 2025 | 92.80% | inferred modality/version alias from gpt-4o-2024-08-06 | Yes | Source |
| GPT 4o (2024-08-06) | 06 Aug 2024 | 92.80% | - | Yes | Source |
| GPT 4o Realtime Preview (2024-10-01) | 01 Oct 2024 | 92.80% | inferred modality/version alias from gpt-4o-2024-08-06 | Yes | Source |
| GPT 4o Audio (2024-12-17) | 17 Dec 2024 | 92.80% | inferred modality/version alias from gpt-4o-2024-08-06 | Yes | Source |
| GPT 4o Audio (2025-06-03) | 03 Jun 2025 | 92.80% | inferred modality/version alias from gpt-4o-2024-08-06 | Yes | Source |
| GPT 4o Audio (2024-10-01) | 01 Oct 2024 | 92.80% | inferred modality/version alias from gpt-4o-2024-08-06 | Yes | Source |
| GPT 4o Transcribe Diarize | 15 Oct 2025 | 92.80% | inferred modality/version alias from gpt-4o-2024-08-06 | Yes | Source |
| GPT 4o Realtime Preview (2025-06-03) | 03 Jun 2025 | 92.80% | inferred modality/version alias from gpt-4o-2024-08-06 | Yes | Source |
| DeepSeek VL2 Small | 13 Dec 2024 | 92.30% | - | Yes | Source |
| Pixtral 12B | 17 Sep 2024 | 90.70% | inferred version-family alias from pixtral-12b-2409 | Yes | Source |
| Llama 3.2 90B Vision Instruct | - | 90.10% | inferred high-confidence family alias from llama-3.2-90b-instruct (score=0.5873; benches=13) | Yes | Source |
| DeepSeek VL2 Tiny | 13 Dec 2024 | 88.90% | - | Yes | Source |
| Llama 3.2 11B Vision Instruct | - | 88.40% | inferred high-confidence family alias from llama-3.2-11b-instruct (score=0.5873; benches=11) | Yes | Source |
| Llama 3.2 1B Instruct | 25 Sep 2024 | 88.40% | inferred family alias from llama-3.2-11b-instruct (score=0.4200; benches=11) | Yes | Source |
| Grok 1.5 | 28 Mar 2024 | 85.60% | - | Yes | Source |
| Grok 1.5V | 12 Apr 2024 | 85.60% | - | Yes | Source |
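The `Info` column follows a compact machine-generated pattern, e.g. `inferred family alias from qwen2.5-vl-72b (score=0.4050; benches=30)`, where the parenthesised part (the alias-match score and bench count) is sometimes omitted. A minimal sketch of parsing such cells into structured fields, assuming only the formats visible in the table above (the function name and field names are illustrative, not part of any published tooling):

```python
import re

# Matches Info cells like:
#   "inferred family alias from qwen2.5-vl-72b (score=0.4050; benches=30)"
#   "inferred modality/version alias from gpt-4o-2024-08-06"  (no parenthesised part)
INFO_RE = re.compile(
    r"inferred (?P<kind>.+?) alias from (?P<base>\S+)"
    r"(?: \(score=(?P<score>[\d.]+); benches=(?P<benches>\d+)\))?"
)

def parse_info(info: str):
    """Parse an Info cell; returns None for '-' or unrecognised cells."""
    if info.strip() == "-":
        return None
    m = INFO_RE.match(info.strip())
    if not m:
        return None
    return {
        # e.g. "family", "high-confidence family", "modality/version"
        "kind": m.group("kind"),
        # the model the score was carried over from
        "base": m.group("base"),
        "score": float(m.group("score")) if m.group("score") else None,
        "benches": int(m.group("benches")) if m.group("benches") else None,
    }

print(parse_info("inferred family alias from qwen2.5-vl-72b (score=0.4050; benches=30)"))
```

Cells without the parenthesised details parse with `score` and `benches` set to `None`, so both Info variants in the table round-trip through the same function.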