Individual benchmark scores, ordered by top score (release dates shown where available).
| Model | Reported | Top Score | Benchmark | Info | Self Reported | Source |
|---|---|---|---|---|---|---|
| Qwen 2 VL 2B | - | 85.50 | LLM Stats (ZeroEval) | inferred family alias from qwen2-vl-72b (score=0.4083; benches=15) | Yes | Source |
| Qwen 2 VL 72B | - | 85.50 | LLM Stats (ZeroEval) | - | Yes | Source |
| Qwen 2.5 0.5B | - | 84.90 | LLM Stats (ZeroEval) | inferred family alias from qwen2.5-vl-7b (score=0.3131; benches=32) | Yes | Source |
| Qwen 2.5 1.5B | - | 84.90 | LLM Stats (ZeroEval) | inferred family alias from qwen2.5-vl-7b (score=0.3131; benches=32) | Yes | Source |
| Qwen 2.5 3B | - | 84.90 | LLM Stats (ZeroEval) | inferred family alias from qwen2.5-vl-7b (score=0.3531; benches=32) | Yes | Source |
| Qwen 2.5 VL 7B Instruct | - | 84.90 | LLM Stats (ZeroEval) | - | Yes | Source |
| Qwen 2.5 Coder 3B | - | 84.40 | LLM Stats (ZeroEval) | inferred family alias from qwen2.5-omni-7b (score=0.3000; benches=45) | Yes | Source |
| Qwen 2.5 Coder 7B | - | 84.40 | LLM Stats (ZeroEval) | inferred high-confidence family alias from qwen2.5-omni-7b (score=0.4700; benches=45) | Yes | Source |
| Qwen 2.5 Math 7B | - | 84.40 | LLM Stats (ZeroEval) | inferred high-confidence family alias from qwen2.5-omni-7b (score=0.4767; benches=45) | Yes | Source |
| Qwen 2.5 Math 7B PRM800K | - | 84.40 | LLM Stats (ZeroEval) | inferred family alias from qwen2.5-omni-7b (score=0.3696; benches=45) | Yes | Source |
| Qwen 2.5 Math PRM 7B | - | 84.40 | LLM Stats (ZeroEval) | inferred family alias from qwen2.5-omni-7b (score=0.4092; benches=45) | Yes | Source |
| Qwen 2.5 Omni 3B | - | 84.40 | LLM Stats (ZeroEval) | inferred high-confidence family alias from qwen2.5-omni-7b (score=0.4933; benches=45) | Yes | Source |
| Qwen 2.5 Omni 7B | - | 84.40 | LLM Stats (ZeroEval) | - | Yes | Source |
| DeepSeek VL2 | 13 Dec 2024 | 84.20 | LLM Stats (ZeroEval) | - | Yes | Source |
| DeepSeek VL2 Small | 13 Dec 2024 | 83.40 | LLM Stats (ZeroEval) | - | Yes | Source |
| DeepSeek VL2 Tiny | 13 Dec 2024 | 80.70 | LLM Stats (ZeroEval) | - | Yes | Source |
| Grok 1.5V | 12 Apr 2024 | 78.10 | LLM Stats (ZeroEval) | - | Yes | Source |
| Phi 4 Multimodal Instruct | 01 Feb 2025 | 75.60 | LLM Stats (ZeroEval) | - | Yes | Source |
| Llama 3.2 90B Vision Instruct | - | 73.50 | LLM Stats (ZeroEval) | inferred high-confidence family alias from llama-3.2-90b-instruct (score=0.5873; benches=13) | Yes | Source |
| Phi 3 Vision 128K Instruct | - | 72.00 | LLM Stats (ZeroEval) | inferred family alias from phi-3.5-vision-instruct (score=0.3495; benches=9) | Yes | Source |
| Phi 3.5 Vision Instruct | 23 Aug 2024 | 72.00 | LLM Stats (ZeroEval) | - | Yes | Source |
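Note that the table mixes two kinds of rows: direct evaluations, and rows whose score was copied from a family sibling (marked "inferred … family alias" in the Info column). When consuming this data, it may be worth separating the two. A minimal sketch, using a small hypothetical subset of the rows above in CSV form:

```python
import csv
import io

# Hypothetical subset of the table above; column names are assumptions,
# not the leaderboard's own schema.
TABLE = """\
model,reported,top_score,info
Qwen 2 VL 72B,-,85.50,-
Qwen 2 VL 2B,-,85.50,inferred family alias from qwen2-vl-72b
DeepSeek VL2,13 Dec 2024,84.20,-
"""

rows = list(csv.DictReader(io.StringIO(TABLE)))

# A row is a direct evaluation unless its Info column flags it as inferred.
direct = [r["model"] for r in rows if not r["info"].startswith("inferred")]
inferred = [r["model"] for r in rows if r["info"].startswith("inferred")]

print(direct)    # ['Qwen 2 VL 72B', 'DeepSeek VL2']
print(inferred)  # ['Qwen 2 VL 2B']
```

The "high-confidence" qualifier in some Info cells could be handled the same way, e.g. by a second substring check, if a finer-grained split is needed.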