Coding Index
Evaluates models' ability to solve programming problems, including those requiring scientific and research domain knowledge.
Price vs Coding Performance: interactive scatter chart of the 55 models below (chart omitted).
Tags: Reasoning = reasoning model; Open = open weights; EU / Global = regional availability.

| # | Model | Creator | Coding Score ↓ | Price $/1M | Context | Tags |
|---|---|---|---|---|---|---|
| 1 | GPT-5.4 | OpenAI | 57.3 | $5.63 | 400k | Reasoning, Global |
| 2 | Gemini 3.1 Pro | Google | 55.5 | $4.50 | 2M | Reasoning, Global |
| 3 | GPT-5.3 Codex | OpenAI | 53.1 | $4.81 | 400k | Reasoning, Global |
| 4 | GPT-5.4 Mini | OpenAI | 51.5 | $1.69 | 400k | Reasoning, Global |
| 5 | Claude Sonnet 4.6 | Anthropic | 50.9 | $6.00 | 200k | Reasoning, EU |
| 6 | GPT-5.2 | OpenAI | 48.7 | $4.81 | 400k | Reasoning, Global |
| 7 | Claude Opus 4.6 | Anthropic | 48.1 | $10.00 | 200k | Reasoning, EU |
| 8 | Claude Opus 4.5 | Anthropic | 47.8 | $10.00 | 200k | Reasoning, EU |
| 9 | Muse Spark | Meta | 47.5 | $0.00 | 128k | Reasoning, Global |
| 10 | Gemini 3 Pro | Google | 46.5 | $4.50 | 2M | Reasoning, Global |
| 11 | GPT-5.1 | OpenAI | 44.7 | $3.44 | 400k | Reasoning, EU |
| 12 | GLM-5 | Z AI | 44.2 | $1.55 | 128k | Reasoning, Global |
| 13 | GPT-5.4 Nano | OpenAI | 43.9 | $0.46 | 400k | Reasoning, Global |
| 14 | GLM-5.1 | Z AI | 43.4 | $2.15 | 128k | Reasoning, Global |
| 15 | Qwen3.6 Plus | Alibaba | 42.9 | $1.13 | 128k | Reasoning, Global |
| 16 | Gemini 3 Flash | Google | 42.6 | $1.13 | 1M | Reasoning, Global |
| 17 | MiniMax M2.7 | MiniMax | 41.9 | $0.53 | 1M | Reasoning, Global |
| 18 | MiMo V2 Pro | Xiaomi | 41.4 | $1.50 | 128k | Reasoning, Open, Global |
| 19 | Qwen3.5 397B | Alibaba | 41.3 | $1.35 | 128k | Reasoning, Open, Global |
| 20 | Grok 4.20 | xAI | 40.5 | $3.00 | 256k | Reasoning, Global |
| 21 | Grok 4 | xAI | 40.5 | $6.00 | 256k | Reasoning, Global |
| 22 | Kimi K2.5 | Kimi | 39.5 | $1.20 | 128k | Reasoning, Global |
| 23 | Claude Sonnet 4.5 | Anthropic | 38.6 | $6.00 | 200k | Reasoning, EU |
| 24 | o3 | OpenAI | 38.4 | $3.50 | 200k | Reasoning, EU |
| 25 | MiniMax M2.5 | MiniMax | 37.4 | $0.53 | 1M | Reasoning, Global |
| 26 | DeepSeek V3.2 | DeepSeek | 36.7 | $0.32 | 128k | Reasoning, Open, Global |
| 27 | GPT-5 | OpenAI | 36.0 | $3.44 | 400k | Reasoning, EU |
| 28 | GPT-5 Mini | OpenAI | 35.3 | $0.69 | 400k | Reasoning, EU |
| 29 | DeepSeek V3.2 (Non-reasoning) | DeepSeek | 34.6 | $0.32 | 128k | Open, Global |
| 30 | Claude Sonnet 4 | Anthropic | 34.1 | $6.00 | 200k | Reasoning, EU |
| 31 | Claude Haiku 4.5 | Anthropic | 32.6 | $2.00 | 200k | Reasoning, EU |
| 32 | Gemini 2.5 Pro | Google | 31.9 | $3.44 | 1M | Reasoning, EU |
| 33 | Nemotron 3 Super | NVIDIA | 31.2 | $0.41 | 128k | Reasoning, Open, Global |
| 34 | Grok 4.1 Fast | xAI | 30.9 | $0.28 | 256k | Reasoning, Global |
| 35 | Claude 3.7 Sonnet | Anthropic | 27.6 | $6.00 | 200k | Reasoning, EU |
| 36 | o4-mini | OpenAI | 25.6 | $1.93 | 200k | Reasoning, EU |
| 37 | Mistral Small 4 | Mistral | 24.3 | $0.26 | 128k | Reasoning, Open, Global |
| 38 | Nova 2 Lite | Amazon | 23.4 | $0.85 | 300k | Reasoning, EU |
| 39 | Mistral Large 3 | Mistral | 22.7 | $0.75 | 128k | Open, Global |
| 40 | Gemini 2.5 Flash | Google | 22.2 | $0.85 | 1M | Reasoning, EU |
| 41 | GPT-4.1 | OpenAI | 21.8 | $3.50 | 1M | EU |
| 42 | o1 | OpenAI | 20.5 | $26.25 | 200k | Reasoning, EU |
| 43 | GPT-5 Nano | OpenAI | 20.3 | $0.14 | 400k | Reasoning, EU |
| 44 | GPT-4.1 Mini | OpenAI | 18.5 | $0.70 | 1M | EU |
| 45 | o3-mini | OpenAI | 17.3 | $1.93 | 200k | Reasoning, EU |
| 46 | GPT-4o | OpenAI | 16.7 | $4.38 | 128k | EU |
| 47 | Llama 4 Maverick | Meta | 15.6 | $0.47 | 1M | Open, Global |
| 48 | GPT-4.1 Nano | OpenAI | 11.2 | $0.17 | 1M | EU |
| 49 | Llama 3.3 70B | Meta | 10.7 | $0.61 | 128k | Open, Global |
| 50 | Gemini 2.5 Flash-Lite | Google | 9.5 | $0.17 | 1M | EU |
| 51 | GPT-4o Mini | OpenAI | - | $0.26 | 128k | EU |
| 52 | Sonar | Perplexity | - | $1.00 | 128k | Global |
| 53 | Sonar Pro | Perplexity | - | $12.00 | 200k | Global |
| 54 | Sonar Reasoning Pro | Perplexity | - | $6.50 | 128k | Reasoning, Global |
| 55 | Sonar Deep Research | Perplexity | - | $6.50 | 128k | Reasoning, Global |
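The price-vs-performance view above can be reproduced numerically as score per dollar. A minimal sketch, using a handful of rows copied from the table; the score-per-dollar metric is an illustrative value-for-money heuristic, not part of the Coding Index itself:

```python
# A few (model, coding score, price $/1M tokens) rows from the table above.
models = [
    ("GPT-5.4", 57.3, 5.63),
    ("GPT-5.4 Mini", 51.5, 1.69),
    ("GPT-5.4 Nano", 43.9, 0.46),
    ("MiniMax M2.7", 41.9, 0.53),
    ("DeepSeek V3.2", 36.7, 0.32),
]

def score_per_dollar(row):
    """Coding score points per $1 of blended 1M-token price (heuristic)."""
    _name, score, price = row
    return score / price

# Rank by value for money, best first.
ranked = sorted(models, key=score_per_dollar, reverse=True)
for name, score, price in ranked:
    print(f"{name}: {score / price:.1f} points per $")
```

On these rows the cheap open-weights and small models dominate the ratio, which is exactly the trade-off the scatter chart visualizes (frontier score at a price premium vs. most score per dollar).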