The LLM Price Index
Every major large language model, normalized to dollars per million tokens. Scored on value, cheapness, and frontier capability. Independent, automated, refreshed daily.
What this is
The LLM Price Index is a live, independently maintained price comparison for every large language model offered through the OpenRouter catalog. Input and output pricing is normalized to dollars per million tokens, blended 3:1 to produce a single comparable figure, and scored 0–10 on three axes: value, cheapness, and frontier capability. The cheapest paid model right now is Gemma 3n 4B at $0.025 per million tokens (3:1 blended).
Qwen3 32B
6.1/10 · Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It…
Rnj 1 Instruct
6.1/10 · Rnj-1 is an 8B-parameter, dense, open-weight model family developed by Essential AI and trained from scratch with a focus on programming, math,…
Voxtral Small 24B 2507
6.1/10 · Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio input capabilities while retaining best-in-class text performance. It excels at…
Qwen3 Coder 480B A35B
6.0/10 · Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as…
Qwen3 VL 30B A3B Thinking
6.0/10 · Qwen3-VL-30B-A3B-Thinking is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Thinking variant enhances reasoning…
Llama 3.3 Nemotron Super 49B V1.5
6.0/10 · Llama-3.3-Nemotron-Super-49B-v1.5 is a 49B-parameter, English-centric reasoning/chat model derived from Meta’s Llama-3.3-70B-Instruct with a 128K context. It’s post-trained for agentic workflows (RAG, tool…
Codestral 2508
6.0/10 · Mistral's cutting-edge language model for coding, released at the end of July 2025. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code…
Nova Premier 1.0
6.0/10 · Amazon Nova Premier is the most capable of Amazon’s multimodal models for complex reasoning tasks and for use as the best teacher…
Qwen3 30B A3B
5.9/10 · Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning,…
Spotlight
5.9/10 · Spotlight is a 7-billion-parameter vision-language model derived from Qwen 2.5-VL and fine-tuned by Arcee AI for tight image-text grounding tasks. It offers…
Llama Guard 4 12B
5.9/10 · Llama Guard 4 is a Llama 4 Scout-derived multimodal pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can…
Qwen3 Next 80B A3B Thinking
5.9/10 · Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured “thinking” traces by default. It’s designed for hard multi-step…
Claude 3 Haiku
5.9/10 · Claude 3 Haiku is Anthropic's fastest and most compact model, built for near-instant responsiveness and quick, accurate targeted performance. See the launch announcement…
Qwen3 8B
5.9/10 · Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It…
Claude Opus 4.6
5.8/10 · Opus 4.6 is Anthropic’s strongest model for coding and long-running professional tasks. It is built for agents that operate across entire workflows…
Claude Haiku 4.5
5.8/10 · Claude Haiku 4.5 is Anthropic’s fastest and most efficient model, delivering near-frontier intelligence at a fraction of the cost and latency of…
GPT-5.3-Codex
5.8/10 · GPT-5.3-Codex is OpenAI’s most advanced agentic coding model, combining the frontier software engineering performance of GPT-5.2-Codex with the broader reasoning and professional…
Grok Code Fast 1
5.8/10 · Grok Code Fast 1 is a speedy and economical reasoning model that excels at agentic coding. With reasoning traces visible in the…
GLM 4.7
5.8/10 · GLM-4.7 is Z.ai’s latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution. It demonstrates…
Kimi K2 Thinking
5.8/10 · Kimi K2 Thinking is Moonshot AI’s most advanced open reasoning model to date, extending the K2 series into agentic, long-horizon reasoning. Built…
DeepSeek V3.2 Exp
5.8/10 · DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek…
GLM 4.5 Air
5.7/10 · GLM-4.5-Air is the lightweight variant of Z.ai’s latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts…
INTELLECT-3
5.7/10 · INTELLECT-3 is a 106B-parameter Mixture-of-Experts model (12B active) post-trained from GLM-4.5-Air-Base using supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL). It…
Claude Sonnet 4.5
5.7/10 · Claude Sonnet 4.5 is Anthropic’s most advanced Sonnet model to date, optimized for real-world agents and coding workflows. It delivers state-of-the-art performance…
Devstral 2 2512
5.7/10 · Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting…
MiniMax M2
5.7/10 · MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion…
Nemotron Nano 12B 2 VL
5.7/10 · NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a…
GPT-5 Image Mini
5.7/10 · GPT-5 Image Mini combines OpenAI's advanced language capabilities, powered by [GPT-5 Mini](https://openrouter.ai/openai/gpt-5-mini), with GPT Image 1 Mini for efficient image generation. This…
MiniMax M2.1
5.7/10 · MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated…
Command R (08-2024)
5.6/10 · command-r-08-2024 is an update of [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it…
LongCat Flash Chat
5.6/10 · LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per input.…
Grok 3 Mini
5.6/10 · A lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that do not require deep domain knowledge. The…
DeepSeek V3.1 Terminus
5.6/10 · DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that maintains the model's original capabilities while addressing issues reported by users, including language…
GPT-5.1-Codex-Max
5.6/10 · GPT-5.1-Codex-Max is OpenAI’s latest agentic coding model, designed for long-running, high-context software development tasks. It is based on an updated version of…
GPT-5.1
5.6/10 · GPT-5.1 is the latest frontier-grade model in the GPT-5 series, offering stronger general-purpose reasoning, improved instruction adherence, and a more natural conversational…
GPT-5.1-Codex
5.6/10 · GPT-5.1-Codex is a specialized version of GPT-5.1 optimized for software engineering and coding workflows. It is designed for both interactive development sessions…
GLM 5 Turbo
5.6/10 · GLM-5 Turbo is a new model from Z.ai designed for fast inference and strong performance in agent-driven environments such as OpenClaw scenarios.…
Mistral Small Creative
5.6/10 · Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and…
Claude Opus 4.6 (Fast)
5.6/10 · Fast-mode variant of [Opus 4.6](/anthropic/claude-opus-4.6): identical capabilities with higher output speed at premium 6x pricing. Learn more in Anthropic's docs: https://platform.claude.com/docs/en/build-with-claude/fast-mode
o4 Mini High
5.6/10 · OpenAI o4-mini-high is the same model as [o4-mini](/openai/o4-mini) with reasoning_effort set to high. OpenAI o4-mini is a compact reasoning model in the…
o4 Mini
5.6/10 · OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities.…
Mercury
5.6/10 · Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even…
Mercury Coder
5.6/10 · Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than…
Palmyra X5
5.5/10 · Palmyra X5 is Writer's most advanced model, purpose-built for building and scaling AI agents across the enterprise. It delivers industry-leading speed and…
Olmo 2 32B Instruct
5.5/10 · OLMo-2 32B Instruct is a supervised instruction-finetuned variant of the OLMo-2 32B March 2025 base model. It excels in complex reasoning and…
Qwen3 Max Thinking
5.5/10 · Qwen3-Max-Thinking is the flagship reasoning model in the Qwen3 series, designed for high-stakes cognitive tasks that require deep, multi-step reasoning. By significantly…
ERNIE 4.5 21B A3B
5.5/10 · A sophisticated text-based Mixture-of-Experts (MoE) model featuring 21B total parameters with 3B activated per token, delivering exceptional multimodal understanding and generation through…
Mistral Medium 3.1
5.5/10 · Mistral Medium 3.1 is an updated version of Mistral Medium 3, a high-performance, enterprise-grade language model designed to deliver frontier-level…
Free · No spam
Price drops, new models, deprecations
Every Tuesday: which models got cheaper, which launched, which got pulled. Five minutes. No filler.
Three axes, one overall score
- Value (35%) — capability per dollar. Context length, vision, tools, and structured-output support divided by log-scaled blended price.
- Cheapness (35%) — raw affordability. Free models score 10. Paid models use an inverse log curve anchored at $0.01 / Mtok.
- Frontier (30%) — how close to the state of the art. Recent releases, long context windows, and premium pricing all contribute.
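The three-axis weighting above can be sketched in a few lines of Python. Only the weights (35/35/30), the free-model rule, and the $0.01/Mtok anchor come from this page; the slope of the inverse log curve and the 0–10 clamp are illustrative assumptions, since the index does not publish its exact constants.

```python
import math

def cheapness_score(blended_usd_per_mtok: float) -> float:
    """Cheapness axis: free models score a perfect 10; paid models fall
    off on an inverse log curve anchored at $0.01/Mtok.

    The slope (2 points per decade of price) and the 0..10 clamp are
    assumptions for illustration only.
    """
    if blended_usd_per_mtok <= 0:
        return 10.0
    raw = 10.0 - 2.0 * math.log10(blended_usd_per_mtok / 0.01)
    return max(0.0, min(10.0, raw))

def overall_score(value: float, cheapness: float, frontier: float) -> float:
    # Weighted blend of the three axes: 35% value, 35% cheapness, 30% frontier.
    return round(0.35 * value + 0.35 * cheapness + 0.30 * frontier, 1)
```

Under these assumed constants, a $1.00/Mtok model lands at 6.0 on the cheapness axis, and a model scoring 6.0 on all three axes gets a 6.0 overall.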
Blended price formula
- Most production workloads are input-heavy, so the index uses a 3:1 blended price: (input × 0.75) + (output × 0.25).
- All prices are normalized to dollars per million tokens. OpenRouter publishes per-token figures, which we multiply by 1,000,000 before display.
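The two steps above, normalizing per-token prices to $/Mtok and then blending 3:1, amount to a one-line helper. The model prices in the comment are hypothetical.

```python
def blended_per_mtok(input_usd_per_token: float, output_usd_per_token: float) -> float:
    """3:1 blended price in dollars per million tokens."""
    # Per-token dollar figures are scaled to $/Mtok first.
    input_mtok = input_usd_per_token * 1_000_000
    output_mtok = output_usd_per_token * 1_000_000
    # 3:1 input:output -> 75% input weight, 25% output weight.
    return input_mtok * 0.75 + output_mtok * 0.25

# A hypothetical model priced at $3/Mtok input and $15/Mtok output:
# blended_per_mtok(0.000003, 0.000015) ≈ $6.00/Mtok blended
```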
Where does the pricing data come from?
Every model and price on this page is sourced from OpenRouter's public models API, which aggregates pricing directly from model providers including Anthropic, OpenAI, Google, Mistral, Meta, xAI, DeepSeek, and dozens of others. The pipeline re-fetches and re-scores the entire catalog once per day.
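For readers who want to reproduce the normalization themselves, here is a minimal sketch of scoring one catalog record. The record shape assumed here (a `pricing` object holding per-token dollar amounts as strings under `prompt` and `completion`) reflects OpenRouter's public models API at the time of writing; verify against a live response before relying on it.

```python
def blended_from_record(model: dict) -> float:
    """3:1 blended $/Mtok from one OpenRouter /api/v1/models record.

    Assumes the catalog's current shape: pricing.prompt and
    pricing.completion are per-token dollar amounts as strings.
    """
    pricing = model["pricing"]
    prompt = float(pricing["prompt"]) * 1_000_000       # $/Mtok input
    completion = float(pricing["completion"]) * 1_000_000  # $/Mtok output
    return prompt * 0.75 + completion * 0.25

# Hypothetical record in the catalog's shape:
record = {
    "id": "example/model",
    "pricing": {"prompt": "0.000001", "completion": "0.000002"},
}
# $1/Mtok input, $2/Mtok output -> 0.75 + 0.50 ≈ $1.25/Mtok blended
```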
Why normalize to $/million tokens?
Model providers publish prices in inconsistent units — per 1K tokens, per million tokens, per character, sometimes per request. Comparing them directly is error-prone. Dollars per million tokens is the industry's most common reporting unit and makes cross-provider comparisons immediate and honest.
What does "3:1 blended" mean?
Most production LLM workloads are input-heavy — context, RAG retrievals, system prompts — while output is comparatively short. A 3:1 input:output ratio is the informal industry convention for producing a single number that reflects typical cost: (input × 0.75) + (output × 0.25). Your actual ratio may differ; always check both input and output columns for workloads with long generations.
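To see how much the ratio matters, compare a hypothetical model priced at $3/Mtok input and $15/Mtok output under an input-heavy and an output-heavy blend:

```python
def blended(input_mtok: float, output_mtok: float, input_share: float = 0.75) -> float:
    # input_share = 0.75 corresponds to the index's 3:1 convention.
    return input_mtok * input_share + output_mtok * (1.0 - input_share)

input_heavy = blended(3.0, 15.0)         # 3:1 input:output -> $6.00/Mtok
output_heavy = blended(3.0, 15.0, 0.25)  # 1:3 input:output -> $12.00/Mtok
```

For this model, flipping the ratio doubles the effective price, which is why long-generation workloads should check the output column directly.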
What's the cheapest LLM right now?
The cheapest paid model as of the latest scan is Gemma 3n 4B from Google at $0.025 per million tokens (3:1 blended). Sort by "Cheapest" above for the full ranking. Many providers also offer free-tier variants of their models, which score a perfect 10 on the cheapness axis.
Is this affiliated with OpenRouter or any provider?
No. MegaOne AI is independent. OpenRouter is used as a public data source because its models API is the most complete and up-to-date LLM catalog available, but this directory is not operated by OpenRouter, and every model is scored on the same criteria regardless of provider.
How often does the price index update?
A full re-fetch, re-score, and daily snapshot runs once per 24 hours. Snapshots are written to a history table so we can build price-over-time charts and detect drops. New models typically appear within 24 hours of being added to OpenRouter.
Is it free to use?
Yes. Browsing, filtering, sorting, and searching the entire price index is free. The weekly email briefing is free. There is no account required and no paywall.