Live · updated 10 hr ago · 348 models tracked

The LLM Price Index

Every major large language model, normalized to dollars per million tokens. Scored on value, cheapness, and frontier capability. Independent, automated, refreshed daily.

Models 348
Providers 20+
Free Tier Models 27
Median Price / Mtok $0.628

What this is

The LLM Price Index is a live, independently maintained price comparison for every large language model offered through the OpenRouter catalog. Input and output pricing is normalized to dollars per million tokens, blended 3:1 to produce a single comparable figure, and scored 0–10 on three axes: value, cheapness, and frontier capability. The cheapest paid model right now is Gemma 3n 4B at $0.025 per million tokens (3:1 blended).

Qwen3 Next 80B A3B Instruct (free)

7.9/10
Qwen · qwen/qwen3-next-80b-a3b-instruct:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks…

Value 10.0 · Cheap 10.0 · Frontier 3.0
Reasoning · Free · Open weights · 262K ctx · Tools

Qwen Plus 0728 (thinking)

7.1/10
Qwen · qwen/qwen-plus-2025-07-28:thinking
$0.260 Input /Mtok
$0.780 Output /Mtok
$0.390 Blended 3:1

Qwen Plus 0728, based on the Qwen3 foundation model, is a 1-million-token-context hybrid reasoning model with balanced performance, speed,…

Value 10.0 · Cheap 6.8 · Frontier 4.0
Reasoning · 1.0M ctx · Tools

Qwen Plus 0728

7.1/10
Qwen · qwen/qwen-plus-2025-07-28
$0.260 Input /Mtok
$0.780 Output /Mtok
$0.390 Blended 3:1

Qwen Plus 0728, based on the Qwen3 foundation model, is a 1-million-token-context hybrid reasoning model with balanced performance, speed,…

Value 10.0 · Cheap 6.8 · Frontier 4.0
Reasoning · 1.0M ctx · Tools

Qwen3 30B A3B Instruct 2507

7.1/10
Qwen · qwen/qwen3-30b-a3b-instruct-2507
$0.090 Input /Mtok
$0.300 Output /Mtok
$0.143 Blended 3:1

Qwen3-30B-A3B-Instruct-2507 is a 30.5B-parameter mixture-of-experts language model from Qwen, with 3.3B active parameters per inference. It operates in non-thinking mode and is…

Value 10.0 · Cheap 7.7 · Frontier 3.0
Reasoning · Open weights · 262K ctx · Tools

Qwen3 30B A3B Thinking 2507

6.6/10
Qwen · qwen/qwen3-30b-a3b-thinking-2507
$0.080 Input /Mtok
$0.400 Output /Mtok
$0.160 Blended 3:1

Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for…

Value 9.0 · Cheap 7.6 · Frontier 2.5
Reasoning · Open weights · 131K ctx · Tools

Qwen3 14B

6.3/10
Qwen · qwen/qwen3-14b
$0.060 Input /Mtok
$0.240 Output /Mtok
$0.105 Blended 3:1

Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It…

Value 8.6 · Cheap 8.0 · Frontier 1.5
Reasoning · Open weights · 41K ctx · Tools · #34 cheapest

Qwen3 Next 80B A3B Instruct

6.2/10
Qwen · qwen/qwen3-next-80b-a3b-instruct
$0.090 Input /Mtok
$1.10 Output /Mtok
$0.343 Blended 3:1

Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks…

Value 8.3 · Cheap 6.9 · Frontier 3.0
Reasoning · Open weights · 262K ctx · Tools

Qwen3 32B

6.1/10
Qwen · qwen/qwen3-32b
$0.080 Input /Mtok
$0.240 Output /Mtok
$0.120 Blended 3:1

Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It…

Value 8.2 · Cheap 7.8 · Frontier 1.5
Reasoning · Open weights · 41K ctx · Tools · #39 cheapest

Qwen3 30B A3B

5.9/10
Qwen · qwen/qwen3-30b-a3b
$0.080 Input /Mtok
$0.280 Output /Mtok
$0.130 Blended 3:1

Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning,…

Value 7.9 · Cheap 7.8 · Frontier 1.5
Reasoning · Open weights · 41K ctx · Tools · #45 cheapest

Qwen3 Next 80B A3B Thinking

5.9/10
Qwen · qwen/qwen3-next-80b-a3b-thinking
$0.098 Input /Mtok
$0.780 Output /Mtok
$0.268 Blended 3:1

Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured “thinking” traces by default. It’s designed for hard multi-step…

Value 7.6 · Cheap 7.1 · Frontier 2.5
Reasoning · Open weights · 131K ctx · Tools

Qwen3 8B

5.9/10
Qwen · qwen/qwen3-8b
$0.050 Input /Mtok
$0.400 Output /Mtok
$0.138 Blended 3:1

Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It…

Value 7.7 · Cheap 7.7 · Frontier 1.5
Reasoning · Open weights · 41K ctx · Tools

Qwen3 Max Thinking

5.5/10
Qwen · qwen/qwen3-max-thinking
$0.780 Input /Mtok
$3.90 Output /Mtok
$1.56 Blended 3:1

Qwen3-Max-Thinking is the flagship reasoning model in the Qwen3 series, designed for high-stakes cognitive tasks that require deep, multi-step reasoning. By significantly…

Value 5.8 · Cheap 5.6 · Frontier 5.0
Reasoning · 262K ctx · Tools

Qwen3 235B A22B Thinking 2507

5.3/10
Qwen · qwen/qwen3-235b-a22b-thinking-2507
$0.150 Input /Mtok
$1.50 Output /Mtok
$0.486 Blended 3:1

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per…

Value 6.4 · Cheap 6.6 · Frontier 2.5
Reasoning · Open weights · 131K ctx · Tools

QwQ 32B

5.2/10
Qwen · qwen/qwq-32b
$0.150 Input /Mtok
$0.580 Output /Mtok
$0.258 Blended 3:1

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning,…

Value 6.5 · Cheap 7.2 · Frontier 1.5
Reasoning · Open weights · 131K ctx · Tools

Qwen3 Max

5.0/10
Qwen · qwen/qwen3-max
$0.780 Input /Mtok
$3.90 Output /Mtok
$1.56 Blended 3:1

Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge…

Value 5.1 · Cheap 5.6 · Frontier 4.0
Reasoning · 262K ctx · Tools

Qwen3 235B A22B

4.6/10
Qwen · qwen/qwen3-235b-a22b
$0.455 Input /Mtok
$1.82 Output /Mtok
$0.796 Blended 3:1

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between…

Value 4.8 · Cheap 6.2 · Frontier 2.5
Reasoning · Open weights · 131K ctx · Tools

Free · No spam

Price drops, new models, deprecations

Every Tuesday: which models got cheaper, which launched, which got pulled. Five minutes. No filler.

Three axes, one overall score

  • Value (35%) — capability per dollar. Context length, vision, tools, and structured-output support divided by log-scaled blended price.
  • Cheapness (35%) — raw affordability. Free models score 10. Paid models use an inverse log curve anchored at $0.01 / Mtok.
  • Frontier (30%) — how close to the state of the art. Recent releases, long context windows, and premium pricing all contribute.
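The exact scoring curves aren't published on this page, but a cheapness scorer consistent with the cards above can be sketched. The 2-points-per-decade slope is an assumption inferred from the listed scores, not a documented constant, and `cheapness_score` is a hypothetical helper name:

```python
import math

def cheapness_score(blended_price: float) -> float:
    """Cheapness axis: free models score a flat 10; paid models fall on an
    inverse log curve anchored at $0.01/Mtok. The slope of 2 points per
    decade of price is an assumption fitted to the card scores above."""
    if blended_price <= 0:
        return 10.0  # free tier
    score = 10.0 - 2.0 * math.log10(blended_price / 0.01)
    return max(0.0, min(10.0, round(score, 1)))

print(cheapness_score(0.0))    # → 10.0 (free tier)
print(cheapness_score(0.390))  # → 6.8, matching the Qwen Plus 0728 card
print(cheapness_score(1.56))   # → 5.6, matching the Qwen3 Max card
```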

Blended price formula

  • Most production workloads are input-heavy, so the index uses a 3:1 blended price: (input × 0.75) + (output × 0.25).
  • All prices are normalized to dollars per million tokens. OpenRouter publishes per-token figures, which we multiply by 1,000,000 before display.
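Both steps above (scale per-token prices to $/Mtok, then apply the 3:1 blend) can be sketched in a few lines. The helper names are illustrative, not the index's actual code; the sample prices are the Qwen Plus 0728 figures listed above:

```python
def per_mtok(per_token_price: float) -> float:
    """Normalize a per-token price (the unit OpenRouter publishes) to $/million tokens."""
    return per_token_price * 1_000_000

def blended_3to1(input_mtok: float, output_mtok: float) -> float:
    """3:1 input:output blend: (input x 0.75) + (output x 0.25)."""
    return input_mtok * 0.75 + output_mtok * 0.25

# Qwen Plus 0728: $0.26 input, $0.78 output per Mtok
print(round(blended_3to1(0.26, 0.78), 3))  # → 0.39, the card's "Blended 3:1" figure
```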

Where does the pricing data come from?

Every model and price on this page is sourced from OpenRouter's public models API, which aggregates pricing directly from model providers including Anthropic, OpenAI, Google, Mistral, Meta, xAI, DeepSeek, and dozens of others. The pipeline re-fetches and re-scores the entire catalog once per day.

Why normalize to $/million tokens?

Model providers publish prices in inconsistent units — per 1K tokens, per million tokens, per character, sometimes per request. Comparing them directly is error-prone. Dollars per million tokens is the industry's most common reporting unit and makes cross-provider comparisons immediate and honest.

What does "3:1 blended" mean?

Most production LLM workloads are input-heavy — context, RAG retrievals, system prompts — while output is comparatively short. A 3:1 input:output ratio is the informal industry convention for producing a single number that reflects typical cost: (input × 0.75) + (output × 0.25). Your actual ratio may differ; always check both input and output columns for workloads with long generations.
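Because the 3:1 ratio is a convention rather than a rule, it helps to see how much the blend moves for a different mix. This sketch (with a hypothetical `blended` helper and the Qwen Plus 0728 prices listed above) recomputes the figure for a 1:1 workload:

```python
def blended(input_mtok: float, output_mtok: float, input_share: float = 0.75) -> float:
    """Blend input/output $/Mtok prices; input_share=0.75 is the index's 3:1 convention."""
    return input_mtok * input_share + output_mtok * (1 - input_share)

# Qwen Plus 0728: $0.26 input, $0.78 output per Mtok
print(round(blended(0.26, 0.78), 3))       # → 0.39 at the default 3:1 mix
print(round(blended(0.26, 0.78, 0.5), 3))  # → 0.52 for an output-heavy 1:1 mix
```

A 33% jump from shifting the ratio is why long-generation workloads should price from the output column directly.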

What's the cheapest LLM right now?

The cheapest paid model as of the latest scan is Gemma 3n 4B from Google at $0.025 per million tokens (3:1 blended). Sort by "Cheapest" above for the full ranking. Many providers also offer free-tier variants of their models, which score a perfect 10 on the cheapness axis.

Is this affiliated with OpenRouter or any provider?

No. MegaOne AI is independent. OpenRouter is used as a public data source because their models API is the most complete and up-to-date LLM catalog available, but this directory is not operated by OpenRouter, and every model is scored by the same criteria regardless of provider, including models that compete with one another.

How often does the price index update?

A full re-fetch, re-score, and daily snapshot runs once per 24 hours. Snapshots are written to a history table so we can build price-over-time charts and detect drops. New models typically appear within 24 hours of being added to OpenRouter.

Is it free to use?

Yes. Browsing, filtering, sorting, and searching the entire price index is free. The weekly email briefing is free. There is no account required and no paywall.