Live · updated 10 hr ago · 348 Models Tracked

The LLM Price Index

Every major large language model, normalized to dollars per million tokens. Scored on value, cheapness, and frontier capability. Independent, automated, refreshed daily.

Models 348
Providers 20+
Free Tier Models 27
Median Price / Mtok $0.628

What this is

The LLM Price Index is a live, independently maintained price comparison for every large language model offered through the OpenRouter catalog. Input and output pricing is normalized to dollars per million tokens, blended 3:1 to produce a single comparable figure, and scored 0–10 on three axes: value, cheapness, and frontier capability. The cheapest paid model right now is Gemma 3n 4B at $0.025 per million tokens (3:1 blended).

Mistral Nemo

7.2/10
Mistral · mistralai/mistral-nemo
$0.020 Input /Mtok
$0.040 Output /Mtok
$0.025 Blended 3:1

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting…

10.0 Value
9.2 Cheap
1.5 Frontier
Text Open weights 131K ctx Tools #2 cheapest

Devstral Small 1.1

6.6/10
Mistral · mistralai/devstral-small
$0.100 Input /Mtok
$0.300 Output /Mtok
$0.150 Blended 3:1

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All…

9.2 Value
7.7 Cheap
2.5 Frontier
Text Open weights 131K ctx Tools

Devstral 2 2512

5.7/10
Mistral · mistralai/devstral-2512
$0.400 Input /Mtok
$2.00 Output /Mtok
$0.800 Blended 3:1

Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting…

6.7 Value
6.2 Cheap
4.0 Frontier
Text Open weights 262K ctx Tools

Mistral Small Creative

5.6/10
Mistral · mistralai/mistral-small-creative
$0.100 Input /Mtok
$0.300 Output /Mtok
$0.150 Blended 3:1

Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and…

6.1 Value
7.7 Cheap
2.5 Frontier
Text 33K ctx Tools

Mistral Small 3

5.0/10
Mistral · mistralai/mistral-small-24b-instruct-2501
$0.050 Input /Mtok
$0.080 Output /Mtok
$0.058 Blended 3:1

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license,…

5.3 Value
8.5 Cheap
0.5 Frontier
Text Open weights 33K ctx #16 cheapest

Saba

4.7/10
Mistral · mistralai/mistral-saba
$0.200 Input /Mtok
$0.600 Output /Mtok
$0.300 Blended 3:1

Mistral Saba is a 24B-parameter language model specifically designed for the Middle East and South Asia, delivering accurate and contextually relevant responses…

6.0 Value
7.1 Cheap
0.5 Frontier
Text 33K ctx Tools

Mistral Large 2411

4.2/10
Mistral · mistralai/mistral-large-2411
$2.00 Input /Mtok
$6.00 Output /Mtok
$3.00 Blended 3:1

Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large) released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411). It provides a significant upgrade…

4.4 Value
5.1 Cheap
3.0 Frontier
Text 131K ctx Tools

Mixtral 8x7B Instruct

3.9/10
Mistral · mistralai/mixtral-8x7b-instruct
$0.540 Input /Mtok
$0.540 Output /Mtok
$0.540 Blended 3:1

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI for chat and instruction use. It incorporates 8 experts…

4.2 Value
6.5 Cheap
0.5 Frontier
Text Open weights 33K ctx Tools

Mixtral 8x22B Instruct

3.6/10
Mistral · mistralai/mixtral-8x22b-instruct
$2.00 Input /Mtok
$6.00 Output /Mtok
$3.00 Blended 3:1

Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its…

3.6 Value
5.1 Cheap
2.0 Frontier
Text Open weights 66K ctx Tools

Mistral 7B Instruct v0.1

3.4/10
Mistral · mistralai/mistral-7b-instruct-v0.1
$0.110 Input /Mtok
$0.190 Output /Mtok
$0.130 Blended 3:1

A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.

1.8 Value
7.8 Cheap
0.0 Frontier
Text Open weights 3K ctx #46 cheapest

Free · No spam

Price drops, new models, deprecations

Every Tuesday: which models got cheaper, which launched, which got pulled. Five minutes. No filler.

Three axes, one overall score

  • Value (35%) — capability per dollar. Context length, vision, tools, and structured-output support divided by log-scaled blended price.
  • Cheapness (35%) — raw affordability. Free models score 10. Paid models use an inverse log curve anchored at $0.01 / Mtok.
  • Frontier (30%) — how close to the state of the art. Recent releases, long context windows, and premium pricing all contribute.
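The scoring above can be sketched in a few lines. The 35/35/30 weights are stated by the index; the shape of the cheapness curve is an inference — an inverse log10 curve anchored at $0.01 / Mtok with an assumed slope of 2 approximately reproduces the cheapness scores shown on the cards above, but the exact constant is not published:

```python
import math

def cheapness_score(blended_usd_per_mtok: float, slope: float = 2.0) -> float:
    """Inferred cheapness axis: free models score 10; paid models fall off
    on an inverse log10 curve anchored at $0.01 / Mtok. `slope` is an
    assumed tuning constant, not a published parameter."""
    if blended_usd_per_mtok <= 0:
        return 10.0  # free tier scores a perfect 10
    score = 10.0 - slope * math.log10(blended_usd_per_mtok / 0.01)
    return max(0.0, min(10.0, score))

def overall_score(value: float, cheap: float, frontier: float) -> float:
    """Weighted blend of the three axes: 35% value, 35% cheapness, 30% frontier."""
    return 0.35 * value + 0.35 * cheap + 0.30 * frontier
```

For example, Mistral Nemo's $0.025 blended price scores roughly 9.2 on cheapness under these assumptions, and its axis scores of 10.0 / 9.2 / 1.5 combine to roughly the 7.2 overall shown on its card.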

Blended price formula

  • Most production workloads are input-heavy, so the index uses a 3:1 blended price: (input × 0.75) + (output × 0.25).
  • All prices are normalized to dollars per million tokens. OpenRouter publishes per-token figures, which we multiply by 1,000,000 before display.
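In code, the normalization and blend above come down to two one-liners (a sketch; the function names are ours, not the pipeline's):

```python
def per_mtok(per_token_usd: float) -> float:
    """Scale a per-token price (as OpenRouter reports it) to dollars per million tokens."""
    return per_token_usd * 1_000_000

def blended_3to1(input_per_mtok: float, output_per_mtok: float) -> float:
    """3:1 input:output blend: (input x 0.75) + (output x 0.25)."""
    return 0.75 * input_per_mtok + 0.25 * output_per_mtok
```

Applied to Mistral Nemo's $0.020 input / $0.040 output, the blend gives the $0.025 shown on its card.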

Where does the pricing data come from?

Every model and price on this page is sourced from OpenRouter's public models API, which aggregates pricing directly from model providers including Anthropic, OpenAI, Google, Mistral, Meta, xAI, DeepSeek, and dozens of others. The pipeline re-fetches and re-scores the entire catalog once per day.

Why normalize to $/million tokens?

Model providers publish prices in inconsistent units — per 1K tokens, per million tokens, per character, sometimes per request. Comparing them directly is error-prone. Dollars per million tokens is the industry's most common reporting unit and makes cross-provider comparisons immediate and honest.
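A unit normalizer for the token-denominated cases could look like this (unit names are illustrative; per-character and per-request pricing require usage assumptions and are left out):

```python
# Multiplier from each published unit to dollars per million tokens.
UNIT_TO_MTOK = {
    "per_token": 1_000_000,
    "per_1k_tokens": 1_000,
    "per_mtok": 1,
}

def to_dollars_per_mtok(price_usd: float, unit: str) -> float:
    """Convert a published token price to the index's common $/Mtok unit."""
    return price_usd * UNIT_TO_MTOK[unit]
```

So $0.0005 per 1K tokens and $0.0000005 per token both normalize to the same $0.50 / Mtok, which is the point of the common unit.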

What does "3:1 blended" mean?

Most production LLM workloads are input-heavy — context, RAG retrievals, system prompts — while output is comparatively short. A 3:1 input:output ratio is the informal industry convention for producing a single number that reflects typical cost: (input × 0.75) + (output × 0.25). Your actual ratio may differ; always check both input and output columns for workloads with long generations.
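To see how much the ratio matters, parameterize the blend by input share and plug in Devstral 2's card prices ($0.40 input, $2.00 output); this is a sketch for illustration, not the index's code:

```python
def blended(input_per_mtok: float, output_per_mtok: float,
            input_share: float = 0.75) -> float:
    """Blended $/Mtok for a given input:output token mix."""
    return input_share * input_per_mtok + (1 - input_share) * output_per_mtok

# Devstral 2: $0.40 input, $2.00 output per Mtok
standard = blended(0.40, 2.00)                    # 3:1 input-heavy -> $0.80
generation_heavy = blended(0.40, 2.00, 0.25)      # 1:3 output-heavy -> $1.60
```

The same model is twice as expensive under an output-heavy mix, which is why the output column matters for long-generation workloads.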

What's the cheapest LLM right now?

The cheapest paid model as of the latest scan is Gemma 3n 4B from Google at $0.025 per million tokens (3:1 blended). Sort by "Cheapest" above for the full ranking. Many providers also offer free-tier variants of their models, which score a perfect 10 on the cheapness axis.

Is this affiliated with OpenRouter or any provider?

No. MegaOne AI is independent. OpenRouter is used as a public data source because their models API is the most complete and up-to-date LLM catalog available, but this directory is not operated by OpenRouter, and every model is scored by the same automated criteria regardless of provider.

How often does the price index update?

A full re-fetch, re-score, and daily snapshot runs once per 24 hours. Snapshots are written to a history table so we can build price-over-time charts and detect drops. New models typically appear within 24 hours of being added to OpenRouter.
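Drop detection can be sketched as a diff between two daily snapshots keyed by model slug (a simplification; the real pipeline presumably queries the history table):

```python
def detect_price_drops(
    previous: dict[str, float], current: dict[str, float]
) -> list[tuple[str, float, float]]:
    """Return (model, old $/Mtok, new $/Mtok) for every model that got cheaper
    between two snapshots. Models new to `current` are not drops."""
    return [
        (model, previous[model], price)
        for model, price in current.items()
        if model in previous and price < previous[model]
    ]
```

Running it on yesterday's and today's blended prices yields the "which models got cheaper" list the weekly briefing is built from.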

Is it free to use?

Yes. Browsing, filtering, sorting, and searching the entire price index is free. The weekly email briefing is free. There is no account required and no paywall.