Live · updated 8 hr ago · 348 models tracked

The LLM Price Index

Every major large language model, normalized to dollars per million tokens. Scored on value, cheapness, and frontier capability. Independent, automated, refreshed daily.

Models 348
Providers 20+
Free Tier Models 27
Median Price / Mtok $0.628

What this is

The LLM Price Index is a live, independently maintained price comparison for every large language model offered through the OpenRouter catalog. Input and output pricing is normalized to dollars per million tokens, blended at a 3:1 input:output ratio to produce a single comparable figure, and scored 0–10 on three axes: value, cheapness, and frontier capability. The cheapest paid model right now is Gemma 3n 4B at $0.025 per million tokens (3:1 blended).

Nemotron 3 Super (free)

8.7/10
NVIDIA · nvidia/nemotron-3-super-120b-a12b:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy in…

10.0Value
10.0Cheap
5.5Frontier
Text Free Open weights 262K ctx Tools Reasoning

MiniMax M2.5 (free)

8.5/10
MiniMax · minimax/minimax-m2.5:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments,…

10.0Value
10.0Cheap
5.0Frontier
Text Free Open weights 197K ctx Tools Reasoning

Nemotron 3 Nano 30B A3B (free)

8.2/10
NVIDIA · nvidia/nemotron-3-nano-30b-a3b:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

NVIDIA Nemotron 3 Nano 30B A3B is a small language MoE model with highest compute efficiency and accuracy for developers to build…

10.0Value
10.0Cheap
4.0Frontier
Text Free Open weights 256K ctx Tools Reasoning

Trinity Large Preview (free)

8.1/10
Arcee AI · arcee-ai/trinity-large-preview:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

Trinity-Large-Preview is a frontier-scale open-weight language model from Arcee, built as a 400B-parameter sparse Mixture-of-Experts with 13B active parameters per token using…

10.0Value
10.0Cheap
3.5Frontier
Text Free Open weights 131K ctx Tools

LFM2.5-1.2B-Instruct (free)

7.8/10
Liquid · liquid/lfm-2.5-1.2b-instruct:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B parameter footprint,…

10.0Value
10.0Cheap
2.5Frontier
Text Free Open weights 33K ctx

gpt-oss-20b (free)

7.8/10
OpenAI · openai/gpt-oss-20b:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with…

10.0Value
10.0Cheap
2.5Frontier
Text Free Open weights 131K ctx Tools Reasoning

GLM 4.5 Air (free)

7.8/10
Z.ai · z-ai/glm-4.5-air:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

GLM-4.5-Air is the lightweight variant of our latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts…

10.0Value
10.0Cheap
2.5Frontier
Text Free Open weights 131K ctx Tools Reasoning

Granite 4.0 Micro

7.6/10
IBM Granite · ibm-granite/granite-4.0-h-micro
$0.017 Input /Mtok
$0.110 Output /Mtok
$0.040 Blended 3:1

Granite-4.0-H-Micro is a 3B parameter model from the Granite 4 family. These models are the latest in a series of models…

9.9Value
8.8Cheap
3.5Frontier
Text Open weights 131K ctx #6 cheapest

Nemotron 3 Nano 30B A3B

7.5/10
NVIDIA · nvidia/nemotron-3-nano-30b-a3b
$0.050 Input /Mtok
$0.200 Output /Mtok
$0.088 Blended 3:1

NVIDIA Nemotron 3 Nano 30B A3B is a small language MoE model with highest compute efficiency and accuracy for developers to build…

10.0Value
8.1Cheap
4.0Frontier
Text Open weights 262K ctx Tools Reasoning #28 cheapest

Uncensored (free)

7.5/10
Cognitive Computations · cognitivecomputations/dolphin-mistral-24b-venice-edition:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

Venice Uncensored Dolphin Mistral 24B Venice Edition is a fine-tuned variant of Mistral-Small-24B-Instruct-2501, developed by dphn.ai in collaboration with Venice.ai. This model…

10.0Value
10.0Cheap
1.5Frontier
Text Free Open weights 33K ctx

MiMo-V2-Flash

7.4/10
Xiaomi · xiaomi/mimo-v2-flash
$0.090 Input /Mtok
$0.290 Output /Mtok
$0.140 Blended 3:1

MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active…

10.0Value
7.7Cheap
4.0Frontier
Text Open weights 262K ctx Tools Reasoning

GLM 4.7 Flash

7.4/10
Z.ai · z-ai/glm-4.7-flash
$0.060 Input /Mtok
$0.400 Output /Mtok
$0.145 Blended 3:1

As a 30B-class SOTA model, GLM-4.7-Flash offers a new option that balances performance and efficiency. It is further optimized for agentic coding…

10.0Value
7.7Cheap
4.0Frontier
Text Open weights 203K ctx Tools Reasoning

Gemma 3n 2B (free)

7.3/10
Google · google/gemma-3n-e2b-it:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size…

10.0Value
10.0Cheap
1.0Frontier
Text Free Open weights 8K ctx

Gemma 3n 4B (free)

7.3/10
Google · google/gemma-3n-e4b-it:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal…

10.0Value
10.0Cheap
1.0Frontier
Text Free Open weights 8K ctx

Qwen3 235B A22B Instruct 2507

7.3/10
Qwen · qwen/qwen3-235b-a22b-2507
$0.071 Input /Mtok
$0.100 Output /Mtok
$0.078 Blended 3:1

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is…

10.0Value
8.2Cheap
3.0Frontier
Text Open weights 262K ctx Tools Reasoning #26 cheapest

Nemotron 3 Super

7.3/10
NVIDIA · nvidia/nemotron-3-super-120b-a12b
$0.100 Input /Mtok
$0.500 Output /Mtok
$0.200 Blended 3:1

NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy in…

8.6Value
7.4Cheap
5.5Frontier
Text Open weights 262K ctx Tools Reasoning

gpt-oss-20b

7.2/10
OpenAI · openai/gpt-oss-20b
$0.030 Input /Mtok
$0.140 Output /Mtok
$0.058 Blended 3:1

gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with…

10.0Value
8.5Cheap
2.5Frontier
Text Open weights 131K ctx Tools Reasoning #15 cheapest

Step 3.5 Flash

7.2/10
StepFun · stepfun/step-3.5-flash
$0.100 Input /Mtok
$0.300 Output /Mtok
$0.150 Blended 3:1

Step 3.5 Flash is StepFun's most capable open-source foundation model. Built on a sparse Mixture of Experts (MoE) architecture, it selectively activates…

9.5Value
7.7Cheap
4.0Frontier
Text Open weights 262K ctx Tools Reasoning

Gemma 3n 4B

7.2/10
Google · google/gemma-3n-e4b-it
$0.020 Input /Mtok
$0.040 Output /Mtok
$0.025 Blended 3:1

Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal…

10.0Value
9.2Cheap
1.5Frontier
Text Open weights 33K ctx #1 cheapest

Mistral Nemo

7.2/10
Mistral · mistralai/mistral-nemo
$0.020 Input /Mtok
$0.040 Output /Mtok
$0.025 Blended 3:1

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting…

10.0Value
9.2Cheap
1.5Frontier
Text Open weights 131K ctx Tools #2 cheapest

Llama 3.3 70B Instruct (free)

7.2/10
Meta · meta-llama/llama-3.3-70b-instruct:free
Free Input /Mtok
Free Output /Mtok
Free Blended 3:1

The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model at the 70B size (text in/text out).…

10.0Value
10.0Cheap
0.5Frontier
Text Free Open weights 66K ctx Tools

Llama Guard 3 8B

7.1/10
Meta · meta-llama/llama-guard-3-8b
$0.020 Input /Mtok
$0.060 Output /Mtok
$0.030 Blended 3:1

Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to…

10.0Value
9.1Cheap
1.5Frontier
Text Open weights 131K ctx #4 cheapest

Qwen-Turbo

6.9/10
Qwen · qwen/qwen-turbo
$0.033 Input /Mtok
$0.130 Output /Mtok
$0.057 Blended 3:1

Qwen-Turbo, based on Qwen2.5, is a 1M context model that provides fast speed and low cost, suitable for simple tasks.

10.0Value
8.5Cheap
1.5Frontier
Text 131K ctx Tools #14 cheapest

Nova Micro 1.0

6.9/10
Amazon · amazon/nova-micro-v1
$0.035 Input /Mtok
$0.140 Output /Mtok
$0.061 Blended 3:1

Amazon Nova Micro 1.0 is a text-only model that delivers the lowest latency responses in the Amazon Nova family of models at…

10.0Value
8.4Cheap
1.5Frontier
Text 128K ctx Tools #18 cheapest

GLM 4 32B

6.8/10
Z.ai · z-ai/glm-4-32b
$0.100 Input /Mtok
$0.100 Output /Mtok
$0.100 Blended 3:1

GLM 4 32B is a cost-effective foundation language model. It can efficiently perform complex tasks and has significantly enhanced capabilities in tool…

9.2Value
8.0Cheap
2.5Frontier
Text 128K ctx Tools #32 cheapest

Llama 3.1 8B Instruct

6.7/10
Meta · meta-llama/llama-3.1-8b-instruct
$0.020 Input /Mtok
$0.050 Output /Mtok
$0.028 Blended 3:1

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and…

10.0Value
9.1Cheap
0.0Frontier
Text Open weights 16K ctx Tools #3 cheapest

MiMo-V2-Pro

6.7/10
Xiaomi · xiaomi/mimo-v2-pro
$1.00 Input /Mtok
$3.00 Output /Mtok
$1.50 Blended 3:1

MiMo-V2-Pro is Xiaomi's flagship foundation model, featuring over 1T total parameters and a 1M context length, deeply optimized for agentic scenarios. It…

7.0Value
5.7Cheap
7.5Frontier
Text 1.0M ctx Tools Reasoning

Llama 3 8B Instruct

6.6/10
Meta · meta-llama/llama-3-8b-instruct
$0.030 Input /Mtok
$0.040 Output /Mtok
$0.033 Blended 3:1

Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for…

10.0Value
9.0Cheap
0.0Frontier
Text Open weights 8K ctx Tools #5 cheapest

Devstral Small 1.1

6.6/10
Mistral · mistralai/devstral-small
$0.100 Input /Mtok
$0.300 Output /Mtok
$0.150 Blended 3:1

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All…

9.2Value
7.7Cheap
2.5Frontier
Text Open weights 131K ctx Tools

Qwen-Plus

6.6/10
Qwen · qwen/qwen-plus
$0.260 Input /Mtok
$0.780 Output /Mtok
$0.390 Blended 3:1

Qwen-Plus, based on the Qwen2.5 foundation model, is a 131K context model with a balanced performance, speed, and cost combination.

9.6Value
6.8Cheap
3.0Frontier
Text 1.0M ctx Tools

Qwen2.5 7B Instruct

6.5/10
Qwen · qwen/qwen-2.5-7b-instruct
$0.040 Input /Mtok
$0.100 Output /Mtok
$0.055 Blended 3:1

Qwen2.5 7B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: - Significantly more knowledge…

9.7Value
8.5Cheap
0.5Frontier
Text Open weights 33K ctx Tools #13 cheapest

MiniMax M2.5

6.4/10
MiniMax · minimax/minimax-m2.5
$0.118 Input /Mtok
$0.990 Output /Mtok
$0.336 Blended 3:1

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments,…

7.1Value
7.0Cheap
5.0Frontier
Text Open weights 197K ctx Tools Reasoning

DeepSeek V3.1 Nex N1

6.4/10
Nex Agi · nex-agi/deepseek-v3.1-nex-n1
$0.135 Input /Mtok
$0.500 Output /Mtok
$0.226 Blended 3:1

DeepSeek V3.1 Nex-N1 is the flagship release of the Nex-N1 series — a post-trained model designed to highlight agent autonomy, tool use,…

8.0Value
7.3Cheap
3.5Frontier
Text Open weights 131K ctx Tools

Tongyi DeepResearch 30B A3B

6.4/10
Alibaba · alibaba/tongyi-deepresearch-30b-a3b
$0.090 Input /Mtok
$0.450 Output /Mtok
$0.180 Blended 3:1

Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters activating only 3 billion per…

8.6Value
7.5Cheap
2.5Frontier
Text Open weights 131K ctx Tools Reasoning

Llama 3.3 70B Instruct

6.3/10
Meta · meta-llama/llama-3.3-70b-instruct
$0.100 Input /Mtok
$0.320 Output /Mtok
$0.155 Blended 3:1

The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model at the 70B size (text in/text out).…

9.1Value
7.6Cheap
1.5Frontier
Text Open weights 131K ctx Tools

GLM 5.1

6.3/10
Z.ai · z-ai/glm-5.1
$0.950 Input /Mtok
$3.15 Output /Mtok
$1.50 Blended 3:1

GLM-5.1 delivers a major leap in coding capability, with particularly significant gains in handling long-horizon tasks. Unlike previous models built around minute-level…

5.9Value
5.7Cheap
7.5Frontier
Text Open weights 203K ctx Tools Reasoning

Solar Pro 3

6.2/10
Upstage · upstage/solar-pro-3
$0.150 Input /Mtok
$0.600 Output /Mtok
$0.263 Blended 3:1

Solar Pro 3 is Upstage's powerful Mixture-of-Experts (MoE) language model. With 102B total parameters and 12B active parameters per forward pass, it…

7.6Value
7.2Cheap
3.5Frontier
Text 128K ctx Tools Reasoning

MiniMax M2.7

6.2/10
MiniMax · minimax/minimax-m2.7
$0.300 Input /Mtok
$1.20 Output /Mtok
$0.525 Blended 3:1

MiniMax-M2.7 is a next-generation large language model designed for autonomous, real-world productivity and continuous improvement. Built to actively participate in its own…

6.5Value
6.6Cheap
5.5Frontier
Text Open weights 205K ctx Tools Reasoning

LFM2-24B-A2B

6.1/10
Liquid · liquid/lfm-2-24b-a2b
$0.030 Input /Mtok
$0.120 Output /Mtok
$0.053 Blended 3:1

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter…

5.6Value
8.6Cheap
4.0Frontier
Text Open weights 33K ctx #12 cheapest

DeepSeek V3.2 Exp

5.8/10
DeepSeek · deepseek/deepseek-v3.2-exp
$0.270 Input /Mtok
$0.410 Output /Mtok
$0.305 Blended 3:1

DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek…

7.3Value
7.0Cheap
2.5Frontier
Text Open weights 164K ctx Tools Reasoning

GLM 4.5 Air

5.7/10
Z.ai · z-ai/glm-4.5-air
$0.130 Input /Mtok
$0.850 Output /Mtok
$0.310 Blended 3:1

GLM-4.5-Air is the lightweight variant of our latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts…

7.2Value
7.0Cheap
2.5Frontier
Text Open weights 131K ctx Tools Reasoning

INTELLECT-3

5.7/10
Prime Intellect · prime-intellect/intellect-3
$0.200 Input /Mtok
$1.10 Output /Mtok
$0.425 Blended 3:1

INTELLECT-3 is a 106B-parameter Mixture-of-Experts model (12B active) post-trained from GLM-4.5-Air-Base using supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL). It…

6.6Value
6.7Cheap
3.5Frontier
Text Open weights 131K ctx Tools Reasoning

Devstral 2 2512

5.7/10
Mistral · mistralai/devstral-2512
$0.400 Input /Mtok
$2.00 Output /Mtok
$0.800 Blended 3:1

Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting…

6.7Value
6.2Cheap
4.0Frontier
Text Open weights 262K ctx Tools

MiniMax M2.1

5.7/10
MiniMax · minimax/minimax-m2.1
$0.290 Input /Mtok
$0.950 Output /Mtok
$0.455 Blended 3:1

MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated…

6.5Value
6.7Cheap
3.5Frontier
Text Open weights 197K ctx Tools Reasoning

LongCat Flash Chat

5.6/10
Meituan · meituan/longcat-flash-chat
$0.200 Input /Mtok
$0.800 Output /Mtok
$0.350 Blended 3:1

LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per input.…

7.0Value
6.9Cheap
2.5Frontier
Text Open weights 131K ctx Tools

DeepSeek V3.1 Terminus

5.6/10
DeepSeek · deepseek/deepseek-v3.1-terminus
$0.210 Input /Mtok
$0.790 Output /Mtok
$0.355 Blended 3:1

DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that maintains the model's original capabilities while addressing issues reported by users, including language…

7.0Value
6.9Cheap
2.5Frontier
Text Open weights 164K ctx Tools Reasoning

GLM 5 Turbo

5.6/10
Z.ai · z-ai/glm-5-turbo
$1.20 Input /Mtok
$4.00 Output /Mtok
$1.90 Blended 3:1

GLM-5 Turbo is a new model from Z.ai designed for fast inference and strong performance in agent-driven environments such as OpenClaw scenarios.…

4.9Value
5.4Cheap
6.5Frontier
Text 203K ctx Tools Reasoning

Mistral Small Creative

5.6/10
Mistral · mistralai/mistral-small-creative
$0.100 Input /Mtok
$0.300 Output /Mtok
$0.150 Blended 3:1

Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and…

6.1Value
7.7Cheap
2.5Frontier
Text 33K ctx Tools


Price drops, new models, deprecations

Every Tuesday: which models got cheaper, which launched, which got pulled. Five minutes. No filler.

Three axes, one overall score

  • Value (35%) — capability per dollar. A capability composite (context length, vision, tools, and structured-output support) divided by the log-scaled blended price.
  • Cheapness (35%) — raw affordability. Free models score 10. Paid models use an inverse log curve anchored at $0.01 / Mtok.
  • Frontier (30%) — how close to the state of the art. Recent releases, long context windows, and premium pricing all contribute.
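Under the stated weights, the overall score is a straight weighted sum of the three 0–10 subscores. A minimal sketch (the subscore curves themselves are not reproduced here, only the 35/35/30 combination):

```python
def overall_score(value: float, cheap: float, frontier: float) -> float:
    """Combine the three 0-10 axes using the stated 35/35/30 split."""
    return 0.35 * value + 0.35 * cheap + 0.30 * frontier

# Nemotron 3 Super (free) scores 10.0 / 10.0 / 5.5 on the three axes,
# which the weighted sum turns into roughly 8.65 -- displayed as 8.7/10.
score = overall_score(10.0, 10.0, 5.5)
```

The same arithmetic reproduces the other cards above, e.g. MiniMax M2.5 (free) at 10.0 / 10.0 / 5.0 lands on 8.5/10.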

Blended price formula

  • Most production workloads are input-heavy, so the index uses a 3:1 blended price: (input × 0.75) + (output × 0.25).
  • All prices are normalized to dollars per million tokens. OpenRouter publishes per-token figures, which we multiply by 1,000,000 before display.
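The two bullets above combine into a few lines. A sketch, assuming per-token dollar prices as inputs:

```python
def blended_per_mtok(prompt_per_token: float, completion_per_token: float) -> float:
    """Scale per-token prices to $/Mtok, then apply the 3:1 blend."""
    input_mtok = prompt_per_token * 1_000_000
    output_mtok = completion_per_token * 1_000_000
    return input_mtok * 0.75 + output_mtok * 0.25

# Gemma 3n 4B: $0.020 in / $0.040 out per Mtok -> $0.025 blended,
# matching the figure on its card.
price = blended_per_mtok(0.000_000_02, 0.000_000_04)
```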

Where does the pricing data come from?

Every model and price on this page is sourced from OpenRouter's public models API, which aggregates pricing directly from model providers including Anthropic, OpenAI, Google, Mistral, Meta, xAI, DeepSeek, and dozens of others. The pipeline re-fetches and re-scores the entire catalog once per day.
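To illustrate the normalization step, here is a sketch that converts one record shaped like the models API's output into the index's $/Mtok figures. It assumes `pricing.prompt` and `pricing.completion` are per-token dollar amounts encoded as strings, as in OpenRouter's public API; the record below is a hypothetical example whose values happen to match Mistral Nemo's listing:

```python
def normalize(model: dict) -> dict:
    """Convert a models-API-style record to $/Mtok plus the 3:1 blend."""
    input_mtok = float(model["pricing"]["prompt"]) * 1_000_000
    output_mtok = float(model["pricing"]["completion"]) * 1_000_000
    return {
        "id": model["id"],
        "input_mtok": input_mtok,
        "output_mtok": output_mtok,
        "blended_mtok": input_mtok * 0.75 + output_mtok * 0.25,
    }

record = {"id": "mistralai/mistral-nemo",
          "pricing": {"prompt": "0.00000002", "completion": "0.00000004"}}
row = normalize(record)  # blended_mtok comes out near 0.025
```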

Why normalize to $/million tokens?

Model providers publish prices in inconsistent units — per 1K tokens, per million tokens, per character, sometimes per request. Comparing them directly is error-prone. Dollars per million tokens is the industry's most common reporting unit and makes cross-provider comparisons immediate and honest.

What does "3:1 blended" mean?

Most production LLM workloads are input-heavy — context, RAG retrievals, system prompts — while output is comparatively short. A 3:1 input:output ratio is the informal industry convention for producing a single number that reflects typical cost: (input × 0.75) + (output × 0.25). Your actual ratio may differ; always check both input and output columns for workloads with long generations.
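Because the blend weights the two columns unevenly, a model's effective cost shifts with your workload mix. A small sketch that recomputes the blend for an arbitrary input share (0.75 being the index's 3:1 default):

```python
def blended(input_mtok: float, output_mtok: float, input_share: float = 0.75) -> float:
    """Blend $/Mtok input and output prices for a given input fraction."""
    return input_mtok * input_share + output_mtok * (1.0 - input_share)

# GLM 4.7 Flash ($0.06 in / $0.40 out): cheap on the 3:1 blend,
# noticeably pricier for a generation-heavy 1:1 workload.
at_3_to_1 = blended(0.06, 0.40)                    # the listed $0.145
at_1_to_1 = blended(0.06, 0.40, input_share=0.5)   # about $0.23
```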

What's the cheapest LLM right now?

The cheapest paid model as of the latest scan is Gemma 3n 4B from Google at $0.025 per million tokens (3:1 blended). Sort by "Cheapest" above for the full ranking. Many providers also offer free-tier variants of their models, which score a perfect 10 on the cheapness axis.

Is this affiliated with OpenRouter or any provider?

No. MegaOne AI is independent. OpenRouter is used as a public data source because their models API is the most complete and up-to-date LLM catalog available, but this directory is not operated by OpenRouter and we rate all models — including ones that compete with one another.

How often does the price index update?

A full re-fetch, re-score, and daily snapshot runs once per 24 hours. Snapshots are written to a history table so we can build price-over-time charts and detect drops. New models typically appear within 24 hours of being added to OpenRouter.
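The drop detection described here can be sketched as a diff of two daily snapshots. The snapshot shape (model id → blended $/Mtok) and the 1% threshold are illustrative assumptions, not the pipeline's actual schema:

```python
def price_drops(yesterday: dict, today: dict, min_pct: float = 1.0) -> list:
    """Return (model_id, old, new, pct_drop) for blended prices that fell."""
    drops = []
    for model_id, old_price in yesterday.items():
        new_price = today.get(model_id)
        if new_price is None or old_price <= 0:
            continue  # model was pulled, or is free: nothing to compare
        pct = (old_price - new_price) / old_price * 100
        if pct >= min_pct:
            drops.append((model_id, old_price, new_price, round(pct, 1)))
    return sorted(drops, key=lambda d: -d[3])

# Hypothetical snapshots of blended $/Mtok prices:
drops = price_drops({"a/x": 0.20, "b/y": 0.50}, {"a/x": 0.15, "b/y": 0.50})
```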

Is it free to use?

Yes. Browsing, filtering, sorting, and searching the entire price index is free. The weekly email briefing is free. There is no account required and no paywall.