The LLM Price Index
Every major large language model, normalized to dollars per million tokens. Scored on value, cheapness, and frontier capability. Independent, automated, refreshed daily.
What this is
The LLM Price Index is a live, independently maintained price comparison for every large language model offered through the OpenRouter catalog. Input and output pricing is normalized to dollars per million tokens, blended 3:1 to produce a single comparable figure, and scored 0–10 on three axes: value, cheapness, and frontier capability. The cheapest paid model right now is Gemma 3n 4B at $0.025 per million tokens (3:1 blended).
Gemma 4 26B A4B (free)
9.0/10Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Despite 25.2B total parameters, only 3.8B activate per…
Gemma 4 31B (free)
9.0/10Gemma 4 31B Instruct is Google DeepMind's 30.7B dense multimodal model supporting text and image input with text output. Features a 256K…
Qwen3.5-Flash
8.2/10The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts…
Free Models Router
8.2/10The simplest way to get free inference. openrouter/free is a router that selects free models at random from the models available on…
Gemma 4 26B A4B
8.1/10Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Despite 25.2B total parameters, only 3.8B activate per…
Nemotron Nano 12B 2 VL (free)
8.1/10NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a…
Qwen3.5-9B
8.0/10Qwen3.5-9B is a multimodal foundation model from the Qwen3.5 family, designed to deliver strong reasoning, coding, and visual understanding in an efficient…
Gemma 4 31B
8.0/10Gemma 4 31B Instruct is Google DeepMind's 30.7B dense multimodal model supporting text and image input with text output. Features a 256K…
Qwen3.6 Plus
8.0/10Qwen 3.6 Plus builds on a hybrid architecture that combines efficient linear attention with sparse mixture-of-experts routing, enabling strong scalability and high-performance…
Seed-2.0-Mini
7.8/10Seed-2.0-mini targets latency-sensitive, high-concurrency, and cost-sensitive scenarios, emphasizing fast response and flexible inference deployment. It delivers performance comparable to ByteDance-Seed-1.6, supports 256k…
Qwen3.5 Plus 2026-02-15
7.7/10The Qwen3.5 native vision-language series Plus models are built on a hybrid architecture that integrates linear attention mechanisms with sparse mixture-of-experts models,…
Mistral Small 4
7.7/10Mistral Small 4 is the next major release in the Mistral Small family, unifying the capabilities of several flagship Mistral models into…
Grok 4.1 Fast
7.5/10Grok 4.1 Fast is xAI's best agentic tool calling model that shines in real-world use cases like customer support and deep research.…
Gemma 3 27B (free)
7.5/10Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages,…
Seed 1.6 Flash
7.4/10Seed 1.6 Flash is an ultra-fast multimodal deep thinking model by ByteDance Seed, supporting both text and visual understanding. It features a…
Qwen3.5-35B-A3B
7.4/10The Qwen3.5 Series 35B-A3B is a native vision-language model designed with a hybrid architecture that integrates linear attention mechanisms and a sparse…
Ministral 3 8B 2512
7.4/10A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.
Reka Edge
7.4/10Reka Edge is an extremely efficient 7B multimodal vision-language model that accepts image/video+text inputs and generates text outputs. This model is optimized…
Ministral 3 3B 2512
7.4/10The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.
GPT-5.4 Nano
7.4/10GPT-5.4 nano is the most lightweight and cost-efficient variant of the GPT-5.4 family, optimized for speed-critical and high-volume tasks. It supports text…
GPT-4.1 Nano
7.3/10For tasks that demand low latency, GPT‑4.1 nano is the fastest and cheapest model in the GPT-4.1 series. It delivers exceptional performance…
Ministral 3 14B 2512
7.3/10The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small…
Grok 4.20
7.3/10Grok 4.20 is xAI's newest flagship model with industry-leading speed and agentic tool calling capabilities. It combines the lowest hallucination rate on…
Grok 4 Fast
7.2/10Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. It comes in two flavors:…
Qwen3.5-27B
7.2/10The Qwen3.5 27B native vision-language Dense model incorporates a linear attention mechanism, delivering fast response times while balancing inference speed and performance.…
Qwen3 VL 8B Instruct
7.2/10Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It…
Gemma 3 4B (free)
7.2/10Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages,…
Gemma 3 12B (free)
7.2/10Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages,…
Qwen3 VL 32B Instruct
7.1/10Qwen3-VL-32B-Instruct is a large-scale multimodal vision-language model designed for high-precision understanding and reasoning across text, images, and video. With 32 billion parameters,…
GPT-5 Nano
7.1/10GPT-5-Nano is the smallest and fastest variant in the GPT-5 system, optimized for developer tools, rapid interactions, and ultra-low latency environments. While…
Mistral Small 3.2 24B
7.0/10Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the…
Nova 2 Lite
7.0/10Nova 2 Lite is a fast, cost-effective reasoning model for everyday workloads that can process text, images, and videos to generate text.…
Mistral Small 3.1 24B
7.0/10Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities.…
Gemma 3 4B
7.0/10Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages,…
GPT-4.1 Mini
6.9/10GPT-4.1 Mini is a mid-sized model delivering performance competitive with GPT-4o at substantially lower latency and cost. It retains a 1 million…
Llama 4 Maverick
6.9/10Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128…
Seed-2.0-Lite
6.9/10Seed-2.0-Lite is a versatile, cost‑efficient enterprise workhorse that delivers strong multimodal and agent capabilities while offering noticeably lower latency, making it a…
Gemma 3 12B
6.9/10Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages,…
Nova Lite 1.0
6.9/10Amazon Nova Lite 1.0 is a very low-cost multimodal model from Amazon focused on fast processing of image, video, and text…
Qwen3.5-122B-A10B
6.9/10The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts…
Grok 4.20 Multi-Agent
6.8/10Grok 4.20 Multi-Agent is a variant of xAI’s Grok 4.20 designed for collaborative, agent-based workflows. Multiple agents operate in parallel to conduct…
Llama 4 Scout
6.8/10Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a…
Qwen3 VL 30B A3B Instruct
6.8/10Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Instruct variant optimizes instruction-following…
Qwen3 VL 235B A22B Instruct
6.8/10Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text generation with visual understanding across images and video. The Instruct model…
Gemma 3 27B
6.8/10Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages,…
Qwen3.5 397B A17B
6.7/10The Qwen3.5 series 397B-A17B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse…
MiniMax-01
6.5/10MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion parameters, with 45.9 billion parameters…
GPT-5.4
6.5/10GPT-5.4 is OpenAI’s latest frontier model, unifying the Codex and GPT lines into a single system. It features a 1M+ token context…
Free · No spam
Price drops, new models, deprecations
Every Tuesday: which models got cheaper, which launched, which got pulled. Five minutes. No filler.
Three axes, one overall score
- Value (35%) — capability per dollar. Context length, vision, tools, and structured-output support divided by log-scaled blended price.
- Cheapness (35%) — raw affordability. Free models score 10. Paid models use an inverse log curve anchored at $0.01 / Mtok.
- Frontier (30%) — how close to the state of the art. Recent releases, long context windows, and premium pricing all contribute.
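The cheapness curve can be sketched in code. The $0.01/Mtok anchor comes from the description above; the upper bound where the score bottoms out ($1,000/Mtok here) and the exact shape of the decay are illustrative assumptions, not the production scoring code:

```python
import math

def cheapness_score(blended_usd_per_mtok: float) -> float:
    """Inverse log curve anchored at $0.01/Mtok.

    Free models (price 0) score a perfect 10. Paid prices at or below
    the anchor also score 10; the score then falls off linearly in
    log-price, reaching 0 at an assumed $1,000/Mtok ceiling.
    """
    if blended_usd_per_mtok <= 0:
        return 10.0  # free models
    anchor, ceiling = 0.01, 1000.0  # anchor from the text; ceiling assumed
    if blended_usd_per_mtok <= anchor:
        return 10.0
    decades = math.log10(ceiling / anchor)           # 5 decades of price
    used = math.log10(blended_usd_per_mtok / anchor)
    return round(max(0.0, 10.0 * (1.0 - used / decades)), 1)
```

On this illustrative curve, a $0.025/Mtok blended price scores about 9.2, while a $1.00/Mtok price scores 6.0.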
Blended price formula
- Most production workloads are input-heavy, so the index uses a 3:1 blended price: (input × 0.75) + (output × 0.25).
- All prices are normalized to dollars per million tokens. OpenRouter publishes per-token figures, which we multiply by 1,000,000 before display.
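The two steps above reduce to a one-line function. Assuming OpenRouter-style per-token dollar prices as input:

```python
def blended_usd_per_mtok(prompt_per_token: float,
                         completion_per_token: float) -> float:
    """3:1 blended price in dollars per million tokens.

    OpenRouter publishes per-token dollar figures, so both inputs are
    scaled by 1,000,000 before the 0.75 / 0.25 blend.
    """
    input_mtok = prompt_per_token * 1_000_000
    output_mtok = completion_per_token * 1_000_000
    return input_mtok * 0.75 + output_mtok * 0.25

# A model at $0.50/Mtok input and $1.50/Mtok output blends to ~$0.75/Mtok:
price = blended_usd_per_mtok(0.50e-6, 1.50e-6)
```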
Where does the pricing data come from?
Every model and price on this page is sourced from OpenRouter's public models API, which aggregates pricing directly from model providers including Anthropic, OpenAI, Google, Mistral, Meta, xAI, DeepSeek, and dozens of others. The pipeline re-fetches and re-scores the entire catalog once per day.
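The daily re-fetch step looks roughly like this. The `pricing.prompt` / `pricing.completion` per-token fields match OpenRouter's public models API, but the payload below is a trimmed, hypothetical sample (the prices are made up), not a live response:

```python
import json

# Trimmed, hypothetical sample shaped like the OpenRouter models API
# response; in production this JSON comes from a daily HTTP fetch.
payload = json.loads("""
{"data": [
  {"id": "google/gemma-3-4b-it",
   "pricing": {"prompt": "0.00000002", "completion": "0.00000004"}}
]}
""")

rows = []
for model in payload["data"]:
    prompt = float(model["pricing"]["prompt"])          # $ per input token
    completion = float(model["pricing"]["completion"])  # $ per output token
    # Normalize to $/Mtok and apply the 3:1 blend in one step.
    blended = (prompt * 0.75 + completion * 0.25) * 1_000_000
    rows.append({"id": model["id"], "blended_usd_per_mtok": round(blended, 4)})
```

Each row is then scored on the three axes and written to the daily snapshot.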
Why normalize to $/million tokens?
Model providers publish prices in inconsistent units — per 1K tokens, per million tokens, per character, sometimes per request. Comparing them directly is error-prone. Dollars per million tokens is the industry's most common reporting unit and makes cross-provider comparisons immediate and honest.
What does "3:1 blended" mean?
Most production LLM workloads are input-heavy — context, RAG retrievals, system prompts — while output is comparatively short. A 3:1 input:output ratio is the informal industry convention for producing a single number that reflects typical cost: (input × 0.75) + (output × 0.25). Your actual ratio may differ; always check both input and output columns for workloads with long generations.
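A quick worked example of why your own ratio matters, using illustrative prices rather than any specific model's:

```python
input_price, output_price = 1.00, 4.00  # $/Mtok, illustrative only

# Standard 3:1 input:output blend:
blended_3to1 = input_price * 0.75 + output_price * 0.25  # 1.75
# The same model on an output-heavy 1:3 workload costs ~86% more:
blended_1to3 = input_price * 0.25 + output_price * 0.75  # 3.25
```

Because output tokens are usually priced several times higher than input tokens, a long-generation workload can sit well above the headline blended figure.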
What's the cheapest LLM right now?
The cheapest paid model as of the latest scan is Gemma 3n 4B from Google at $0.025 per million tokens (3:1 blended). Sort by "Cheapest" above for the full ranking. Many providers also offer free-tier variants of their models, which score a perfect 10 on the cheapness axis.
Is this affiliated with OpenRouter or any provider?
No. MegaOne AI is independent. OpenRouter is used as a public data source because their models API is the most complete and up-to-date LLM catalog available, but this directory is not operated by OpenRouter, and models from competing providers are all scored by the same criteria.
How often does the price index update?
A full re-fetch, re-score, and daily snapshot runs once per 24 hours. Snapshots are written to a history table so we can build price-over-time charts and detect drops. New models typically appear within 24 hours of being added to OpenRouter.
Is it free to use?
Yes. Browsing, filtering, sorting, and searching the entire price index is free. The weekly email briefing is free. There is no account required and no paywall.