The LLM Price Index
Every major large language model, normalized to dollars per million tokens. Scored on value, cheapness, and frontier capability. Independent, automated, refreshed daily.
What this is
The LLM Price Index is a live, independently maintained price comparison for every large language model offered through the OpenRouter catalog. Input and output pricing is normalized to dollars per million tokens, blended 3:1 to produce a single comparable figure, and scored 0–10 on three axes: value, cheapness, and frontier capability. The cheapest paid model right now is Gemma 3n 4B at $0.025 per million tokens (3:1 blended).
GPT-5 Pro
4.0/10 · GPT-5 Pro is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It is optimized for complex…
GPT-4o (extended)
4.0/10 · GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence…
o1
3.9/10 · The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 model series…
GPT-4 Turbo
3.9/10 · The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to…
GPT-4o Search Preview
3.6/10 · GPT-4o Search Preview is a specialized model for web search in Chat Completions. It is trained to understand and execute web search queries.
GPT-4 Turbo Preview
3.5/10 · The preview GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Dec…
GPT-4 Turbo (older v1106)
3.5/10 · The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to…
GPT-3.5 Turbo (older v0613)
3.5/10 · GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional…
GPT-3.5 Turbo
3.5/10 · GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional…
GPT-3.5 Turbo 16k
3.1/10 · This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single…
GPT-3.5 Turbo Instruct
2.8/10 · This model is a variant of GPT-3.5 Turbo tuned for instructional prompts and omitting chat-related optimizations. Training data: up to Sep 2021.
o1-pro
2.6/10 · The o1 series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o1-pro model…
GPT-4 (older v0314)
2.3/10 · GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training…
GPT-4
2.3/10 · OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due…
Free · No spam
Price drops, new models, deprecations
Every Tuesday: which models got cheaper, which launched, which got pulled. Five minutes. No filler.
Three axes, one overall score
- Value (35%) — capability per dollar. Context length, vision, tools, and structured-output support divided by log-scaled blended price.
- Cheapness (35%) — raw affordability. Free models score 10. Paid models use an inverse log curve anchored at $0.01 / Mtok.
- Frontier (30%) — how close to the state of the art. Recent releases, long context windows, and premium pricing all contribute.
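The three axes combine into one number with the published 35/35/30 weights. A minimal sketch of that blend, with one illustrative guess at the cheapness curve: the index describes it only as "inverse log, anchored at $0.01 / Mtok", so the slope used below (about two points per decade of price) is an assumption, not the index's actual curve.

```python
import math

def cheapness_sketch(blended_per_mtok: float) -> float:
    """Illustrative cheapness score: 10 at $0.01/Mtok, falling with log price.

    The real index's curve is not published; the 2-points-per-decade slope
    here is an assumption for demonstration only.
    """
    if blended_per_mtok == 0:
        return 10.0  # free models score a perfect 10
    score = 10.0 - 2.0 * math.log10(blended_per_mtok / 0.01)
    return max(0.0, min(10.0, score))

def overall(value: float, cheapness: float, frontier: float) -> float:
    """Weighted overall score from three 0-10 axis scores (35/35/30 split)."""
    return round(0.35 * value + 0.35 * cheapness + 0.30 * frontier, 1)

print(cheapness_sketch(0.01))  # 10.0
print(overall(6.0, 8.0, 2.0))  # 5.5
```

Only the weights are taken from the page above; the axis scores themselves come from the index's own pipeline.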
Blended price formula
- Most production workloads are input-heavy, so the index uses a 3:1 blended price: (input × 0.75) + (output × 0.25).
- All prices are normalized to dollars per million tokens. OpenRouter publishes per-token figures, which we multiply by 1,000,000 before display.
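Both steps together, as a small sketch: scale the per-token figures to dollars per million tokens, then apply the 3:1 weights. The example prices are illustrative, not any specific model's.

```python
def blended_price_per_mtok(input_per_token: float, output_per_token: float) -> float:
    """3:1 blended price in $/Mtok from per-token dollar prices."""
    input_mtok = input_per_token * 1_000_000   # normalize to $/Mtok
    output_mtok = output_per_token * 1_000_000
    return input_mtok * 0.75 + output_mtok * 0.25

# e.g. $0.000001 per input token and $0.000004 per output token:
print(blended_price_per_mtok(0.000001, 0.000004))  # 1.75  ($1.75/Mtok)
```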
Where does the pricing data come from?
Every model and price on this page is sourced from OpenRouter's public models API, which aggregates pricing directly from model providers including Anthropic, OpenAI, Google, Mistral, Meta, xAI, DeepSeek, and dozens of others. The pipeline re-fetches and re-scores the entire catalog once per day.
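A sketch of the normalization step applied to that API's output. The payload below is a hand-made miniature of the response shape from `GET https://openrouter.ai/api/v1/models` at the time of writing (pricing fields arrive as strings, in dollars per token); the model ids and prices shown are illustrative.

```python
import json

# Miniature stand-in for the OpenRouter models API response (illustrative values).
sample_response = json.loads("""
{"data": [
  {"id": "openai/gpt-4o", "pricing": {"prompt": "0.0000025", "completion": "0.00001"}},
  {"id": "google/gemma-3n-e4b-it", "pricing": {"prompt": "0.00000002", "completion": "0.00000004"}}
]}
""")

def normalize(model: dict) -> tuple[str, float]:
    """Return (model id, 3:1 blended price in $/Mtok)."""
    p = model["pricing"]
    input_mtok = float(p["prompt"]) * 1_000_000
    output_mtok = float(p["completion"]) * 1_000_000
    return model["id"], round(input_mtok * 0.75 + output_mtok * 0.25, 4)

# Rank the miniature catalog cheapest-first, as the "Cheapest" sort does.
for model_id, price in sorted(map(normalize, sample_response["data"]), key=lambda r: r[1]):
    print(f"{model_id}: ${price}/Mtok")
```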
Why normalize to $/million tokens?
Model providers publish prices in inconsistent units — per 1K tokens, per million tokens, per character, sometimes per request. Comparing them directly is error-prone. Dollars per million tokens is the industry's most common reporting unit and makes cross-provider comparisons immediate and honest.
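The unit mess above reduces to a table of multipliers once a common target is fixed. A sketch, with assumed unit labels (per-character quotes additionally need a tokens-per-character estimate, so they are omitted here):

```python
def to_per_mtok(price: float, unit: str) -> float:
    """Convert a quoted price to dollars per million tokens.

    Unit names are illustrative, not any provider's official labels.
    """
    factors = {
        "per_token": 1_000_000,
        "per_1k_tokens": 1_000,
        "per_mtok": 1,
    }
    return price * factors[unit]

print(to_per_mtok(0.0005, "per_1k_tokens"))  # 0.5  ($0.50/Mtok)
```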
What does "3:1 blended" mean?
Most production LLM workloads are input-heavy — context, RAG retrievals, system prompts — while output is comparatively short. A 3:1 input:output ratio is the informal industry convention for producing a single number that reflects typical cost: (input × 0.75) + (output × 0.25). Your actual ratio may differ; always check both input and output columns for workloads with long generations.
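To see why the caveat matters, here is the same hypothetical model (priced at $3/Mtok input, $15/Mtok output, figures chosen for illustration) under two different mixes:

```python
def blended(input_mtok: float, output_mtok: float, input_share: float) -> float:
    """Blend already-normalized $/Mtok prices by the input share of the workload."""
    return input_mtok * input_share + output_mtok * (1 - input_share)

print(blended(3.0, 15.0, 0.75))  # 6.0  -> the index's 3:1 convention
print(blended(3.0, 15.0, 0.50))  # 9.0  -> a more output-heavy workload
```

The headline number rises 50% just from shifting the mix, which is why the full input and output columns matter for generation-heavy workloads.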
What's the cheapest LLM right now?
The cheapest paid model as of the latest scan is Gemma 3n 4B from Google at $0.025 per million tokens (3:1 blended). Sort by "Cheapest" above for the full ranking. Many providers also offer free-tier variants of their models, which score a perfect 10 on the cheapness axis.
Is this affiliated with OpenRouter or any provider?
No. MegaOne AI is independent. OpenRouter is used as a public data source because its models API is the most complete and up-to-date LLM catalog available, but this directory is not operated by OpenRouter, and every model is scored by the same criteria regardless of provider.
How often does the price index update?
A full re-fetch, re-score, and daily snapshot runs once per 24 hours. Snapshots are written to a history table so we can build price-over-time charts and detect drops. New models typically appear within 24 hours of being added to OpenRouter.
Is it free to use?
Yes. Browsing, filtering, sorting, and searching the entire price index is free. The weekly email briefing is free. There is no account required and no paywall.