Tiny Corp is now shipping two tinybox deep learning computers — the red v2, priced at $12,000 and listed as in stock, and the green v2 at $65,000 available as a made-to-order system — with a third model, the Blackwell exabox, announced for 2027. The machines are designed to run the company’s open-source tinygrad neural network framework and ship from San Diego within one week of payment.
- Two models are currently available: the red v2 ($12,000, in stock) with 64GB GPU RAM, and the green v2 ($65,000, made to order) with 384GB GPU RAM
- The green v2 delivers 3,086 TFLOPS FP16 compute via four RTX PRO 6000 Blackwell GPUs with 7,168 GB/s memory bandwidth over a full-fabric PCIe 5.0 x16 interconnect
- Tiny Corp says the tinybox was benchmarked in MLPerf Training 4.0 against systems costing ten times as much, though detailed results are not published on the product page
- The Blackwell exabox, targeting approximately 1 exaflop of performance, is planned for 2027 at an estimated price of $10 million
What Happened
San Diego-based Tiny Corp has begun fulfilling orders for its tinybox line of deep learning hardware, publishing full specifications on its product page. The systems support both model training and inference using the company’s tinygrad framework, which abstracts neural network operations into three core op types: ElementwiseOps, ReduceOps, and MovementOps. No author or executive attribution was available on the product page at the time of publication.
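As a conceptual illustration of those three categories (plain Python for clarity; this is not tinygrad's actual API, which builds a lazy graph over tensor buffers rather than operating on lists):

```python
# Sketch of the three op categories tinygrad reduces networks to.
# Plain-Python illustration only, not tinygrad code.

# ElementwiseOps: apply a function independently to each element.
def elementwise(op, xs):
    return [op(x) for x in xs]

# ReduceOps: collapse many elements into one value (e.g. sum, max).
def reduce_op(op, xs, init):
    acc = init
    for x in xs:
        acc = op(acc, x)
    return acc

# MovementOps: rearrange data without changing any values (e.g. reshape).
def reshape(xs, rows, cols):
    assert len(xs) == rows * cols
    return [xs[r * cols:(r + 1) * cols] for r in range(rows)]

relu = lambda x: max(0.0, x)
data = [-1.0, 2.0, -3.0, 4.0]
print(elementwise(relu, data))                    # [0.0, 2.0, 0.0, 4.0]
print(reduce_op(lambda a, b: a + b, data, 0.0))   # 2.0
print(reshape(data, 2, 2))                        # [[-1.0, 2.0], [-3.0, 4.0]]
```

The appeal of the decomposition is that anything expressible in these three categories can be compiled and fused for a new accelerator without hand-writing per-operation kernels.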
Why It Matters
The tinybox enters a market segment where capable on-premise AI training hardware is largely limited to data center-class equipment sold at enterprise prices. Tiny Corp states in its FAQ that the tinybox is “likely the best performance/$” for deep learning, citing its appearance in the MLPerf Training 4.0 benchmark against systems costing ten times as much — though a detailed breakdown of those results is not published on the product page.
The tinygrad framework underpinning the tinybox already runs in production. It powers the driving model in comma.ai’s openpilot autonomous driving software on the Snapdragon 845 GPU, where it replaced Qualcomm’s SNPE runtime and added support for ONNX file loading, model training, and attention mechanisms that SNPE did not support.
Technical Details
The red v2 pairs four AMD Radeon RX 9070 XT GPUs with a 32-core AMD EPYC CPU. The GPU cluster provides 64GB combined RAM at 2,560 GB/s bandwidth and 778 TFLOPS of FP16 compute with FP32 accumulation, connected over a full-fabric PCIe 4.0 x16 interconnect. System RAM is 128GB at 204.8 GB/s, backed by a 2TB NVMe drive delivering 7.3 GB/s read speeds and a single 1,600W power supply. The system occupies 12U at 16.25 inches deep, weighing 60 to 90 pounds, and supports freestanding or rack-mount installation.
The green v2 uses four RTX PRO 6000 Blackwell GPUs to deliver 384GB combined GPU RAM at 7,168 GB/s bandwidth — six times the memory capacity and 2.8x the bandwidth of the red v2 — along with 3,086 TFLOPS FP16. It includes a 32-core AMD GENOA CPU, 192GB system RAM at 460.8 GB/s, 4TB RAID storage plus a 1TB boot drive with 59.3 GB/s combined read bandwidth, dual 1,600W power supplies, and 2x 10GbE networking. GPU interconnect runs over full-fabric PCIe 5.0 x16.
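The listed totals for both configurations divide evenly across their four GPUs. As a quick sanity check (the per-GPU figures below are simply the published totals divided by four — our arithmetic, not per-card specs from the product page):

```python
# Recompute the listed aggregates from inferred per-GPU figures.
# Per-GPU numbers are the published totals divided by 4 (our inference).
def totals(n_gpus, ram_gb, bw_gbs, tflops_fp16):
    return n_gpus * ram_gb, n_gpus * bw_gbs, n_gpus * tflops_fp16

red_v2 = totals(4, 16, 640, 194.5)     # (64, 2560, 778.0)
green_v2 = totals(4, 96, 1792, 771.5)  # (384, 7168, 3086.0)
print(red_v2, green_v2)
```

The same numbers show the compute-to-bandwidth balance shifting between the two: roughly 304 FP16 FLOPs per byte of memory traffic on the red v2 versus about 431 on the green v2, meaning the green v2 leans even more heavily on compute-dense workloads to stay fed.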
Tiny Corp states the tinybox “was benchmarked in MLPerf Training 4.0 vs computers that cost 10x as much,” and notes that “anything that can train can do inference,” positioning both configurations for dual use across the model development lifecycle.
Who’s Affected
The red v2 is aimed at independent AI researchers, small startups, and engineering teams that need dedicated on-premise GPU compute. Its 64GB GPU memory pool and sub-$15,000 price point make it practical for fine-tuning large language models or training mid-scale vision models without ongoing cloud GPU costs.
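For a sense of what fits in that 64GB pool, a common rule of thumb for full fine-tuning with Adam in mixed precision is about 16 bytes per parameter (fp16 weights and gradients plus fp32 master weights and two fp32 optimizer moments), before activation memory. This is a general heuristic, not a tinybox-specific figure:

```python
# Rough optimizer-state memory estimate for full fine-tuning with Adam
# in mixed precision: 2 (fp16 weights) + 2 (fp16 grads) + 4 (fp32 master
# weights) + 8 (fp32 Adam moments) = 16 bytes/param, before activations.
# General rule of thumb, not a published tinybox figure.
def finetune_gb(params_billion, bytes_per_param=16):
    # 1e9 params * N bytes/param = N gigabytes per billion parameters
    return params_billion * bytes_per_param

print(finetune_gb(3))  # 48  -> a ~3B model fits the red v2's 64GB pool
print(finetune_gb(7))  # 112 -> a 7B full fine-tune needs LoRA or offloading
```

In practice this is why parameter-efficient methods like LoRA are the typical path for fine-tuning 7B-class models on a 64GB memory pool.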
The green v2, with 384GB of pooled GPU RAM and PCIe 5.0 interconnect speeds, suits organizations running large-scale training jobs or serving models too large for single-GPU workstations. Developers already working within tinygrad-based pipelines — including those building on openpilot’s autonomous driving stack — can run the same codebase directly on tinybox hardware without adapting to a different runtime.
What’s Next
The Blackwell exabox, listed as coming in 2027, represents a significant scale increase: 720 RDNA5 AT0 XL GPUs across 120 nodes delivering approximately 1 exaflop of FP16 compute, 25,920GB total GPU RAM at 1,244 TB/s bandwidth, 400GbE full-fabric scale-out networking, and 480TB RAID storage. It is a container-scale physical installation: at 20x8x8.5 feet and 20,000 pounds, it requires a permanent concrete slab rather than standard rack infrastructure. The estimated price is approximately $10 million, and Tiny Corp is accepting mailing list signups for product and inventory updates.
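The listed exabox aggregates imply the following per-node breakdown (our arithmetic from the published totals; per-node specifications are not listed):

```python
# Per-node breakdown derived from the listed exabox totals.
# These divisions are our arithmetic, not published per-node specs.
NODES, GPUS = 120, 720
TOTAL_RAM_GB, TOTAL_EXAFLOPS = 25_920, 1.0

gpus_per_node = GPUS // NODES                     # 6 GPUs per node
ram_per_gpu_gb = TOTAL_RAM_GB / GPUS              # 36.0 GB per GPU
pflops_per_node = TOTAL_EXAFLOPS * 1_000 / NODES  # ~8.33 PFLOPS FP16 per node
print(gpus_per_node, ram_per_gpu_gb, round(pflops_per_node, 2))
```

Those per-node numbers put each exabox node well beyond either current tinybox configuration, consistent with the roughly 150x jump in price from the green v2.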
Tiny Corp accepts only wire transfer payments and offers no hardware customization, citing the need to maintain consistent pricing and quality. The company is also actively hiring software engineers and hardware staff, with a stated preference for applicants who have already contributed to the tinygrad open-source project.
Related Reading
- George Hotz Says Closed-Source AI Is Creating a New Feudal Class System
- AI Team OS Turns Claude Code Into an Autonomous Multi-Agent System at Zero API Cost
- OpenAI Plans ChatGPT Superapp Merging Browser, Codex, and AI Research Intern
- Mozilla AI Launches cq, a Shared Knowledge Commons for AI Coding Agents