Kandou AI, an AI chip startup founded by former Goldman Sachs managing director Srujan Linga, secured $225 million in a strategic funding round led by SoftBank Group, Bloomberg reported on March 24, 2026. The deal values Kandou AI at approximately $1.8 billion. No direct quotes from Linga or SoftBank executives were available in public reporting at the time of publication.
Key Points
- SoftBank led a $225M strategic round in Kandou AI, valuing the startup at roughly $1.8 billion.
- Founder Srujan Linga previously served as a managing director at Goldman Sachs before pivoting to semiconductor hardware.
- Kandou AI’s chips target AI inference at the edge — data centers, autonomous vehicles, and industrial automation — not model training.
- The investment extends SoftBank’s multi-hundred-billion-dollar AI infrastructure campaign in the United States.
What Happened
Kandou AI closed a $225 million funding round led by SoftBank Group in late March 2026, with Bloomberg reporting the deal on March 24. The investment assigns the company a valuation of approximately $1.8 billion. The startup was founded by Srujan Linga, who previously held a managing director role at Goldman Sachs — an unusual background in a sector historically dominated by semiconductor engineers.
Why It Matters
The round reflects a structural shift in how investors are positioning around AI hardware. Rather than chasing training-side compute — the market NVIDIA currently dominates — SoftBank’s bet on Kandou AI signals confidence in inference-side demand: the hardware needed to run already-trained models in production at scale. SoftBank has committed over $550 billion to AI-related investments in the US, including a major data center project announced in Ohio in March 2026, making Kandou AI one node in a broader infrastructure thesis.
The firm’s portfolio logic holds that deployed production AI will require far more chip volume than model development — and that edge environments, where power and latency constraints are hard limits, represent an underserved market that general-purpose GPUs cannot address cost-effectively.
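The volume thesis can be made concrete with a rough calculation. The sketch below uses entirely hypothetical figures — the training budget, query volume, and per-query energy are illustrative assumptions, not numbers from any disclosure — to show how serving costs can overtake a one-time training cost once a model is in production.

```python
# Back-of-envelope sketch with hypothetical numbers (none are from the reporting):
# a model is trained once, but a deployed model is served continuously, so
# per-query energy comes to dominate the lifetime cost of a production system.

TRAINING_ENERGY_J = 3.6e11   # hypothetical one-time training budget (~100 MWh)
QUERIES_PER_DAY = 2e9        # hypothetical production query volume
ENERGY_PER_QUERY_J = 1.0     # hypothetical energy per inference

DAYS = 365
inference_energy = QUERIES_PER_DAY * ENERGY_PER_QUERY_J * DAYS
ratio = inference_energy / TRAINING_ENERGY_J
print(f"One year of serving consumes {ratio:.1f}x the training energy")
```

Under these assumed figures, a single year of serving already costs roughly twice the training run — and the gap widens every additional year the model stays in production, which is the economics SoftBank's thesis rests on.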
Technical Details
Kandou AI designs custom silicon optimized specifically for AI inference rather than training. The company’s architecture focuses on reducing energy cost per inference operation — the primary constraint in edge deployments across data centers, autonomous vehicles, and industrial automation systems where power budgets are hard limits, not soft targets.
This differs meaningfully from training-optimized hardware: inference workloads involve running fixed model weights repeatedly across variable inputs, which enables hardware specialization that general-purpose GPU architectures cannot fully exploit. Kandou AI’s chips are built to serve production systems where minimizing watts-per-query matters more than maximizing floating-point throughput. No specific performance benchmarks, chip architecture names, or process node specifications were disclosed in available reporting.
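The watts-per-query framing can be illustrated with a small sketch. All figures below are hypothetical placeholders — no Kandou AI benchmarks or competitor measurements have been disclosed — but they show why a specialized part can win under a power budget even with far lower raw throughput.

```python
# Illustrative sketch (all figures hypothetical; no vendor benchmarks exist yet):
# in a power-constrained edge deployment, energy per query, not peak throughput,
# is the binding metric.

def energy_per_query_joules(avg_power_watts: float, queries_per_second: float) -> float:
    """Energy per inference: power (W = J/s) divided by throughput (queries/s)."""
    return avg_power_watts / queries_per_second

# Hypothetical comparison: a general-purpose GPU vs. an inference-specialized part.
gpu_jpq = energy_per_query_joules(avg_power_watts=300.0, queries_per_second=2000.0)
asic_jpq = energy_per_query_joules(avg_power_watts=30.0, queries_per_second=500.0)

# The specialized part has much lower raw throughput, yet wins on energy per query,
# which is what a hard power budget (a vehicle, a factory floor) actually constrains.
print(f"GPU:  {gpu_jpq:.3f} J/query")
print(f"ASIC: {asic_jpq:.3f} J/query")
```

With these assumed numbers the GPU delivers 0.15 J/query against 0.06 J/query for the specialized chip: a 2.5x energy advantage despite a 4x throughput deficit, which is the trade inference-optimized silicon is built to make.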
Who’s Affected
The funding is most directly relevant to enterprise buyers deploying AI at the edge — industrial manufacturers running vision models on factory floors, autonomous vehicle developers managing onboard inference latency, and data center operators seeking to reduce per-query energy costs at scale. These are the customers Kandou AI’s chips are designed to serve, and the $225 million raise will likely accelerate the company’s ability to complete silicon tape-out and reach commercial volume.
Kandou AI competes in a market that has attracted several well-capitalized challengers. Groq has built a Language Processing Unit (LPU) architecture targeting high-throughput, low-latency inference in cloud environments. Cerebras produces wafer-scale processors aimed at large model inference. NVIDIA remains the dominant force across both training and high-end inference through its GPU and NVLink product lines. Kandou AI’s differentiation lies in edge-specific power efficiency rather than raw compute density.
What’s Next
The immediate next step for Kandou AI is converting the capital raise into working silicon — chip tape-out is capital-intensive, and $225 million funds a meaningful portion of that cycle for a startup at this stage. The company has not publicly disclosed a product release timeline, target process node, or benchmark data comparing its hardware to incumbent solutions.
The central unresolved question is whether Kandou AI’s inference-optimized architecture can deliver competitive performance-per-watt results against established players when actual silicon ships. Linga’s finance background shaped the company’s investor narrative around the commercial scale of the inference market, but production competitiveness will ultimately be determined by engineering execution.
Related Reading
- Qualified Health Raises $125M Series B, Reaches 7% of U.S. Hospital Revenue
- Thailand’s Amity Raises $100 Million in Southeast Asia’s Largest Generative AI Round, Targets 2027 IPO
- SoftBank Plans $500 Billion AI Data Center in Ohio with Dedicated Gas-Fired Power Plant
- Founders Fund Leads $2 Billion Valuation for Halter, an AI-Powered Cattle Management Startup