On March 26, 2026, Meta AI released TRIBE v2, a predictive foundation model trained to simulate how the human brain responds to visual, auditory, and language-based stimuli at high resolution. Built on fMRI data from more than 700 healthy volunteers, the model delivers a 70x resolution increase over comparable systems and supports zero-shot generalization to new subjects, languages, and tasks. Author details were not available at time of publication.
- TRIBE v2 predicts high-resolution fMRI brain activity in response to images, video, podcasts, and text, trained on data from more than 700 healthy volunteers.
- The model achieves a 70x resolution improvement over comparable prior systems, including Meta’s own Algonauts 2025 award-winning predecessor, which trained on just four subjects.
- It supports zero-shot predictions for new subjects, new languages, and novel tasks without additional fine-tuning.
- Meta released model weights, codebase, a research paper, and an interactive demo under a CC BY-NC license.
What Happened
On March 26, 2026, Meta AI published TRIBE v2, a foundation model it describes as a “digital twin of human neural activity.” The release includes trained model weights, a full codebase, an accompanying research paper, and an interactive demonstration platform — all distributed under a CC BY-NC license.
Meta described TRIBE v2 as its “first AI model of human brain responses to sights, sounds, and language,” marking a multimodal expansion over earlier work. The model is designed to predict fMRI brain activity across diverse sensory and linguistic inputs without requiring human subjects for each experimental run.
Why It Matters
TRIBE v2 builds directly on Meta’s Algonauts 2025 award-winning model, which was trained on low-resolution fMRI recordings from just four individuals. That narrow training base limited generalizability across subjects and stimulus types. TRIBE v2 addresses this by scaling to more than 700 healthy volunteers presented with images, videos, podcasts, and written text.
As Meta stated in the announcement, the goal is to allow researchers to “rapidly test hypotheses about its underlying functions without the need for human subjects in every experiment.” The company also framed the release as a resource for improving AI systems “by directly guiding their development from neuroscientific principles.”
Meta cited the potential for TRIBE v2 to support clinical research into neurological conditions affecting hundreds of millions of people worldwide, alongside its use in basic neuroscience. The announcement specifically identified accelerating treatment research as a motivation for releasing the model publicly.
Technical Details
TRIBE v2 was trained on fMRI recordings from more than 700 healthy volunteers presented with a diverse array of media: still images, video content, podcasts, and written text. This multimodal training dataset represents a substantial expansion from the Algonauts 2025 predecessor’s four-subject, low-resolution foundation.
The model achieves a 70x resolution increase in fMRI brain activity prediction compared to similar models, according to Meta’s announcement. It also demonstrates zero-shot capability — generating accurate brain activity predictions for subjects, languages, and tasks not seen during training, without requiring additional fine-tuning. Meta states TRIBE v2 “consistently outperforms standard modeling approaches” across evaluations.
Specific benchmark metrics, evaluation datasets, and full architectural details are reported in the accompanying research paper rather than in the public blog post. The model and code are distributed under a Creative Commons BY-NC license, restricting use to non-commercial research applications.
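Meta has not published TRIBE v2's architecture in the blog post, but the general task it performs, predicting voxelwise fMRI responses from stimulus features, follows the classic neuroscience "encoding model" recipe: extract features from the stimulus, fit a regularized linear map to recorded brain activity, and score predictions on held-out data by per-voxel correlation. The sketch below illustrates that recipe on synthetic data with plain ridge regression; all names, dimensions, and the linear ground truth are illustrative assumptions, not TRIBE v2's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 400 training stimuli and 100 held-out stimuli, each
# described by a 64-dim feature vector (e.g. an embedding of an image or
# a text snippet), mapped to 500 simulated voxels. W_true is a made-up
# ground-truth stimulus-to-voxel mapping used only to generate data.
n_train, n_test, n_feat, n_vox = 400, 100, 64, 500
W_true = rng.normal(size=(n_feat, n_vox))
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
Y_train = X_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_vox))
Y_test = X_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_vox))

def fit_ridge(X, Y, alpha=10.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

W = fit_ridge(X_train, Y_train)
Y_pred = X_test @ W

def voxelwise_corr(Y_true, Y_hat):
    """Pearson correlation between observed and predicted responses,
    computed separately for each voxel (the standard encoding-model score)."""
    yt = Y_true - Y_true.mean(axis=0)
    yp = Y_hat - Y_hat.mean(axis=0)
    return (yt * yp).sum(axis=0) / np.sqrt(
        (yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0))

print(f"mean voxelwise r = {voxelwise_corr(Y_test, Y_pred).mean():.3f}")
```

A foundation model like TRIBE v2 replaces the hand-fit linear map with a large learned network, which is what makes zero-shot prediction for unseen subjects and stimuli possible; the evaluation logic, correlation between predicted and observed activity on held-out data, is the same.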
Who’s Affected
Neuroscientists and clinical researchers are the primary intended users. TRIBE v2 allows research teams to test computational hypotheses about neural processing — such as how the brain encodes speech versus visual input — without recruiting and scanning human participants for every study. Meta has made an interactive demo publicly accessible as an entry point for teams that have not yet integrated the full model.
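The speech-versus-vision question mentioned above is typically posed as a model comparison: fit one predictive model per candidate feature space and label each voxel by which model predicts its held-out activity better. The sketch below runs that comparison on synthetic data with ridge regression; the "speech" and "visual" feature spaces and all dimensions are assumptions for illustration, not anything Meta has published.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 300 train / 100 test stimuli, each with two candidate
# 32-dim feature spaces (hypothetical "speech" and "visual" embeddings),
# and 200 simulated voxels: the first 100 are driven by speech features,
# the last 100 by visual features.
n_tr, n_te, d, n_vox = 300, 100, 32, 200
Xs_tr, Xs_te = rng.normal(size=(n_tr, d)), rng.normal(size=(n_te, d))
Xv_tr, Xv_te = rng.normal(size=(n_tr, d)), rng.normal(size=(n_te, d))
A, B = rng.normal(size=(d, 100)), rng.normal(size=(d, 100))
Y_tr = np.hstack([Xs_tr @ A, Xv_tr @ B]) + rng.normal(scale=3.0, size=(n_tr, n_vox))
Y_te = np.hstack([Xs_te @ A, Xv_te @ B]) + rng.normal(scale=3.0, size=(n_te, n_vox))

def ridge_predict(X_tr, Y, X_te, alpha=10.0):
    """Fit ridge regression on (X_tr, Y) and predict responses for X_te."""
    W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ Y)
    return X_te @ W

def corr(Y, Y_hat):
    """Per-voxel Pearson correlation between observed and predicted data."""
    a, b = Y - Y.mean(0), Y_hat - Y_hat.mean(0)
    return (a * b).sum(0) / np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))

r_speech = corr(Y_te, ridge_predict(Xs_tr, Y_tr, Xs_te))
r_visual = corr(Y_te, ridge_predict(Xv_tr, Y_tr, Xv_te))

# A voxel whose activity is better predicted by the speech model is
# labeled speech-preferring; this recovers the planted split.
prefers_speech = r_speech > r_visual
print(f"speech-preferring voxels: {prefers_speech.sum()} / {n_vox}")
```

With a simulator in place of a scanner, this kind of comparison runs in seconds instead of requiring a new acquisition session per hypothesis, which is the workflow change Meta is pitching to researchers.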
AI developers may also draw from the release. Meta explicitly identified TRIBE v2 as a resource for “applying brain insights to build better AI systems,” pointing to potential applications in architecture design and training signal development informed by human neural activity patterns.
The CC BY-NC license limits commercial use. Organizations seeking to incorporate TRIBE v2 outputs into commercial products or services would need a separate licensing arrangement with Meta.
What’s Next
Model weights, code, and the research paper are available through Meta’s research channels, and the interactive demo is publicly accessible without institutional access requirements. Researchers can begin running simulated brain activity predictions immediately.
The research paper contains detailed methodology, evaluation benchmarks, and comparison metrics not reproduced in Meta’s blog post. Researchers assessing specific performance claims or replicating training procedures should consult the paper directly.
Meta’s announcement noted that training data consisted entirely of healthy volunteers, which may limit direct applicability for populations with neurological conditions. The company identified broader clinical and scientific use as an open area of future development.
Related Reading
- Karpathy’s AutoResearch Agent Runs 700 Experiments in Two Days, Cuts GPT-2 Training Time 11%
- Netflix Releases VOID, Its First Public AI Model for Removing Objects From Video
- Liquid AI Runs 24-Billion-Parameter Model at 50 Tokens Per Second in a Web Browser
- Meta Delays AI Model ‘Avocado’ to May After Failing Internal Benchmarks Against Gemini 3.0
- NVIDIA’s Puzzle NAS Cuts OpenAI’s 120B Model to 88B With 2.82× Speedup