- OpenAI opened applications on April 6, 2026 for a pilot fellowship funding independent safety and alignment research by external researchers.
- The program runs September 14, 2026 through February 5, 2027, with fellows based at Constellation in Berkeley or working remotely.
- Priority research areas include agentic oversight, safety evaluation, robustness, and privacy-preserving safety methods.
- Fellows receive a monthly stipend, compute support, and API credits, but will not have access to OpenAI’s internal systems.
What Happened
OpenAI announced the OpenAI Safety Fellowship on April 6, 2026, opening a call for applications from external researchers, engineers, and practitioners to pursue independent safety and alignment research. The pilot program runs from September 14, 2026 through February 5, 2027. Applications close May 3, with successful applicants to be notified by July 25.
Fellows will be co-located at Constellation, a Berkeley-based AI safety research organization, though remote participation is also permitted. The program includes a monthly stipend, compute support, and mentorship from OpenAI researchers.
Why It Matters
The fellowship is the third safety-related program OpenAI announced in a two-week span. The company launched a Safety Bug Bounty program on March 25, 2026, and published developer guidelines for teen-facing AI experiences on March 24, 2026. Together, these initiatives represent an effort to formalize external engagement with OpenAI's safety processes as the company continues to deploy increasingly capable agentic systems.
External safety research fellowships are not new to the field — organizations including the Center for AI Safety and Redwood Research have run similar programs — but this marks the first such program administered directly by OpenAI with its own mentorship structure.
Technical Details
OpenAI listed seven priority research areas: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. The company stated it is “especially interested in work that is empirically grounded, technically strong, and relevant to the broader research community.”
Fellows are expected to produce a paper, benchmark, or dataset by the program’s end in February 2027. The announcement specifies that fellows “will not have internal system access” to OpenAI infrastructure, receiving API credits and compute resources instead. The program accepts applicants from computer science, social science, cybersecurity, privacy, and human-computer interaction, and requires letters of reference. OpenAI stated it prioritizes “research ability, technical judgment, and execution over specific credentials.”
Who’s Affected
Academic researchers and independent practitioners working on AI safety are the direct targets of the program. Constellation, which is administering applications by email, will also serve as the physical workspace for in-person fellows.
Research teams and companies building on OpenAI’s platforms may be indirectly affected if fellows produce public benchmarks or datasets that establish new evaluation standards for safety properties such as robustness or misuse resistance.
What’s Next
The application window closes May 3, 2026, and OpenAI has committed to notifying accepted fellows by July 25 — leaving roughly seven weeks before the September 14 program start date. OpenAI described the initiative as a “pilot program” with no stated commitment to future cohorts beyond the current cycle.