OpenAI Safety Fellowship 2026
https://alignment.openai.com/safety-fellowship/
OpenAI's external research fellowship for rigorous, high-impact research on the safety and alignment of advanced AI systems. Fellows work on priority areas including safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. The program includes a monthly stipend, compute support, API credits, and ongoing mentorship. Fellows work in Berkeley at Constellation or remotely, partnering with OpenAI mentors.
Deadlines
- Application deadline: May 3, 2026; notifications by July 25
Topics & tags
Sources
- OpenAI Safety Fellowship (org, trust 0.90) - observed 2026-04-28
Salience signals (score = 1.000)
{
"score": 1,
"signals": {
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.40628930817610065,
"community_signal": 0.8,
"speaker_org_signal": 0.95,
"is_cfp_open": 1,
"source_count": 1
},
"computed_at": "2026-04-28T16:35:50.810Z"
}
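The record above does not state how the score is derived from the individual signals. A minimal sketch, assuming the score is an additive combination of all signal values clamped to 1.0 (an assumption that happens to be consistent with this record's score of 1.000, since the signals sum to well above 1):

```python
def salience(signals: dict) -> float:
    """Hypothetical aggregation: sum of all signal values, capped at 1.0.

    This formula is an assumption for illustration; the actual scoring
    logic behind the record is not documented here.
    """
    return min(1.0, sum(signals.values()))

# Signal values copied from the record above.
signals = {
    "type_weight": 0.8,
    "source_trust": 0.9,
    "topic_relevance": 0.95,
    "time_proximity": 0.40628930817610065,
    "community_signal": 0.8,
    "speaker_org_signal": 0.95,
    "is_cfp_open": 1,
    "source_count": 1,
}

print(salience(signals))  # 1.0 for this record
```

Under this assumed formula, any entry whose signals sum past 1.0 saturates at the maximum score, so the clamp hides differences among strongly salient items; a weighted mean would preserve those differences.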