⊚ AI Safety Tracker

โ† All upcoming events

OpenAI Safety Fellowship 2026

Fellowship 📅 September 14, 2026 – February 5, 2027 📍 Berkeley, USA ★ 1.00

https://alignment.openai.com/safety-fellowship/

OpenAI's external research fellowship supports rigorous, high-impact research on the safety and alignment of advanced AI systems. Fellows work on priority areas including safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. The program includes a monthly stipend, compute support, API credits, and ongoing mentorship. Fellows work in Berkeley at Constellation or remotely, partnering with OpenAI mentors.

Topics & tags

#alignment #safety-research #governance

Salience signals (score = 1.000)
{
  "score": 1,
  "signals": {
    "type_weight": 0.8,
    "source_trust": 0.9,
    "topic_relevance": 0.95,
    "time_proximity": 0.40628930817610065,
    "community_signal": 0.8,
    "speaker_org_signal": 0.95,
    "is_cfp_open": 1,
    "source_count": 1
  },
  "computed_at": "2026-04-28T16:35:50.810Z"
}
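The signals above appear to feed a weighted aggregation that saturates at 1.0 for strong events. A minimal sketch of how such a score could be computed is below; the field names match the JSON, but the averaging, open-CFP boost, and clamping scheme are assumptions for illustration, not the tracker's published formula.

```python
# Hypothetical salience aggregation. Field names come from the
# JSON block above; the weighting/boost/clamp logic is an assumed
# sketch, since the tracker's actual formula is not shown here.

SIGNALS = {
    "type_weight": 0.8,
    "source_trust": 0.9,
    "topic_relevance": 0.95,
    "time_proximity": 0.40628930817610065,
    "community_signal": 0.8,
    "speaker_org_signal": 0.95,
    "is_cfp_open": 1,
    "source_count": 1,
}

def salience_score(signals: dict) -> float:
    """Average the continuous signals, add a boost for an open
    CFP, and clamp to [0, 1] so strong events saturate at 1.0."""
    continuous = [
        signals["type_weight"],
        signals["source_trust"],
        signals["topic_relevance"],
        signals["time_proximity"],
        signals["community_signal"],
        signals["speaker_org_signal"],
    ]
    base = sum(continuous) / len(continuous)   # ≈ 0.801 here
    boost = 0.25 if signals["is_cfp_open"] else 0.0
    return round(min(1.0, base + boost), 3)

print(salience_score(SIGNALS))  # → 1.0, matching "score": 1
```

Under these assumed weights, the open call for applications pushes the clamped score to exactly 1, which is consistent with the `score` field shown above.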

First seen 2026-04-28 · last confirmed 2026-04-28 · confidence high