Conferences, workshops, fellowships, mixers, and CFPs in the AI safety, alignment, and
governance community. Refreshed weekly. Sorted by salience.
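For context on the ranking, each listing below carries a "Salience signals" block. The sketch below is a minimal illustration only, assuming a simple weighted sum: the field names match the blocks in this feed, but the weights, the salience helper, and the 1.00 display cap are assumptions, not the feed's actual formula.

# Minimal sketch (assumption, not this feed's actual formula): fold the
# per-event "Salience signals" into one display score via a weighted sum.
# Field names mirror the blocks below; the weights and the 1.00 cap are invented.
SIGNAL_WEIGHTS = {
    "type_weight": 0.20,
    "source_trust": 0.15,
    "topic_relevance": 0.25,
    "time_proximity": 0.15,
    "community_signal": 0.10,
    "speaker_org_signal": 0.10,
    "is_cfp_open": 0.05,
}

def salience(signals: dict) -> float:
    """Weighted sum of the bounded signals, rounded and capped for display."""
    score = sum(w * signals.get(name, 0.0) for name, w in SIGNAL_WEIGHTS.items())
    return min(round(score, 2), 1.0)

# Example with signals like the first entry's: roughly 0.89 under these assumed weights.
print(salience({
    "type_weight": 1, "source_trust": 0.9, "topic_relevance": 0.9,
    "time_proximity": 0.83, "community_signal": 0.7,
    "speaker_org_signal": 0.85, "is_cfp_open": 1,
}))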
conference
★ 1.00 CFP closes May 1, 2026
A free, one-day workshop on technical AI safety organized by the Oxford Martin AI Governance Initiative and Noeon Research. TAIS welcomes attendees from all backgrounds regardless of prior research experience, with participants from academia, industry, and policy. Third iteration following TAIS 2024 and 2025.
#alignment #governance #safety-research #evals · conference · technical · Oxford · free
Salience signals
{
"type_weight": 1,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.8285714285714285,
"community_signal": 0.7,
"speaker_org_signal": 0.85,
"is_cfp_open": 1,
"source_count": 1
}
workshop
★ 1.00 CFP closes May 8, 2026
Third annual mechanistic interpretability workshop at ICML focusing on developing principled methods to analyze and understand neural network internals. The workshop addresses the challenge of understanding model decision-making mechanisms to better predict behavior and detect potential issues. Organized by researchers from Google DeepMind, Harvard, Northeastern, Imperial College London, Oxford, and other leading institutions.
#interpretability #alignment · ICML · workshop · mechanistic-interpretability
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.7433962264150944,
"community_signal": 0.75,
"speaker_org_signal": 0.9,
"is_cfp_open": 1,
"source_count": 1
}
workshop
★ 1.00 CFP closes May 3, 2026
Workshop at ICML 2026 exploring pluralistic AI alignment with a diversity of human values. Addresses how to integrate diverse perspectives into AI alignment frameworks, examining multi-objective approaches and human-AI interaction workflows across communities. Welcomes interdisciplinary research from machine learning, HCI, philosophy, social sciences, and policy studies.
#alignment #governance · ICML · workshop · alignment · pluralistic
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.85,
"time_proximity": 0.7383647798742139,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_cfp_open": 1,
"source_count": 1
}
A 4-month, full-time paid fellowship developing the next generation of AI researchers and engineers. Fellows receive weekly stipends (3,850 USD / 2,310 GBP / 4,300 CAD), funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Focus areas include scalable oversight, adversarial research, AI security, machine learning engineering, and societal impact. Over 40% of fellows subsequently join Anthropic full-time.
#alignment #interpretability #control #evals · fellowship · research · Anthropic
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.6930817610062893,
"community_signal": 0.8,
"speaker_org_signal": 0.95,
"is_cfp_open": 1,
"source_count": 1
}
Large-scale summit hosted by Berkeley's Center for Responsible, Decentralized Intelligence covering the full technological stack of agentic AI, including foundation models, agent frameworks, evaluation methods, infrastructure, and deployment. Explicitly addresses critical safety and security challenges with a goal of advancing responsible and secure agentic AI systems. Expected 5,000+ in-person attendees plus a global livestream.
#alignment #evals #governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.6327044025157232,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_cfp_open": 1,
"source_count": 1
}
A 5-day unconference bringing together 100+ researchers exploring mathematical approaches to AI alignment, including Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, Safety-by-Debate, and Scalable Oversight. Free to attend, with limited financial support for travel and accommodation. Organized by Iliad, an umbrella organization for applied mathematics research in alignment.
#alignment #theory #interpretability · conference · unconference · theoretical · free
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.6226415094339622,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_cfp_open": 1,
"source_count": 1
}
OpenAI's external research fellowship program, in partnership with Constellation, for rigorous, high-impact research on the safety and alignment of advanced AI systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety, agentic oversight, and misuse domains. Fellows receive mentorship, monthly stipends, compute support, and API credits. Open to diverse backgrounds including computer science, social science, cybersecurity, privacy, and HCI.
#alignment #safety-research #governance #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.41132075471698115,
"community_signal": 0.75,
"speaker_org_signal": 0.95,
"is_cfp_open": 1,
"source_count": 1
}
Constellation's flagship Astra Fellowship is a fully funded, in-person research program offering empirical and strategy/governance tracks. Fellows receive $8,400 monthly stipends, ~$15k/month research budgets, weekly mentorship from senior experts, and placement support. Over 80% of the first cohort now work in AI safety at organizations like Redwood Research, METR, Anthropic, OpenAI, and Google DeepMind.
#alignment #control #evals #governance #safety-research · fellowship · empirical · governance · strategy
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.41132075471698115,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_cfp_open": 1,
"source_count": 1
}
UNIDIR's global conference bringing together diplomats, policymakers, military experts, industry leaders, academia, civil society, and research labs to discuss AI security and ethics. Part of UNIDIR's broader work on international AI policy, disarmament, and governance, and directly relevant to the AI safety community's governance interests.
#governance #policy · UN · international · governance · hybrid
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.8540880503144654,
"community_signal": 0.6,
"speaker_org_signal": 0.8,
"is_cfp_open": 1,
"source_count": 1
}
A weekend AI safety hackathon hosted by Apart Research, focused on bringing AI safety research opportunities to the Global South. Part of Apart Research's monthly sprint and hackathon series, which engages 6,000+ participants across 200+ global locations.
#alignment #safety-research #evals
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.8490566037735849,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_cfp_open": 1,
"source_count": 1
}
A 12-week research fellowship connecting emerging researchers with leading mentors in AI alignment, interpretability, and safety. Fellows receive $15k stipends, $12k compute budgets, workspace in Berkeley or London, housing, catered meals, and educational seminars. Applications are closed, but expressions of interest are still being collected. Top performers may extend for an additional 6-12 months.
#alignment #interpretability #governance #theory · fellowship · mentorship · MATS · research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.939622641509434,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_cfp_open": 0,
"source_count": 1
}
Foresight Institute's flagship conference bringing together leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Features tracks on AI safety, security, and other emerging technologies. Part of Foresight's 40-year mission of advancing transformative technology.
#alignment #governance · frontier-science · multi-track
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.65,
"time_proximity": 0.919496855345912,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_cfp_open": 1,
"source_count": 1
}
The inaugural International Programme on AI Evaluation is a 150-hour academic programme combining online lectures, hands-on courses, and an in-person capstone week in Valencia. With 40 fully funded scholarships, it addresses the critical shortage of experts in AI evaluation needed by AI Safety Institutes and leading labs worldwide. Participants receive a 15 ECTS Expert Diploma.
#evals #safety-research #governance · fellowship · evals · academic · hybrid
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.85,
"time_proximity": 0,
"community_signal": 0.6,
"speaker_org_signal": 0.85,
"is_cfp_open": 1,
"source_count": 1
}
ARENA (Alignment Research Engineer Accelerator) is an intensive in-person program teaching participants to engineer and implement AI alignment research. Applications are currently closed. The program covers all major costs, including travel, visas, accommodation, and meals, and requires Python proficiency, mathematical knowledge, and a genuine commitment to AI safety.
#alignment #interpretability · bootcamp · technical · intensive
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.9748427672955975,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_cfp_open": 0,
"source_count": 1
}
Intensive 9-week fully funded research program for 30 fellows advancing careers in AI safety. Focus areas include alignment, interpretability, formal verification, multi-agent safety, AI governance, technical governance (compute governance, model evals, standards), and economics of transformative AI. Fellows receive a $10k stipend, housing, up to $10k in compute credits/GPU access, free weekday meals, weekly mentorship from Harvard/MIT/Northeastern researchers, and networking opportunities.
#alignment #interpretability #governance #evals · fellowship · research · Cambridge · Harvard
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_cfp_open": 0,
"source_count": 1
}
3-month fellowship to launch or accelerate impactful careers in American AI governance and policy. Fellows conduct independent research projects under expert mentorship while building professional networks and developing policy expertise. Focus areas include public policy, political science, engineering, economics, biosecurity, cybersecurity, China studies, and risk management. Prioritizes bipartisan engagement, rigorous analysis, and practical policy relevance. $21,000 stipend plus travel support, weekday lunches, and DC office space. US work authorization required.
#governance #policy · fellowship · policy · governance · DC
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_cfp_open": 0,
"source_count": 1
}
3-month fellowship for conducting independent research on AI governance topics. Fellows receive mentorship from field experts, participate in seminars and Q&A sessions, and build professional networks. Research outputs may include reports, white papers, journal articles, op-eds, or blog posts. £12,000 stipend plus travel support and weekday lunches. Open to candidates from government, academia, industry, or civil society with expertise in policy, political science, computer science, economics, or risk management. Visa sponsorship available.
#governance #policy · fellowship · research · governance · London
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_cfp_open": 0,
"source_count": 1
}
A technical workshop hosted by Foresight Institute focused on artificial intelligence security and sovereignty issues. Part of Foresight's broader initiative supporting AI safety research, including their new AI Nodes in San Francisco and Berlin offering project funding, office space, and compute resources.
#governance #evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.7031446540880504,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_cfp_open": 1,
"source_count": 1
}
Three-month fully funded research fellowship for scholars in economics, law, international relations, and related fields focusing on the societal impacts of advanced AI and the institutions and policies needed for an effective response. Fellows receive a $25,000 stipend, covered travel, daily meals, and access to CAIS expertise and the Bay Area network. Emphasizes producing publicly shareable research on AI's impact on economic distribution, corporate accountability, and geopolitical competition.
#governance #policy · fellowship · governance · policy · research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9345911949685535,
"community_signal": 0.75,
"speaker_org_signal": 0.9,
"is_cfp_open": 0,
"source_count": 1
}
A 4-month, full-time paid fellowship developing the next generation of AI researchers and engineers. Fellows receive weekly stipends (3,850 USD / 2,310 GBP / 4,300 CAD), funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Focus areas include scalable oversight, adversarial research, AI security, machine learning engineering, and societal impact. Over 40% of fellows subsequently join Anthropic full-time.
#alignment #interpretability #control #evals · fellowship · research · Anthropic
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.4571428571428572,
"community_signal": 0.8,
"speaker_org_signal": 0.95,
"is_cfp_open": 0,
"source_count": 1
}
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Co-located with the IEEE Security and Privacy conference. Organized by FAR.AI.
#evals #alignment #safety-research #security · workshop · verification · cryptography · hardware-security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9428571428571428,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_cfp_open": 0,
"source_count": 1
}
A workshop organized by FAR.AI bringing together global leaders to explore mitigation strategies for AGI risks. Part of FAR.AI's series of technical workshops and conferences focused on AI safety verification and control.
#alignment #governance #control
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.7635220125786164,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_cfp_open": 0,
"source_count": 1
}
Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.
#governance #policy · ICML · workshop · governance · technical-governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_cfp_open": 0,
"source_count": 1
}
Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.
#evals #alignment · ICML · workshop · failure-modes · agents · diagnostics
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_cfp_open": 0,
"source_count": 1
}
Second Workshop on Agents in the Wild, focusing on the safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of the ICML 2026 workshop track.
#alignment #evals · ICML · workshop · agents · safety · security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7333333333333334,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_cfp_open": 0,
"source_count": 1
}
A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.
#governance #alignment
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.7,
"topic_relevance": 0.7,
"time_proximity": 0.5714285714285714,
"community_signal": 0.6,
"speaker_org_signal": 0.5,
"is_cfp_open": 0,
"source_count": 1
}