Support AI Safety Work
Vetted researchers and organizations working to ensure AI benefits humanity through technical research, policy advocacy, and public education.
Want to suggest an addition or update? Contact us.
Centre for the Governance of AI
Independent non-profit building a global research community dedicated to helping humanity navigate the transition to advanced AI. Advises policymakers and supports informed decision-making by private and public institutions.
EleutherAI
Decentralized collective of volunteer researchers democratizing AI safety research through open-source models and datasets. Their philosophy holds that transparency and broad access enable better safety research than closed development.
Machine Intelligence Research Institute
Pioneering non-profit focused on the foundational mathematical challenges of aligning superintelligent AI. After two decades of research, MIRI now advocates for a global moratorium on advanced AI development to allow safety research to catch up.
Doom Debates
Educational initiative by Liron Shapira fostering critical discussion and debate about existential risks from advanced AI.
AXRP - AI X-risk Research Podcast
In-depth podcast by Daniel Filan featuring long-form conversations with leading AI safety researchers. Serves as an accessible archive of the current state of technical alignment research.
Alignment Research Center
Non-profit developing practical, scalable alignment techniques that could be adopted by AI labs today while remaining robust for future systems. Contracted by OpenAI to conduct independent safety evaluations of GPT-4.
Center for Human-Compatible AI
UC Berkeley research center developing the conceptual and technical foundations for provably beneficial AI systems that are uncertain about human preferences and defer to them.
FAR AI
Research non-profit advancing the science and engineering of AI alignment by incubating and accelerating alignment projects within a collaborative research community.
Centre for Long-Term Resilience
UK think tank working with government to improve societal resilience to extreme risks, with core focus areas in AI risk, biological risk, and government risk management capacity.
Berkeley Existential Risk Initiative
Public charity accelerating university-based existential risk research by providing operational support, removing administrative friction so researchers can focus on their work. Collaborates with groups like CHAI at UC Berkeley and CSER at Cambridge.
Future of Life Institute
Non-profit working to reduce existential risks from transformative technologies, particularly AI. Organizes high-profile policy initiatives and public communications, including the influential "Pause Giant AI Experiments" open letter.
Center for AI Safety
Research non-profit working to reduce societal-scale risks from AI through technical research, standards development, and public education. Known for organizing the influential statement on AI risk signed by leading researchers and industry executives.
Robert Miles
Independent educator creating accessible YouTube videos that explain complex AI safety concepts to general audiences. His work serves as a common entry point into the field and helps build public understanding.
PauseAI
Grassroots advocacy organization calling for an international treaty to pause frontier AI development until safety can be guaranteed. Organizes protests, lobbying events, and public education campaigns.
Long-Term Future Fund
Grantmaking fund managed by EA Funds that supports projects improving humanity's long-term prospects, with a significant share of grants directed toward AI existential risk mitigation.