Support AI Safety Work

Vetted researchers and organizations working to ensure AI benefits humanity through technical research, policy advocacy, and public education.

Want to suggest an addition or update? Contact us.

Policy & Governance

Centre for the Governance of AI

Independent non-profit building a global research community dedicated to helping humanity navigate the transition to advanced AI. Advises policymakers and supports informed decision-making by private and public institutions.

Technical Research

EleutherAI

Decentralized collective of volunteer researchers democratizing AI safety research through open-source models and datasets. Their philosophy holds that transparency and broad access enable better safety research than closed development.

Technical Research

Machine Intelligence Research Institute

Pioneering non-profit focused on the foundational mathematical challenges of aligning superintelligent AI. After two decades of research, MIRI now advocates for a global moratorium on advanced AI development to allow safety research to catch up.

Public Education

Doom Debates

Educational initiative by Liron Shapira fostering critical discussion of existential risks from advanced AI through public debates and analysis.

1 donation
$10 donated
Public Education

AXRP - AI X-risk Research Podcast

In-depth podcast by Daniel Filan featuring long-form conversations with leading AI safety researchers. Serves as an accessible archive of the current state of technical alignment research.

Technical Research

Alignment Research Center

Non-profit developing practical, scalable alignment techniques that could be adopted by AI labs today while remaining robust for future systems. Contracted by OpenAI to conduct independent safety evaluations of GPT-4.

Technical Research

Center for Human-Compatible AI

UC Berkeley research center developing the conceptual and technical foundations for provably beneficial AI systems that are uncertain about human preferences and defer to them.

1 donation
$50 donated
Technical Research

FAR AI

Research non-profit advancing the science and engineering of AI alignment by incubating and accelerating research projects within a collaborative community.

Policy & Governance

Centre for Long-Term Resilience

UK think tank working with government to improve societal resilience to extreme risks, with core focus areas in AI risk, biological risk, and government risk management capacity.

Field Building

Berkeley Existential Risk Initiative

Public charity accelerating university-based existential risk research by providing operational support, removing administrative friction so researchers can focus on their work. Collaborates with groups like CHAI at UC Berkeley and CSER at Cambridge.

Policy & Advocacy

Future of Life Institute

Non-profit working to reduce existential risks from transformative technologies, particularly AI. Organizes high-profile policy initiatives and public communications, including the influential "Pause Giant AI Experiments" open letter.

Technical Research

Center for AI Safety

Research non-profit working to reduce societal-scale risks from AI through technical research, standards development, and public education. Known for organizing the influential statement on AI risk signed by leading researchers and industry leaders.

Public Education

Robert Miles

Independent educator creating accessible YouTube videos explaining complex AI safety concepts to general audiences. His work is crucial for bringing new people into the field and building public understanding.

Policy & Advocacy

PauseAI

Grassroots advocacy organization calling for an international treaty to pause frontier AI development until safety can be guaranteed. Organizes protests, lobbying events, and public education campaigns.

Grantmaking

Long-Term Future Fund

EA Funds-managed fund making grants to improve humanity's long-term prospects, with significant funding directed towards AI existential risk mitigation.
