Open Philanthropy recommended a grant of $189,350 to the Supervised Program for Alignment Research to support its program matching aspiring researchers with mentors for AI safety research projects.
This falls within our focus area of Global Catastrophic Risks Capacity Building.