The Open Philanthropy Project recommended a grant of $3,750,000 over three years to the Machine Intelligence Research Institute (MIRI) for general support. MIRI plans to use these funds for ongoing research and activities related to reducing potential risks from advanced artificial intelligence, one of our focus areas.
This grant renews and increases our $500,000 grant recommendation to MIRI in 2016, which we made despite strong reservations about MIRI's research agenda, detailed here. In short, we saw value in MIRI's work but declined to recommend a larger grant at that time because we were unconvinced of the value of MIRI's research approach to AI safety relative to other research directions, and because we had difficulty evaluating the technical quality of MIRI's research output. We also felt that a large grant might signal a stronger endorsement from us than was warranted, particularly as we had not yet made many grants in this area.
Our decision to renew and increase MIRI’s funding sooner than expected was largely the result of the following:
- We received a very positive review of MIRI's work on "logical induction" by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of our close advisors, and (iii) is generally regarded as outstanding by the ML community. As mentioned above, we previously had difficulty evaluating the technical quality of MIRI's research, and we could find no one meeting criteria (i)–(iii) to a comparable extent who was comparably excited about MIRI's technical work. While we would not generally offer a grant of this size to a lab on the basis of such a review alone, we consider it a significant update in the context of the original case for the grant (especially MIRI's thoughtfulness on this set of issues, value alignment with us, distinctive perspectives, and history of work in this area). The balance of our technical advisors' opinions and arguments still leaves us skeptical of the value of MIRI's research, but the case for the statement "MIRI's research has a nontrivial chance of turning out to be extremely valuable (when taking into account how different it is from other research on AI safety)" now appears much more robust than it did before we received this review.
- In the time since our initial grant to MIRI, we have recommended several more grants within this focus area, and are therefore less concerned that a larger grant would signal an outsized endorsement of MIRI's approach.
We are now aiming to support about half of MIRI's annual budget. MIRI expects to use these funds primarily for the salaries of its researchers, research engineers, and support staff.