Published: March 2017
We decided to write about this grant largely because of its size and unusual structure, which initiates a partnership between Open Philanthropy and OpenAI. This page is a summary of the reasoning behind our decision to recommend the grant; it was reviewed but not written by the grant investigator.
OpenAI staff reviewed this page prior to publication.
The Open Philanthropy Project recommended a grant of $30 million ($10 million per year for 3 years) in general support to OpenAI. This grant initiates a partnership between the Open Philanthropy Project and OpenAI, in which Holden Karnofsky (Open Philanthropy’s Executive Director, “Holden” throughout this page) will join OpenAI’s Board of Directors and, jointly with one other Board member, oversee OpenAI’s safety and governance work.
This grant is not an investment. Open Philanthropy does not have equity in OpenAI.
We expect the primary benefits of this grant to stem from our partnership with OpenAI, rather than simply from contributing funding toward OpenAI’s work. While we would also expect general support for OpenAI to be beneficial on its own, the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to play a role in shaping OpenAI’s approach to safety and governance issues.
1. Background
This grant falls within our work on potential risks from advanced artificial intelligence (AI), one of our focus areas within global catastrophic risks. We have written in detail about the case we see for funding work related to AI on our blog. As we wrote in that post, we see AI and machine learning research as being on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science. Broadly, we expect the results of continuing progress in AI research to be positive for society at large, but we see some risks (both from unintended consequences of AI use, and from deliberate misuse), and believe that we – as a philanthropic organization, separate from academia, industry, and government – may be well-placed to support work to reduce those risks.
1.1 The organization
OpenAI is a non-profit, founded in 2015. Its mission is to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible. Currently, most of its work is on advancing the state of the art in technical AI research.
OpenAI’s leadership appears to be highly value-aligned with us; this includes their focus on the societal impacts of AI and their concern about potential risks from advanced AI.
When OpenAI launched, it characterized the nature of the risks – and the most appropriate strategies for reducing them – in a way that we disagreed with. In particular, it emphasized the importance of distributing AI broadly;1 our current view is that this may turn out to be a promising strategy for reducing potential risks, but that the opposite may also turn out to be true (for example, if it ends up being important for institutions to keep some major breakthroughs secure to prevent misuse and/or to prevent accidents). OpenAI has since put out content consistent with the latter view,2 and we are no longer aware of any clear disagreements. However, it does seem likely that our starting assumptions and biases on this topic differ from those of OpenAI’s leadership, and we won’t be surprised if there are disagreements in the future.
Given our shared values, different starting assumptions and biases, and mutual uncertainty, we expect partnering with OpenAI to provide a strong opportunity for productive discussion about the best paths to reducing AI risk. We also think the fact that OpenAI and Open Philanthropy are both located in San Francisco, and that some informal relationships already exist between Open Philanthropy and OpenAI personnel, will contribute to productive communication between our organizations.
2. Case for the grant
Given our prioritization of reducing potential risks from advanced AI, we are interested in opportunities to become closely involved with any of the small number of existing organizations in “industry” (i.e. not in academia or government) that (a) are explicitly working toward the development of transformative AI; (b) are advancing the state of the art in AI research; and (c) employ top AI research talent. We think it is likely that such organizations will be essential for:
- Creating an environment in which people can effectively do technical research on “AI safety” (i.e. work toward reducing potential risks from advanced AI). This is especially the case if groups in industry become more important, relative to groups in academia, as AI progress continues.
- Raising the general profile and legitimacy of work to reduce AI risk.
- Helping to shape discussions about policy and strategic considerations if and when it appears that transformative AI could be developed soon.
We see OpenAI and DeepMind as the two organizations currently best fitting the above description (based in large part on the views of our technical advisors). (We think it is possible that Vicarious may also fit this description, but do not feel we have enough information to be confident; in particular, Vicarious makes public substantially less information about its research and its progress than either OpenAI or DeepMind.)
We believe that close involvement with such an organization is likely one of the most effective avenues available to us for advancing our goal of increasing the amount of work done on reducing potential risks from advanced AI, both within the organization and outside it (the latter via increased legitimacy for safety-focused work and improved career options for leading thinkers interested in pursuing AI research that’s focused on safety). We also expect such a partnership to:
- Improve our understanding of the field of AI research, and give us a better sense of which interventions are most likely to be effective for reducing risks.
- Improve our general ability to achieve our goals regarding technical AI safety research, particularly by helping us form relationships with top researchers in the field.
- Better position us to promote the ideas and goals we prioritize within the field of AI.
As stated in the previous section, we think a partnership with OpenAI is particularly appealing due to our shared values, different starting assumptions and biases, and potential for productive communication.
As our views and OpenAI’s views evolve (regarding the most promising paths to reducing potential risks from advanced AI), it is likely that both we and OpenAI will be putting out further public content to share our thinking.
2.1 Details on Open Philanthropy’s role
Holden plans to visit OpenAI roughly once a week for all-hands and other meetings. Preliminarily, he expects to generally be an advocate for:
- Encouraging work on alignment research, to the extent that there is promising work to be done.
- Intensive analysis of potential future policy challenges with respect to AI (we expect to publish more on this topic in the future).
- Focusing on high-quality basic research.
We also believe we are well-positioned to help improve connections between OpenAI and groups in the risk-focused AI community, such as the Future of Humanity Institute (FHI) (which is also an Open Philanthropy grantee).
2.2 A note on why this grant is larger than others we’ve recommended in this focus area
We expect that some readers will be surprised by the size of this grant, and wonder why it is so much larger than grants we’ve recommended to other groups working on potential risks from advanced AI. We think it would be prohibitively difficult to communicate our full thinking on this matter, but a few notes:
- As we’ve written previously, we consider this cause to be an outstanding one, and we are generally willing to invest a lot for a relatively small chance of very large impact.
- We think this cause will be disproportionately important if it turns out that transformative AI is developed sooner than generally expected (within the next 20 years, and perhaps much sooner). And if that happens, we think it’s fairly likely that OpenAI will be an extraordinarily important organization, with far more influence over how things play out than organizations that focus exclusively on risk reduction and do not advance the state of the art.
- We generally feel it is very hard to make predictions about, and plans for, 10+ years from now. We think that working closely with OpenAI will put us in a much better position to understand and react to a wide variety of potential situations, and that this is much more likely to result in the kind of impact we’re looking for than supporting any particular line of research (or other intervention targeting a specific scenario and specific risk) today.
- In fact, much of our other work in this cause aims primarily to help build a general field and culture that can react to a wide variety of potential future situations, and prioritizes this goal above supporting any particular line of research. We think that OpenAI’s importance to the field of AI as a whole makes this partnership an excellent opportunity for that goal as well.
- We are often hesitant to provide too high a proportion of a given organization’s funding, for a number of reasons.3 OpenAI has significant sources of revenue other than Open Philanthropy, and we are comfortable with the overall proportion of funding we are providing. (This is less true of other grantees in this space to date.)
3. Plans for learning and follow-up
Key questions for follow-up will include:
- Has our partnership resulted in concrete differences in OpenAI’s activities and/or changes in our thinking?
- Does OpenAI still seem to be one of the few leading AI research groups, in terms of talent and likelihood of making progress toward transformative AI?
- How well-aligned are we with OpenAI’s leadership on key issues relating to potential risks from advanced AI?
We plan to do informal reviews each year. We currently plan to do a more in-depth review to consider further renewal at the end of this three-year term. The key questions for renewal will be whether OpenAI appears to be a significant positive force for reducing potential risks from advanced AI, and/or whether our involvement is tangibly helping OpenAI move towards becoming a positive force for AI safety.
4. Our process
OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors.
5. Relationship disclosures
OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.
6. Sources
DOCUMENT | SOURCE
---|---
OpenAI, “Introducing OpenAI” | Source (archive)
OpenAI, “Mission” | Source (archive)
Slate Star Codex, “Should AI Be Open?” | Source (archive)