UC Berkeley staff reviewed this page prior to publication.
The Open Philanthropy Project recommended a grant of $5,555,550 over five years to UC Berkeley to support the launch of a Center for Human-Compatible Artificial Intelligence (AI), led by Professor Stuart Russell.
We believe the creation of an academic center focused on AI safety could yield significant benefits: helping to establish AI safety research as a field, and making it easier for researchers to learn about and work on the topic.
1. Background
This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. We wrote more about this cause on our blog.
Stuart Russell is Professor of Computer Science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley. He is the co-author of Artificial Intelligence: A Modern Approach, which we understand to be one of the most widely-used textbooks on AI.
2. About the grant
This grant will support the establishment of the Center for Human-Compatible AI at UC Berkeley, led by Professor Russell with the following co-Principal Investigators and collaborators:
- Pieter Abbeel, Associate Professor of Computer Science, UC Berkeley
- Anca Dragan, Assistant Professor of Computer Science, UC Berkeley
- Tom Griffiths, Professor of Psychology and Cognitive Science, UC Berkeley
- Bart Selman, Professor of Computer Science, Cornell University
- Joseph Halpern, Professor of Computer Science, Cornell University
- Michael Wellman, Professor of Computer Science, University of Michigan
- Satinder Singh Baveja, Professor of Computer Science, University of Michigan
Research topics that the Center may focus on include:
- Value alignment through, e.g., inverse reinforcement learning from multiple sources, such as text and video (see the illustrative sketch after this list).
- Value functions defined by partially observable and partially defined terms (e.g. “health,” “death”).
- The structure of human value systems, and the implications of computational limitations and human inconsistency.
- Conceptual questions, including the properties of ideal value systems, tradeoffs among humans, and the long-term stability of values.
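To make the inverse reinforcement learning item above more concrete, here is a minimal toy sketch of reward inference. This is our illustration, not the Center's code or research agenda: it assumes a noisily rational demonstrator whose choices are softmax in a hidden linear reward, and recovers the reward weights by maximum likelihood. All features, weights, and hyperparameters are invented for the example.

```python
# Toy reward inference, a simplified flavor of inverse reinforcement
# learning (IRL). We watch an agent pick among actions with known
# feature vectors, assume it is noisily rational (softmax in reward),
# and recover the hidden reward weights by maximum likelihood.
# Everything here is illustrative, not the Center's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Each action is summarized by a feature vector; reward = features @ theta.
features = rng.normal(size=(5, 3))        # 5 actions, 3 reward features
true_theta = np.array([1.0, -2.0, 0.5])   # hidden "values" to recover

def action_probs(theta):
    """Boltzmann-rational choice distribution over the actions."""
    logits = features @ theta
    logits = logits - logits.max()         # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum()

# Simulate demonstrations from the true (hidden) reward.
demos = rng.choice(len(features), size=1000, p=action_probs(true_theta))
counts = np.bincount(demos, minlength=len(features))

# Maximum-likelihood inference of theta by gradient ascent on the
# (concave) log-likelihood: gradient = observed minus expected features.
theta = np.zeros(3)
for _ in range(2000):
    expected = action_probs(theta) @ features
    observed = counts @ features / len(demos)
    theta += 0.05 * (observed - expected)

print("true weights:    ", true_theta)
print("inferred weights:", np.round(theta, 2))
```

Real value-alignment research deals with sequential settings, multiple data sources (such as the text and video mentioned above), and misspecified human models; this sketch shows only the basic inference pattern of inferring the values that best explain observed behavior.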
We see the creation of this center as an important component of our efforts to help build the field of AI safety for several reasons:
- We expect the Center's existence to make it much easier for researchers interested in AI safety to discuss and learn about the topic, and potentially to consider focusing their careers on it. Ideally, this will lead to more researchers working on topics related to AI safety than otherwise would have.
- The Center may allow researchers already focused on AI safety to dedicate more of their time to the topic and produce higher-quality research.
- We hope that the existence of a well-funded academic center at a major university will solidify the place of this work as part of the larger fields of machine learning and artificial intelligence.
Based on our in-progress investigation of field-building, our impression is that funding the creation of new academic centers is a very common part of successful philanthropic efforts to build new fields.
We also believe that supporting Professor Russell’s work in general is likely to be beneficial. He appears to us to be more focused on reducing potential risks of advanced artificial intelligence (particularly the specific risks we are most focused on) than any comparably senior, mainstream academic of whom we are aware. We also see him as an effective communicator with a good reputation throughout the field.
2.1 Budget and room for more funding
Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year.
Professor Russell currently has a few other sources of funding to support his own research and that of his students (all amounts are approximate):
- $340,000 from the Future of Life Institute
- $280,000 from the Defense Advanced Research Projects Agency
- $1,500,000 from the Leverhulme Trust (spread over 10 years)
Our understanding is that most of this funding is already accounted for or soon will be, and that Professor Russell would not plan to announce a new Center of this kind without substantial additional funding. Professor Russell has also applied for a National Science Foundation Expedition grant, which would be roughly $7 million to $10 million over ten years. However, because we do not expect that decision to be made until at least a few months after the final deadline for proposals in January 2017, and because we understand those grants to be very competitive, we decided to set our level of funding without waiting for that announcement.
We are not aware of other potential funders who would consider providing substantial funding to the Center in the near future, and we believe that having long-term support in place is likely to make it easier for Professor Russell to recruit for the Center.
2.2 Internal forecasts
We’re experimenting with recording explicit numerical forecasts of events related to our decision-making (especially grantmaking). The idea is to surface the implicit predictions that play a role in our decisions, and to make it possible to look back on how well-calibrated and accurate those predictions were. For this grant, we are recording the following forecast:
- 50% chance that, two years from now, the Center will be spending at least $2 million a year, and will be considered by one or more of our relevant technical advisors to have a reasonably good reputation in the field.
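As a hypothetical illustration of how recorded forecasts like this one can be checked after the fact, here is a minimal Python sketch that scores a set of probability forecasts against eventual outcomes using the Brier score, one standard measure of forecast accuracy. The numbers below are made up; only the 50% figure above comes from this page.

```python
# Hypothetical illustration of scoring recorded forecasts once their
# outcomes are known. Forecasts and outcomes here are invented.
forecasts = [0.5, 0.7, 0.2]   # stated probabilities (illustrative)
outcomes  = [1, 1, 0]         # 1 = event happened, 0 = it did not

# Brier score: mean squared error of the probabilities. Lower is
# better; constant 50% guesses would score 0.25.
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")
```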
3. Our process
We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms.