Published: July 2017
MILA staff reviewed this page prior to publication.
The Open Philanthropy Project recommended a grant of $2.4 million over four years to the Montreal Institute for Learning Algorithms (MILA) to support technical research on potential risks from advanced artificial intelligence (AI).
$1.6 million of this grant will support Professor Yoshua Bengio and his co-investigators at the Université de Montréal, and $800,000 will support Professors Joelle Pineau and Doina Precup at McGill University. We see Professor Bengio’s research group as one of the world’s preeminent deep learning labs and are excited to provide support for it to undertake AI safety research.
Background
This grant falls within our work on potential risks from advanced AI, one of our focus areas within global catastrophic risks. Currently, two of our primary aims in this area are (1) to increase the amount of high-quality technical AI safety research being done, and (2) to increase the number of people who are deeply knowledgeable about both machine learning and potential risks from advanced AI. We believe we can pursue these aims both directly (by supporting this type of work) and indirectly (by supporting programs that can attract talented students to this area, provide positive examples of AI safety work that draws on machine learning expertise, and provide leadership for the broader machine learning community).
The organization
The Montreal Institute for Learning Algorithms (MILA) is a machine learning research group based at the Université de Montréal, led by Professors Yoshua Bengio, Pascal Vincent, Christopher Pal, Aaron Courville, Laurent Charlin, Simon Lacoste-Julien, and Jian Tang. We see MILA as one of the very top deep learning labs in academia, and among the top machine learning labs. Professors Joelle Pineau, Doina Precup, Hugo Larochelle, Alain Tapp, and Jackie Cheung are associate members of MILA.
About the grant
Proposed activities
Professor Bengio has presented several potential AI safety research directions to us, along with some initial ideas about how he might work on them. However, we intend for Professor Bengio to have the flexibility to use our grant for whichever AI safety research projects may seem most promising in the future, rather than be restricted to projects that he has already proposed. In particular, we think it will be valuable for Professor Bengio’s students to be free to explore new ideas that they have and to talk to others in the AI safety community (such as Open Philanthropy’s technical advisors, or other grantees of ours) about which kinds of safety work may be most effective. Given that AI safety research is a relatively new area, we think it is particularly valuable to keep potential research options flexible.
Based on discussion with our technical advisors, some portions of Professor Bengio’s currently proposed agenda appear to us quite likely to be valuable, while we have reservations about some others (see “Risks and reservations” below). However, we expect that we would consider this grant worthwhile even if Professor Bengio were to use it to pursue exactly the projects that he has already proposed.
Case for the grant
Among potential grantees in the field, we believe that Professor Bengio is one of the best positioned to help build the talent pipeline in AI safety research. Our understanding, based on conversations with our technical advisors and our general impressions from the field, is that many of the most talented machine learning researchers spend some time in Professor Bengio’s lab before joining other universities or industry groups. This is an important contributing factor to our expectations for the impact of this grant, both because it increases our confidence in the quality of the research that this grant will support and because of the potential benefits for pipeline building.
In our conversations with Professor Bengio, we’ve found significant overlap between his perspective on AI safety and ours, and he has expressed excitement about being part of our overall funding activities in this area. We think that Professor Bengio is likely to serve as a valuable member of the AI safety research community, and that he will encourage his lab to be involved in that community as well. We believe that members of his lab would likely be valuable participants at future workshops on AI safety.
Budget and room for more funding
Our impression is that MILA is already fairly well-funded, and that its ability to productively use additional funding is somewhat limited. Professor Bengio told us that he could productively use an additional $400,000 per year for AI safety research; we have decided to grant this full amount for four years ($1.6 million total). We have also granted $200,000 per year ($800,000 total over four years) to Professors Pineau and Precup, two of Professor Bengio’s co-investigators at MILA who are also interested in working on this agenda; this is the amount of funding they estimated they would be able to use productively.
Risks and reservations
Some of our technical advisors expressed reservations about Professor Bengio’s proposed research plan and offered significant feedback on it. We are not especially concerned about this; because AI safety is a relatively new field, we think it is reasonable to expect disagreements among researchers about which research directions are most promising. We plan to continue discussing the research agenda with Professor Bengio and his team over the next few years, so that we can reach a greater degree of mutual understanding by the time we decide whether to renew our support in 2020.
Follow-up expectations
We expect to have a conversation with Professor Bengio six months after the start of the grant, and annually after that, to discuss his projects and results, with public notes if the conversation warrants it. In the first few months of the grant, we plan to visit Montreal for several days to meet Professor Bengio’s co-investigators and discuss the project with them.
At the conclusion of this grant in 2020, we will decide whether to renew our support. If Professor Bengio’s research is going well (based on our technical advisors’ assessment and the impressions of others in the field), and if we have achieved a better mutual understanding with Professor Bengio about how his research is likely to be valuable, we will likely provide renewed funding. If Professor Bengio is using half or more of our funding to pursue research directions that we do not find particularly promising, we will likely choose not to renew.
Our process
We spoke with Professor Bengio and several of his students during our recent outreach to machine learning researchers and formed a positive impression of him and his work. Our technical advisors spoke highly of Professor Bengio’s capabilities, reputation, and goals.