Future of Life Institute staff reviewed this page prior to publication.
The Open Philanthropy Project recommended a grant of $100,000 to the Future of Life Institute (FLI) for general support.
FLI is a research and outreach organization that works to mitigate global catastrophic risks (GCRs). We have previously collaborated with FLI on issues related to potential risks from advanced artificial intelligence.
FLI is now seeking general operating support for the coming year. We have been impressed with FLI’s past work and are glad to support future efforts, especially since they may generate more opportunities for good work in this area. We do have some reservations about FLI’s current plans, discussed below.
Rationale for the grant
Background
The Open Philanthropy Project has identified global catastrophic risks (GCRs) as one of the categories that we plan to prioritize in our grantmaking.
We have previously worked with the Future of Life Institute (FLI), a research and outreach organization that works to mitigate GCRs, on potential risks from advanced artificial intelligence (AI), one of our focus areas in this category. Last year, we worked with FLI to evaluate responses to a Request for Proposals (RFP) it issued, and made a grant of $1,186,000 to increase the number of high-quality proposals FLI was able to fund.
Grant details
The major activities FLI has planned for 2016 (for which it also plans to do additional fundraising) include:
- News operation: FLI recently hired a staffer dedicated to curating and writing news content related to GCRs for the news section it added to its website. FLI estimates it will require approximately $150,000 to support its two-person communications staff for one year.
- Nuclear weapons campaign: FLI plans to launch and run a campaign to encourage individuals and organizations (e.g. universities and municipalities) not to invest in the production of new nuclear weapons systems. This campaign is estimated to cost approximately $100,000 over the next year, including about $50,000 for financial research to identify companies investing in nuclear weapons, $45,000 for several part-time on-site university student organizers, and $5,000 for incidental expenses.
- AI safety conference: In 2015, FLI organized a conference on AI safety, held in Puerto Rico. It plans to host another in 2016, which it estimates will cost at least $150,000. FLI told us that it expects to be able to raise the required funding for this conference from other sources.
- AI conference travel: FLI will support travel expenses related to any symposia, panels, and/or discussions it helps organize on AI safety, and will support travel by FLI-affiliated researchers to several major machine learning conferences this year. FLI plans to spend approximately $20,000 on this.
The case for the grant
In organizing its 2015 AI safety conference (which we attended), FLI demonstrated a combination of network, ability to execute, and values that impressed us. We felt that the conference was well-organized, attracted the attention of high-profile individuals who had not previously demonstrated an interest in AI safety, and seemed to lead many of those individuals to take the issue more seriously. An open letter issued following the conference, calling for “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial”, was signed by a number of prominent figures in machine learning and the broader scientific community.1 The conference also allowed FLI to mobilize private funding2, which it used to launch an RFP that resulted in an unexpectedly high number of strong proposals.
Although we have some reservations about FLI’s plans for this year, we believe that they have the potential to be successful, which could create opportunity for further good work related to GCRs. Details follow.
News operations
We feel that media discussion of potential risks from advanced artificial intelligence is often unclear and poorly informed. Having a news operation dedicated to improving the quality of the coverage on this issue could be helpful. FLI plans to have its high-profile advisory board available for comment, and we can imagine a number of scenarios in which this may improve the quality of discussion on these issues.
AI safety
We believe FLI’s plans related to AI conferences (both its own and others’) are worthwhile efforts to help AI researchers continue to have more informed conversations about the future of AI and possible risks. This looks to us like a positive development for the field.
Nuclear weapons campaign
We are most uncertain about FLI’s proposed nuclear weapons campaign, for reasons stated in a later section, but we do see a case for it. We believe that nuclear weapons advocacy is a neglected space in need of new voices and that FLI is well positioned to work on a university divestment campaign. The organization has strong ties to many prominent academics, as well as existing relationships with campus-based effective altruism groups who might welcome the opportunity to do concrete work in this space.
If this type of advocacy can achieve small wins on nuclear weapons, we think it might better position FLI to do larger-scale, higher-impact advocacy work on this issue in the future.
Regardless of its success, this work will help us better understand whether FLI can successfully execute on issues related to nuclear weapons policy. If this campaign goes well, we may feel more comfortable supporting FLI on more ambitious efforts in this space going forward.
Concerns about the grant
Although we have been impressed with FLI’s capacity to organize and execute, we have some concern that its capacity to effect change may be weaker outside of AI-related issues. FLI was able to bring attention and credibility to potential risks from AI, but it is not clear that attention and credibility are what is most needed on other topics.
We have some reservations about FLI’s planned news operations, because the public content FLI has put out so far does not strike us as highly likely to improve press coverage. Our impression could be wrong, however, and FLI’s work in this category is still at an early stage.
We also have reservations about FLI’s approach to its nuclear weapons campaign, which we believe is unlikely to lead to significant change on this issue. The theory of change implied by FLI’s plans for this campaign seems to be that increasing the stigma attached to nuclear weapons will push decision-makers toward policies that call for fewer nuclear weapons. We would guess that there is only a weak link between success in the divestment campaign and broader attitudes toward nuclear weapons policy, an assessment informed by the work we have done to understand the space of nuclear weapons policy.
In addition, we are somewhat concerned that if FLI does achieve success on this issue, it may find it challenging to recruit the staff needed to transform its efforts into a broader and sustained campaign.
Room for more funding
In the absence of our funding, we believe it is fairly likely (but still uncertain) that FLI would be able to raise most or all of the funds it requires from other donors. We expect that these donors would largely have similar values and priorities to us (e.g. donors from the effective altruism community), and are therefore not overly concerned by this possibility. With this grant, we expect FLI to be highly likely to raise the funds it requires.
Plans for learning and follow-up
Goals for the grant
This grant will support an organization we believe has done good work in the past and allow it to expand its work. We hope that the grant will help us learn more about FLI’s capacity to do good work beyond potential risks from advanced artificial intelligence.
Key questions for follow-up
We expect to have a conversation with FLI staff every 3-6 months for the next 12 months, after which we plan to consider renewal. Although we recognize that some of FLI’s planned activities may not have come to fruition within 12 months, we believe that we will be able to get a good sense of how they have gone so far. Questions we might seek to answer include:
- Is the coverage of GCRs on the news page (both original and curated) of high quality?
- Is FLI a recognized source of information on GCRs?
- Has the nuclear weapons campaign received media coverage?
- Have any universities or other investors demonstrated increased interest in the issue of nuclear weapons, or shown any indication that they are considering divestment?
- Has the presence of AI researchers affiliated with FLI at major machine learning conferences had an impact on the nature of the discussions at these conferences?
Our process
Following our collaboration last year, we kept in touch with FLI regarding its funding situation and plans for future activities.
Sources
| Document | Source |
| --- | --- |
| FLI Open Letter | Source (archive) |
| FLI press release, Jan 15 2015 | Source (archive) |