Update, 10/16/24: We are thrilled by how many strong applications this RFP has received. Unfortunately, the number of applications has overwhelmed our staff capacity to review them. Rather than close the RFP, we’ve decided to raise the bar for which EOIs and full proposals we invite forward for further investigation. As a result, some EOIs and proposals that we would have invited forward previously will now be rejected due to capacity constraints.
AI has enormous beneficial potential if it is governed well. However, in line with a growing contingent of AI (and other) experts from academia, industry, government, and civil society, we also think that AI systems could soon (e.g. in the next 15 years) cause catastrophic harm. For example, this could happen if malicious human actors deliberately misuse advanced AI systems, or if we lose control of future powerful systems designed to take autonomous actions.[1]
To improve the odds that humanity successfully navigates these risks, we are soliciting short expressions of interest (EOIs) for funding for work across six subject areas, described below.
Strong applications might be funded by Good Ventures (Open Philanthropy’s partner organization), or by any of >20 (and growing) other philanthropists who have told us they are concerned about these risks and are interested in hearing about grant opportunities we recommend.[2] (You can indicate in your application whether we have permission to share your materials with other potential funders.)
Click here to submit an EOI
As this is a new initiative, we are uncertain about the volume of interest we will receive. Our goal is to keep this form open indefinitely; however, we may need to temporarily pause accepting EOIs if we lack the staff capacity to properly evaluate them. We will post any updates or changes to the application process on this page.
Anyone is eligible to apply, including those working in academia, nonprofits, industry, or independently.[3] We will evaluate EOIs on a rolling basis. See below for more details.
If you have any questions, please email us. If you have any feedback about this page or program, please let us know (anonymously, if you want) via this short feedback form.
1. Eligible proposal subject areas
We are primarily seeking EOIs in the following subject areas, but will consider exceptional proposals outside of these areas, as long as they are relevant to mitigating catastrophic risks from AI:
- Technical AI governance: Developing and vetting technical mechanisms that improve the efficacy or feasibility of AI governance interventions, or answering technical questions that can inform governance decisions. Examples include compute governance, model evaluations, technical safety and security standards for AI developers, cybersecurity for model weights, and privacy-preserving transparency mechanisms.
- Policy development: Developing and vetting government policy proposals in enough detail that they can be debated and implemented by policymakers. Examples of policies that seem like they might be valuable (but which typically need more development and debate) include some of those mentioned here, here, and here.
- Frontier company policy: Developing and vetting policies and practices that frontier AI companies could voluntarily adopt or be required to implement to reduce risks, such as model evaluations, model scaling “red lines” and “if-then commitments,” incident reporting protocols, and third-party audits. See e.g. here, here, and here.
- International AI governance: Developing and vetting paths to effective, broad, and multilateral AI governance, and working to improve coordination and cooperation among key state actors. See e.g. here.
- Law: Developing and vetting legal frameworks for AI governance, exploring relevant legal issues such as liability and antitrust, identifying concrete legal tools for implementing high-level AI governance solutions, encouraging sound legal drafting of impactful AI policies, and understanding the legal aspects of various AI policy proposals. See e.g. here.
- Strategic analysis and threat modeling: Improving society’s understanding of the strategic landscape around transformative AI through research into potential threat models, takeoff speeds and timelines, reference class forecasting, and other approaches to gain clarity on key strategic considerations. See e.g. here and here.
Please keep in mind that while a wide range of projects could hypothetically fit into each of these subject areas and might improve outcomes from increasingly wide societal adoption of AI systems, we are focused on work that could help characterize or mitigate potential catastrophic risks from AI of the sort described above. Familiarity with the broad perspective that underpins our grantmaking in AI will likely improve applicants’ odds of success in this RFP; however, we expect that many strong applicants will hold views that differ significantly from those of most OP staff who work on AI (just as our staff disagree with each other on many points!). See the footnote for potential readings.[4]
1.1 Ineligible subject areas
This program is currently not seeking EOIs on the following subject areas:
- AI governance work that is unlikely to have major relevance to global catastrophic risks from AI. In other words, we’re seeking proposals to address AI impacts on the scale of “transformative AI” as defined here.
- EOIs that are a better fit for one of our other active AI-related RFPs, listed here.
2. Eligible proposal types
You may submit an Expression of Interest (EOI) for work in one or more of the eligible subject areas (see above), using any of the following proposal types: research projects, training or mentorship programs, general support, or other projects.
2.1 Research Projects
We’re interested in funding research that can:
- Improve the field’s strategic clarity on the above subject areas.
- Improve the scientific understanding of AI’s developmental trajectory, thereby improving our ability to prepare for the arrival of advanced AI systems.
  - But if your proposal seems to be a better fit for these more narrowly scoped research RFPs, please apply there instead.
- Flesh out the details, pros, and cons of various policy options and governance mechanisms (and strategies for their implementation).
- Help distill, review, and translate existing research in the above areas to make it more accessible and actionable.
This research can include technical projects, policy reports, legal analysis, and/or more traditional academic work.
2.1.1 Examples of work these types of grants could fund
- Compute Trends Across Three Eras of Machine Learning
- Model Evaluations for Extreme Risks
- Frontier AI Regulation: Managing Emerging Risks to Public Safety
- The Semiconductor Supply Chain: Assessing National Competitiveness
- Secure, Governable Chips
- Emergent Abilities in Large Language Models: An Explainer
- Tort Law and Frontier AI Governance
We think that the range of research projects we’d be interested in supporting is wide, so any list of only a few examples cannot possibly cover it. To that end, we’ve put together a longer — though still certainly incomplete — list of valuable research contributions here, many of which could be usefully extended or challenged by future work.
2.1.2 What are we looking for?
We expect that projects we choose to support will have many, but not necessarily all, of the following features:
- The project aims to address an open question that is relevant for reducing the likelihood of global catastrophe from transformative AI systems, related to the subject areas outlined above.
  - This might mean examining the strength of the evidence about, specifying and eliciting forecasts on, or collecting and presenting expert opinions on an important claim about the risks we might face or the solutions to them.
  - It could mean taking a current policy proposal and proposing an alternate implementation, laying out necessary prerequisites and how to achieve them, or arguing that implementation would be infeasible or unwise.
  - It could mean taking a popular high-level idea and fleshing out its details, or laying out the specifics of a research agenda that would be required to do so.
  - It could mean examining the connections between AI governance and other areas, for example via historical case studies of past efforts to manage dangerous or fast-moving technology, or via explicit comparison to modern high-reliability organizations.
- The scope of the project is realistic, and clearly outlined in the proposal.
  - Although the nature of research will vary substantially, this will often look like having a specific audience in mind for the final output, including potentially a target venue for publication.
  - It will also usually involve a clear picture of which questions the project hopes to develop answers to, a sense of whether the answers being sought are likely to be tentative or robust, and some examples of nearby questions that would be out of scope for the project to investigate.
- The project will be led by a person with a track record of producing high-quality research.
  - We’re open to applicants without such track records if the project is particularly well-scoped and promising. Especially in this case, it can be helpful to have named advisors on the project with relevant subject matter expertise and/or research experience.
- The project will be conducted at or in close collaboration with an established research institution.
  - While we’re open to funding independent researchers, we generally prefer applicants working with institutions such as think tanks or academic labs that can support project planning, execution, and dissemination.
- The research produced would potentially have important implications for strategy, frontier company policy, and/or government policy, even if that’s not the primary focus.
  - While the nature of the implications will often depend on the outcome of the research, we think that asking whether any of the most plausible outcomes would have concrete implications for strategy or policy can be a useful test of how valuable the work is likely to be.
2.2 Training or Mentorship Programs
We’re interested in funding activities that train talented individuals for impactful roles in the above areas, or that help individuals working in government, industry, and other relevant sectors understand the potential impacts from advanced AI and available risk mitigation options. Examples of activities include fellowship programs, mentorship programs, residence programs, workshops, educational courses, and other programs that help individuals focused on AI governance develop career capital. Programs in this category could be standalone offerings or housed within a larger organization.
2.2.1 Examples of work these types of grants could fund
- The Wilson Center’s Artificial Intelligence Lab
- Summer/Winter Research Fellowships at the Center for the Governance of AI
- Horizon Institute for Public Service Fellowship
- BlueDot Impact AI Governance Course
- AI Policy Fellowship at Institute for AI Policy and Strategy
- EU Tech Policy Fellowship
- Visiting Researcher Program at Constellation
- Governance-focused versions of FAR AI’s AI Alignment Workshop
2.2.2 What are we looking for?
We expect that projects we choose to support will have at least one of the following features:
- The project is proposed by an individual or organization with a solid track record of impactful work in relevant areas.
- The proposal makes a compelling case that the project can further the broader goal of mitigating global catastrophic risks from transformative AI.
2.3 General support for an existing organization
General support grants provide unrestricted funding to organizations,[5] allowing them to allocate funds according to their own priorities and needs. This type of grant is typically awarded to organizations with a proven track record of success in their specific area of focus.
Please note that the broader scope and increased complexity of assessing general support requests mean that we have a higher standard for awarding unrestricted funding compared to project-specific grants. In cases where we offer general support, we expect an organization’s future plans to closely align with its own previous successful work. If your organization’s future plans diverge significantly from your past successes, or if you are seeking funding for a specific project, we recommend applying for a project grant instead.
Previous grants in this category can be found here, though please note that they span several years and that our strategic picture and funding priorities can (and do) change over time.[6]
For more on the distinction between general support and specific projects, see this in the FAQ.
2.3.1 What are we looking for?
We expect that successful proposals will have many, but not necessarily all, of the following features:
- The organization applying has a solid track record of impactful work in areas directly relevant to their proposal.
- The organization demonstrates evidence of responsible financial management (e.g., a track record of maintaining up-to-date records and appropriate runway) and strategic planning capabilities (e.g., a history of successfully completing projects on time and within budget).
- The proposal provides a clear rationale for how funds will support the organization’s goals and contribute to our mission of reducing global catastrophic risks from transformative AI.
2.4 Other Projects
While the above focus areas and grant categories cover our current priorities, we think it is likely that there are impactful grant opportunities that do not fall neatly within these lines. Therefore, we are open to other EOIs, though we expect that EOIs outside the three proposal types above will be less likely to succeed by default.
While proposals for specific projects that would be housed in a new organization may be considered, we would generally discourage EOIs proposing the creation of new organizations with broad or multifaceted objectives. We recognize that many effective organizations pursue multiple complementary objectives. However, for new organizations, we generally believe it’s better to start with a focused mission and develop strong core competencies before gradually expanding into additional areas over time.
If you are an individual who is seeking funding for independent activities such as graduate study, unpaid internships, independent study, career transition and exploration periods, and other activities relevant to building career capital, you should apply to our career development and transition funding program instead.
2.4.1 Examples of work these types of grants could fund
- AI governance workshops, conferences, and other events
- AI cybersecurity bug bounties
- Online tools which allow people to understand the capabilities of current models, such as the AI Digest
- An AI incident tracking database
- A publication that explores the latest developments related to AI governance and safety
- A website that compiles and tracks a specified set of information relevant to AI governance, such as safety commitments made by various AI companies, or policy proposals and legislation
- A tool or platform that enables more effective monitoring of AI systems’ capabilities
2.4.2 What are we looking for?
We expect that projects we choose to support will have many, but not necessarily all, of the following features:
- The project is proposed by an individual or organization with a solid track record of impactful work in relevant areas.
- The proposal makes a compelling case that the project can further the broader goal of mitigating global catastrophic risks from transformative AI.
3. Criteria by which applications will be assessed
We evaluate applications holistically. While the following list is non-exhaustive, some key considerations in our investigations include:
- Theory of change: Does the project have a plausible strategy for reducing the likelihood and/or severity of global catastrophic risks from transformative AI? Are the proposed activities to be funded justified by this theory of change?
- Track record: Does the applicant have a history of successfully managing similar projects, or show evidence of promise for future success? We evaluate both past achievements and other indicators that suggest the applicant has the potential to effectively execute the project.
- Strategic judgment: Does the applicant demonstrate strong strategic judgment? We value grantees who have the ability to make well-thought-out decisions under uncertainty and/or time constraints, demonstrate reasoning transparency, and are scope sensitive.
- Project risks: Has the applicant identified significant potential failure modes and downside risks associated with the project? The choice of whether to list a risk should be sensitive to the scale and likelihood of different risks; rather than list off every possible risk, applicants should demonstrate that they can prioritize appropriately and accurately model tradeoffs. We don’t require that projects have no risk, but we do look for indications that applicants are aware of potential risks and can prioritize how to respond to them.
- Cost-effectiveness: Is the proposed budget reasonable and well-justified given the project’s goals and planned activities? This evaluation also takes into account any additional costs the project may impose on others, such as the time required from evaluators or collaborators.
- Scale: When funding a project or organization for the first time, we typically (but not always) prefer to begin with a grant of $200k-$2M/yr over 1-2 years, and then consider renewing the grant (possibly at a larger scale) after we have more evidence of project success. In some cases, if clear evidence of success is available early, we might offer a “top up” grant to expand the project before the initial grant period has expired. Projects smaller than $200k/yr should ideally be funded by other funders in the space,[7] but you are still welcome to apply here at smaller scales if your proposal has been rejected by other funders.
The EOI stage primarily focuses on assessing the first three criteria listed above, though these are also examined more thoroughly, along with the remaining criteria, during the full application review process.
4. Application process
4.1 Time suggestion
We suggest that you aim to spend no longer than one hour filling out the Expression of Interest (EOI) form, assuming you already have a plan you are excited about. Our application process deliberately starts with an EOI rather than a longer intake form to save time for both applicants and program staff.
4.2 Feedback
In most instances, we do not plan to provide feedback on EOIs. We are expecting a high volume of submissions and want to focus our limited capacity on evaluating the most promising proposals and ensuring applicants hear back from us as promptly as possible.
4.3 What’s next after the Expression of Interest (EOI)
Our aim is to respond to all applicants within three weeks of receiving their EOI, with most applicants hearing back within one week. In some cases we may need additional time to respond to an EOI, for example if it requires consultation with external advisors who have limited bandwidth, or if we receive an unexpected surge of EOIs at a time when we are low on capacity.
If your EOI is successful, you will then typically be asked to fill out a full proposal form. Assuming you have already figured out the details of what you would like to propose, we expect this to take 2-6 hours to complete, depending on the complexity and scale of your proposal.
Once we receive your full proposal, we’ll aim to respond within three weeks about whether we’ve decided to proceed with a grant investigation (though most applicants will hear back much sooner). If so, we will introduce you to the person who will be investigating the grant. At this stage, you’ll have the opportunity to clarify and evolve the proposal in dialogue with the grant investigator, and to develop a finalized budget. See this page for more details on the grantmaking process from this stage.
5. Other information
- There is neither a maximum nor a minimum number of applications we intend to fund; rather, we intend to fund (and/or recommend to other philanthropists) any applications that seem sufficiently promising to us to be above our general funding bar for this program.
- In some cases, we may ask outside advisors to help us review and evaluate applications. By submitting your application, you agree that we may share your application with our outside advisors for evaluation purposes.
- We encourage individuals with diverse backgrounds and experiences to apply, especially self-identified women and people of color.
- We plan to respond to all EOIs.
- We may make changes to this program from time to time. Any such changes will be reflected on this page.
6. Frequently Asked Questions
See our FAQ
6.1 What are the differences between “other projects” and general support?
“Other project” grants are awarded for specific, well-defined activities that align with our mission of reducing catastrophic risks from advanced AI systems, but fall outside the categories of research projects or training and mentorship programs. In contrast, general support grants provide unrestricted funding to established organizations, allowing them to allocate the funds according to their own priorities and needs.
We typically award general support to organizations with a proven track record of impactful work that closely aligns with our mission. Please note that we have a higher standard for awarding general support compared to project-specific grants, due to the broader scope and increased complexity of assessing these requests. If your proposal diverges significantly from your organization’s past work, we recommend applying for an “other project” grant instead.
6.2 Can my project cover multiple “subject areas”?
Yes. It is fine if your project covers multiple different subject areas, though please note that we expect to be less excited about subject areas other than the ones we list here.
6.3 Can I submit multiple proposal types in my EOI?
Yes, you are welcome to pick multiple proposal types when filling out your EOI. If you are then invited to submit a full proposal, you will need to choose one primary proposal type. Don’t worry too much about picking the perfect category — just select the one that seems like the best fit. The full proposal forms are similar across categories, and if we have any additional questions about your project, we will follow up with you directly.
6.4 Can I apply for funding to start a new organization?
Founding entirely new organizations with multiple objectives and work streams will typically be out of scope for this RFP. However, for a variety of reasons it sometimes makes sense to house a focused project within a new organization. In such cases, we would consider supporting the creation of a new entity, provided that its goals and activities are well-defined.
6.5 Is there a maximum grant size for each type of proposal?
We do not have predefined maximum grant sizes for the different proposal types. The amount of funding awarded will depend on the specifics of each project, such as its scope, duration, and potential impact. We evaluate each proposal individually to determine the appropriate level of support.
6.6 Can an individual or organization submit more than one EOI, either within the same subject area or across different areas?
Yes, though we encourage applicants to focus on developing their strongest ideas. If you have multiple distinct project ideas, you are welcome to submit separate EOIs for each one. Please keep in mind that each EOI should be self-contained and provide a clear, compelling case for its potential impact.
6.7 Are there any geographic restrictions on who can apply for funding? Can individuals or organizations from any country submit an application?
Individuals and organizations from any country are welcome to submit applications. However, grants to organizations not recognized as charitable entities under U.S. law will require additional due diligence. Applicants should also be aware that different jurisdictions may have specific rules and regulations that govern the receipt of foreign funds, which could necessitate additional compliance measures that may delay the granting process. Open Phil will only recommend funding when it is able to do so consistent with applicable law.
6.8 If you’ve already funded us before, should we still fill in the form?
If we’ve funded you before, we recommend reaching out directly to the grant investigator who managed your most recent funding request. They will be best equipped to advise you on the appropriate next steps, which may include submitting an EOI or following a different process.
Click here to submit an EOI
Footnotes
1. Though the nature and extent of these risks remain an area of ongoing scientific inquiry and debate, we think it is plausible that rapid progress in AI could lead to extreme risks, including (but not limited to) the permanent disempowerment or extinction of humanity by misaligned AI.
2. Each of these philanthropists gives far less to this issue each year than Open Philanthropy, and they are typically only interested in a narrow subset of opportunities we recommend. Also, we are typically not their only philanthropic advisors. Currently, these other philanthropists (in aggregate) do not act on most of the recommendations we make to them. Nevertheless, their total giving to opportunities in this space is substantial, and will probably be >$50M in 2024. This means that applications to this RFP are substantially more likely to be funded if applicants give us permission to share them with other funders, but also that this area is still very funding-constrained, and that some proposals we like will nevertheless not be funded. We continue to encourage other philanthropists to enter the field and/or scale up their existing giving in the area. For example: total philanthropy motivated by AI catastrophic risk mitigation is probably <$200M/yr today, whereas philanthropy gave ~$10B to climate risk mitigation in 2022 (ClimateWorks, p. 5), though it’s hard to say how comparable those two figures are.
If you are a philanthropist interested in supporting projects of the sort covered by this RFP, we would greatly appreciate you getting in touch. This will help us understand the broader funding landscape and potentially connect promising applicants with additional funders. We are especially keen to hear from funders who are looking to give away $500K/yr or more. Please note that reaching out does not commit you to anything; it simply opens up a line of communication to explore potential funding opportunities and helps us understand the types of projects you may be interested in supporting.
3. Our ability to make awards may be limited by law or policy. Our grant awards are conditional on completing legal due diligence prior to payment. We are open to making restricted grants to projects housed within for-profit companies, though we expect this to be somewhat rare, for logistical reasons. Other philanthropists to whom we might recommend specific opportunities may have different limits or due diligence practices.
4. See the following non-exhaustive list of suggested readings: the Most Important Century series; Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover; Literature Review of Transformative Artificial Intelligence Timelines; and The Alignment Problem from a Deep Learning Perspective.
5. Please note that we use the term “organization” in a broad sense, which can refer to various entities such as departments or labs within a larger institution (e.g., a university), or projects that are being fiscally sponsored by another organization. Also note that we can only provide general support grants to 501(c)(3)s, their international equivalents, and 501(c)(4)s.
6. This list includes grants related to technical AI safety, as well as AI governance and policy.
7. Examples of other funders with an interest in some of the topics eligible for funding through this RFP include SFF, LTFF, Founders Pledge, AI Risk Mitigation Fund, and Longview Philanthropy.