This program aims to provide support – in the form of funding for graduate study, unpaid internships, independent study, career transition and exploration periods, and other activities relevant to building career capital – for individuals at any career stage who want to pursue careers that could help to reduce global catastrophic risks or otherwise improve the long-term future.1

Apply here.

We’re especially interested in supporting individuals who want to pursue careers that are in some way related to mitigating potential risks posed by future advances in artificial intelligence or global catastrophic biological risks.

Applications are open until further notice and will be assessed on a rolling basis.

Generally speaking, we aim to review proposals within 6 weeks of receiving them, although this may not prove possible for all applications. Candidates who need a faster decision can indicate this in their application form, and we may be able to expedite the decision process in such cases.

Until recently, this program was known as the “early-career funding program”, but we’ve decided to broaden its scope to explicitly include later-career individuals.

1. Scope

We’re open to receiving applications from individuals who are already pursuing careers related to reducing global catastrophic risk (or otherwise improving the long-term future), looking to transition into such careers from other lines of work, or only just starting their careers. We think there are many career tracks which are potentially promising from this perspective (including many of the ones in this list from 80,000 Hours), and there is therefore a correspondingly wide range of proposals we would consider funding.

We’re open to supporting a variety of career development and transition activities, including (but not necessarily limited to) graduate study, unpaid internships, independent study, career transition and exploration periods, postdocs, obtaining professional certifications, online courses, and other types of one-off career-capital-building activities.

To name a few concrete examples of the kinds of applicants we’re open to funding, in no particular order:

  • A final-year undergraduate student who wants to pursue a master’s or a PhD program in machine learning in order to contribute to technical research that helps mitigate risks from advanced artificial intelligence.
  • An individual who wants to do an unpaid internship at a think tank focused on biosecurity, with the aim of pursuing a career dedicated to reducing global catastrophic biological risk.
  • A former senior ML engineer at an AI company who wants to spend six months on independent study and career exploration in order to gain context on and investigate career options in AI risk mitigation.
  • An individual who wants to attend law school or obtain an MPP, with the aim of working in government on policy issues relevant to improving the long-term future.
  • A recent physics PhD who wants to spend six months going through a self-guided ML curriculum and working on projects in interpretability, in order to transition to contributing to technical research that helps mitigate risks from advanced AI systems.
  • A software engineer who wants to spend the next three months doing independent study in order to gain relevant certifications for a career in information security, with the longer-term goal of working for an organization focused on reducing global catastrophic risk.
  • An experienced management consultant who wants to spend three months exploring different ways to apply their skill set to reducing global catastrophic risk and applying to relevant jobs, with an eye to transitioning to a related career.
  • A PhD graduate in an unrelated sub-area of computational biology who wants to spend four months getting up to speed on DNA synthesis screening in order to transition to working on this topic.
  • A professor in machine learning, theoretical computer science, or another technical field who wants funding to take a one-year sabbatical to explore ways to contribute to technical AI safety or AI governance.
  • An individual who wants to attend journalism school, with the aim of covering topics relevant to the long-term future (potentially among other important topics).

2. Funding criteria

  • This program aims to provide support for individuals who want to pursue careers that could help to reduce global catastrophic risk or otherwise improve the long-term future. We are particularly interested in funding people who have deeply engaged with questions about global catastrophic risk and/or the long-term future, and who have skills and abilities that could allow them to make substantial contributions in the relevant areas.
  • Candidates should describe how the activity for which they are seeking funding will help them enter or transition into a career path that plausibly allows them to make these contributions. We appreciate that candidates’ plans may be uncertain or even unlikely to work out, but we are looking for evidence that candidates have thought in a critical and reasonably detailed manner about those plans — not just about what career path(s) might open up for them, but also about how entering said career path(s) could allow them to reduce global catastrophic risk or otherwise positively impact the long-term future.
  • We are looking to fund applications where our funding would make a difference — i.e. where the candidate is otherwise unable to find sufficient funding, or the funding they were able to secure imposes significant restrictions or requirements on them (for example, in the case of graduate study, restrictions on their research focus or teaching requirements). We may therefore turn down promising applicants who were able to secure equivalent support from other sources.
  • If you receive a grant from us through this program, we will ask that you notify us about any other income or funding (e.g. paid work, fellowships, grant income, etc.) that you receive during the grant period. If your funding relates to a degree program, we will also ask you to notify us of any changes to your enrollment status. Depending on the nature of any additional income/enrollment status changes, we may alter the grant amount or timeframe in accordance with this policy.
  • The program is open to applicants in any country.2

3. Other information

  • There is neither a maximum nor a minimum number of applications we intend to fund; rather, we intend to fund any application that we judge to be above our general funding bar for this program.
  • In some cases, we may ask outside advisors to help us review and evaluate applications. By submitting your application, you agree that we may share your application with our outside advisors for evaluation purposes.
  • We encourage individuals with diverse backgrounds and experiences to apply, especially self-identified women and people of color.
  • We plan to respond to all applications.
  • This program now subsumes what was previously called the Open Philanthropy Biosecurity Scholarship; for the time being, candidates who would previously have applied to that program should apply to this program instead. (We may decide to split out the Biosecurity Scholarship again as a separate program at a later point, but for practical purposes, current applicants can ignore this.) 
  • We may make changes to this program from time to time. Any such changes will be reflected on this page.

4. Past recipients

Robi Rahman

Robi was awarded a Career Development and Transition Funding (CDTF) grant to support his transition from chemical engineering to AI safety research. After three years of earning to give, Robi recognized the potential impact of AI on humanity’s future and resolved to redirect his career.

While completing his master’s degree in data science at Harvard University, Robi received a CDTF grant to undertake four months of part-time AI self-study and internships. During this period, he gained hands-on experience in machine learning by developing an endangered-wildlife identification algorithm for WildTrack, a conservation nonprofit, and then took on a part-time role at the Stanford Institute for Human-Centered Artificial Intelligence, researching and writing for the 2023 AI Index Report.

The grant allowed Robi to focus on technical skill development without financial strain, bridging the gap between his existing skills and those needed to work in AI safety.

Robi now works at Epoch AI as a data scientist, where he researches trends and questions about the trajectory and governance of AI. You can read his most recent work here.

Cecil Abungu

Cecil was awarded two CDTF grants that kickstarted his career in AI safety–related governance.

The first grant, awarded in early 2020, helped Cecil transition from his focus on constitutional law to AI governance research. This grant gave Cecil the chance to fully immerse himself in AI governance over the course of a year, with two key research periods: six months at the Institute for Law and AI (formerly the Legal Priorities Project) and another six months with the Centre for the Study of Existential Risk’s AI:FAR team. Cecil delved deep into the field during this period, reading extensively, watching relevant lectures, and participating in research seminars as both an attendee and presenter. This exploration laid a strong foundation for his future work and connected him with leading experts.

In late 2021, Cecil received a second CDTF grant to support his PhD studies at the University of Cambridge, where he focuses on law and AI. This grant has been crucial in enabling Cecil to balance his doctoral research with his ongoing work at the Initiative for Longtermism in Africa (ILINA), an Open Philanthropy-funded program dedicated to building capacity for AI governance research and policy engagement in Africa. “Without the grant, I would probably have to do other kinds of work to finance my PhD”, Cecil shared.

Looking forward, Cecil hopes that his research will help policymakers take AI risks more seriously. “I want to positively influence the thinking of other AI safety researchers and help more talented Africans begin researching AI safety–related governance in a consequential way.”

Jack Parker

Jack was awarded a CDTF grant to facilitate his shift from education to AI safety. After spending two years as a middle and high school math teacher, Jack was inspired by readings from 80,000 Hours to rethink his career path and consider how to maximize his impact.

The CDTF grant provided Jack with a living stipend while he pursued a master’s degree in machine learning at Duke University. This support allowed Jack to dedicate his full attention to his studies without needing to take on a part-time job.

At Duke, where he was awarded a full tuition scholarship, Jack honed essential skills in programming, machine learning fundamentals, and computer security. Early in his second year, Jack became particularly interested in developing security measures to protect AI models from adversarial attacks and unauthorized access.

Upon graduation, Jack joined the start-up HiddenLayer as an ML Threat Operations Specialist. In this role, he concentrates on identifying and mitigating threats to AI systems, such as spotting suspicious inputs that could compromise detection algorithms.

Reflecting on his career trajectory, Jack highlighted the transformative impact of his CDTF grant: “I’m pretty confident that I would not have been able to make the transition nearly as smoothly, if at all, had it not been for this grant. It was a game changer.”

5. Application process

Applications are open until further notice. You can apply using this form.

Required application materials:

  • Proposal, no longer than 500 words
  • Personal statement, no longer than 500 words
  • Approximate budget, no longer than half a page
  • CV or resume, no longer than 2 pages
  • Academic transcript (undergrad and graduate, if applicable)
  • Answers to a few other questions (see application form)

We may contact you to request additional information. In some cases, the assessment may also involve a brief interview via video teleconference.

We are aware that if you are applying for graduate school or internships, you will typically not know at this point which specific programs will admit you. If you are applying to several different but related programs and are doing so with a similar career trajectory in mind (e.g. different law schools and MPP programs to pursue a career in public policy), please specify this in your proposal and budget and submit a single application. If you are applying to several rather different programs with clearly distinct career trajectories in mind (e.g. journalism schools and history PhD programs), please submit separate applications.

If you have questions, you can reach us via careerdevelopmentfunding@openphilanthropy.org.

Footnotes

1. By “improving the long-term future”, we specifically mean actions that could positively affect the very long-run trajectory of civilization over millions of years or even longer timeframes, as discussed for example by Beckstead (2013) or Greaves & MacAskill (2019). One way to affect the long-term future is to mitigate the risk of human extinction (see “The Precipice” for a recent discussion), but there may be other ways to improve the long-run trajectory of civilization (see this post by 80,000 Hours for some potential ideas).

2. However, we may decline to make an award if we are not able to comply with local laws.