Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than over our full prior history. We’ve more than doubled the size of our team (to ~110), nearly doubled our annual giving (to >$750M), and added five new program areas.
As our track record and volume of giving have grown, we are seeing more of our impact in the world. Across our focus areas, our funding played a (sometimes modest) role in some of 2023’s most important developments:
- We were among the supporters of the clinical trials that led to the World Health Organization (WHO) officially recommending the R21 malaria vaccine. This is the second malaria vaccine recommended by WHO, which expects it to enable “sufficient vaccine supply to benefit all children living in areas where malaria is a public health risk.” Although the late-stage clinical trial funding was Open Philanthropy’s first involvement with R21 research, that isn’t the case for our new global health R&D program officer, Katharine Collins, who invented R21 as a grad student.
- Our early commitment to AI safety has contributed to increased awareness of the associated risks and to early steps to reduce them. The Center for AI Safety, one of our AI grantees, made headlines across the globe with its statement calling for AI extinction risk to be a “global priority alongside other societal-scale risks,” signed by many of the world’s leading AI researchers and experts. Other grantees contributed to many of the year’s other big AI policy events, including the UK’s AI Safety Summit, the US executive order on AI, and the first International Dialogue on AI Safety, which brought together scientists from the US and China to lay the foundations for future cooperation on AI risk (à la the Pugwash Conferences in support of nuclear disarmament).
- The US Supreme Court upheld California’s Proposition 12, the nation’s strongest farm animal welfare law. We were major supporters of the original initiative and helped fund its successful legal defense.
- Our grantees in the YIMBY (“yes in my backyard”) movement — which works to increase the supply of housing in order to lower prices and rents — helped drive major middle housing reforms in Washington state and California’s legislation streamlining the production of affordable and mixed-income housing. We’ve been the largest national funder of the YIMBY movement since 2015.
We’ve also encountered some notable challenges over the last couple of years. Our available assets fell by half and then recovered half their losses. The FTX Future Fund, a large funder in several of our focus areas, including pandemic prevention and AI risks, collapsed suddenly and left a sizable funding gap in those areas. And Holden Karnofsky — my friend, co-founder, and our former CEO — stepped down to work full-time on AI safety.
Throughout these changes, we’ve remained devoted to our mission of helping others as much as we can with the resources available to us. But it’s a good time to step back and reflect.
The rest of this post covers:
- Brief updates on grantmaking from each of our 12 programs. [More]
- Our leadership changes over the past year. [More]
- Our chaotic macro environment over the last couple of years. [More]
- How that led us to revise our priorities, and specifically to expand our work to reduce global catastrophic risks. [More]
- Other lessons we learned over the past year. [More]
- Our plans for the rest of 2024. [More]
Because we have more to share this year, this post is longer and more detailed than my updates from the past two years. I’m curious to hear what you think of it — if you have feedback, you can find me on Twitter/X at @albrgr or email us at info@openphilanthropy.org.
Program updates
The diverse areas we work in may not appear to have a unifying theme, but we chose them all through the same process. In order to achieve our mission, we aim to maximize the cost-effectiveness of our giving. And to do that, we seek out causes that are some combination of:
- Important — They have a substantial impact across a large number of individuals.
- Neglected — They get little attention from others, especially other philanthropists, relative to their scope.
- Tractable — They offer clear opportunities for us to support progress.
Sometimes, we are one funder among many in an area that is well-understood and well-funded, but big enough that we still find many strong opportunities. Other times, we enter an area that receives virtually no funding, which can be a sign of great promise but also great uncertainty. We’re open to both types of grantmaking and currently pursue a mix of hits-based and evidence-based opportunities.
This approach guided the hundreds of grants we made last year across our 12 active focus areas, three of which we added last year.
Here are some updates on grantmaking from our more established programs:
Global Health and Development: Our largest grants in this program went toward charities recommended by GiveWell, targeting issues like deworming, malaria, and vitamin A supplementation (those last two grants total $129 million, and are the largest in Open Phil’s history). We also funded anti-poverty research in Ethiopia.
Scientific Research: Most of our science funding this year went toward health-related research, from testing vaccines to studying parasites. The projects we funded typically (though not always) targeted health issues that are especially prevalent in the developing world, since that kind of work is highly impactful and chronically underfunded.
Farm Animal Welfare: We supported projects focused on improving conditions for chickens and fish, and on building advocacy groups around the world. This was also a busy year for grants supporting new welfare-improving technology in areas like fish slaughter and reducing the need for chick culling.
Land Use Reform: Our grantmaking this year was largely focused on organizations in the YIMBY movement, which work to increase housing supply and have begun to win major victories across the US.
Global Aid Policy: This program aims to fund effective strategies for increasing aid levels and boosting the impact of current aid spending. Last year, the team built out work in DC to support evidence-informed aid and expanded its grantmaking in Japan and Korea, partly by funding and attending parliamentary delegations to educate policymakers about impactful health programs. It also scoped and initiated grantmaking in new areas, including support for work in Scandinavia, as well as work to grow support for multilateral global health organizations in emerging donor countries.
Effective Altruism (Global Health and Wellbeing): This program has supported a number of groups that raise leveraged funding for effective charities (like GiveWell’s recommendations), and organizations like Charity Entrepreneurship, which helps people start charities focused on neglected issues in global health and animal welfare.
Potential Risks from Advanced AI: We continued to fund an ecosystem of technical infrastructure, institutions, and research projects working to address the technical and governance challenges posed by rapidly improving AI capabilities. To better understand the onslaught of new AI models and other tools released in the last year, we issued a call for two kinds of proposals — one for new research on benchmarking large language model agents, and the other for studying and forecasting the impacts of large language model systems. (Both are still open to new applications!)
Biosecurity and Pandemic Preparedness: We made grants across many different institutions pursuing research on potentially catastrophic biosecurity risks — including universities, think tanks, and the World Health Organization. We also launched a request for information on the potential use of far-UVC light to reduce pathogen transmission.
Global Catastrophic Risks Capacity Building: The team led our work to source applications from grantees affected by the FTX Future Fund’s collapse, which led us to strong opportunities across our GCR portfolio. Their other work covered fundraising, education, career coaching, media production, career transitions, research mentorship, and university groups around the world. We have a number of opportunities for funding in this area — if you’ve thought about working in this space, consider applying!
Beyond our existing focus areas, Open Philanthropy continues to actively seek out the new areas where an additional dollar can go the furthest. We launched these focus areas in 2023:
Global Health R&D: Diseases that primarily affect the world’s poorest people tend to get much less research and development funding than they should, given their considerable burden. This program seeks to fund R&D for drugs, diagnostics, and other products to reduce that burden — as well as efforts to make those products more accessible. Early grants in this program have supported (among other projects) development of monoclonal antibodies as malaria interventions, clinical trials for tuberculosis treatment, and market shaping for sickle cell diagnostics and treatment. Relative to our original Scientific Research team, this team tends to be focused on later-stage development, though the teams work together closely and report to the same person.
Global Public Health Policy: Non-communicable diseases account for a large and growing share of the world’s health burden. Public health policy can alleviate this burden by addressing environmental and behavioral risk factors. This program seeks to help governments implement more effective public health policy, and is initially focused on expanding its work in four areas where we’ve already made grants: South Asian air quality, lead exposure, alcohol policy, and pesticide suicide prevention.
Innovation Policy: Economic growth and scientific innovation have lifted billions of people out of poverty and improved health outcomes around the world. This program aims to accelerate growth and innovation through a number of different strategies — without unduly increasing risks from emerging technology. Early grants in the program have supported high-skilled immigration, replication studies, and research on science and innovation.
Finally, I’ll highlight a set of grants we didn’t make — because we funded other people to make them instead.
Last year, with co-funding from Lucinda Southworth, we completed a $150 million Regranting Challenge, supporting exceptional grantmakers outside of Open Philanthropy to tackle projects and ideas outside of our program areas. We can’t — and shouldn’t need to — launch a new program ourselves for every problem worth working on, and I’m thrilled that the Regranting Challenge helped us find strong opportunities to address issues like malnutrition, education, and climate change. I’m particularly excited about a major project one of the recipients funded: a large clinical trial of tuberculosis vaccine MTBVAC, a leading candidate for efficacy in adults and adolescents.
Recent leadership changes
Open Philanthropy began life as GiveWell Labs, an initiative within GiveWell that was designed to think more expansively about finding outstanding giving opportunities. I was one of the co-founders of GiveWell Labs, along with Holden Karnofsky, who became the Executive Director of Open Philanthropy when we spun off as an independent organization in 2017.
I led our initial work picking causes, especially in US policy, and then oversaw our work on Global Health and Wellbeing, focusing on areas like animal welfare, scientific research, and land use policy. In 2021, Holden asked me to join him as co-CEO, and last year, when he decided to focus on AI risk full-time, the board appointed me as sole CEO.
As I stepped into my new job in July 2023, I promoted Emily Oehlsen to Managing Director and she took over my previous role overseeing Global Health and Wellbeing. It has been amazing to watch Emily grow as a leader over her time at Open Philanthropy, and I am excited to see what she does with her new responsibilities.
As part of a broader wave of hiring, we brought on several people to join the leadership team. Eric Parrie joined us as Managing Director of Operations and Jasmine Dhaliwal became our new Chief of Staff. Howie Lempel, one of our earliest employees, returned to Open Philanthropy after 7 years away, and is now serving as Senior Advisor and Interim Managing Director for our Global Catastrophic Risks work.
As CEO, I work more closely with Cari Tuna and Dustin Moskovitz, our primary funders and founding board members, than I had in the past. Dustin and especially Cari were very involved at the founding of Open Philanthropy — our grant approval process in the very early days was an email to Cari. But their level of day-to-day involvement has ebbed and flowed over time. Cari, in particular, has recently had more appetite to engage, which I’m excited about because I find her to be a wise and thoughtful board president and compelling champion for Open Philanthropy and our work. Dustin has also been thinking more about philanthropy and moral uncertainty recently, as reflected in this essay he posted last month.
It’s worth noting that their higher level of engagement means that some decisions that would have been made autonomously by our staff in the recent past (but not in the early days of the organization) will now reflect input from Cari and Dustin. Fundamentally, it has always been the case that Open Philanthropy recommends grants; we’re not a foundation and do not ultimately control the distribution of Cari and Dustin’s personal resources, though of course they are deeply attentive to our advice and we all expect that to continue to be the case. All things considered, I think Cari and Dustin have both managed to be involved while also offering an appropriate — and very welcome — level of deference to staff, and I expect that to continue.
A rollercoaster couple of years
The macro environment in which we operate has been particularly volatile over the last couple of years, with a decline in our available assets, the collapse of the FTX Future Fund, and the surge of interest in AI.
Over the course of 2022, our available assets fell by roughly half, though they have since recovered about half of the total losses (i.e., they are now down about 25% from the end of 2021). Much — but by no means all — of this volatility was driven by changes in the price of Meta stock (which Dustin was heavily exposed to because he co-founded the company). Dustin has continued to gradually diversify out of Meta over this period, so he and we are less (but still somewhat) exposed to those specific price swings going forward.
The change in available assets, along with other factors, led us to raise the cost-effectiveness bar for our grants to Global Health and Wellbeing by roughly a factor of two. That means that for every dollar we spend, we now aim to create as much value as giving $2,000 to someone earning $50,000/year (the anchor for our logarithmic utility function). That roughly equates to giving someone an extra year of healthy life for every ~$50 we spend.
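As a rough illustration of that arithmetic, here is a minimal sketch under stated assumptions. The bar multiplier and anchor income come from the text above; the conversion between income doublings and a healthy life-year is a hypothetical moral weight chosen only to show how the numbers can line up, not a published Open Philanthropy parameter.

```python
import math

# Bar after the ~2x increase: every $1 we spend should create as much
# value as giving $2,000 to someone earning $50,000/year.
baseline_income = 50_000   # $/year, the anchor income from the text
transfer = 2_000           # $ of cash given to the anchor person

# Under logarithmic utility, the value of the transfer is the change in
# log-income, measured here in "income doublings" (log base 2).
# $1 of our spending is pegged to the full value of that transfer.
doublings_per_dollar = math.log2((baseline_income + transfer) / baseline_income)

# Value created by $50 of spending at this bar:
doublings_per_50 = 50 * doublings_per_dollar  # ≈ 2.8 income doublings

print(round(doublings_per_dollar, 4), round(doublings_per_50, 2))
```

If a year of healthy life is weighted at roughly 2.5–3 income doublings (again, an assumed figure for illustration), then ~$50 of spending at the new bar does indeed work out to about one extra year of healthy life.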
In late 2022, the cryptocurrency exchange FTX also rapidly collapsed. Before its collapse, the founder and CEO of FTX, Sam Bankman-Fried, had quickly become a major funder of work on global catastrophic risks via his Future Fund, supporting some of the same organizations we had been funding.
To be clear, Open Philanthropy never received any donations from FTX, the Future Fund, or any of the individuals who worked there. And shortly before the collapse, Holden wrote a post expressing concern about the reckless, maximizing strand of effective altruist (EA) thinking that Bankman-Fried has come to illustrate. From that post:
“I think it’s a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That’s a deep challenge for a community to have – a constant current that we need to swim against.”
When the fraud at FTX was exposed, we felt angry and shocked. So many people and organizations were victimized by the company’s callous risk-taking and criminal behavior. Among the many who suffered due to Bankman-Fried’s fraud were grantees of the Future Fund, who had made decisions based on funding promises that never materialized or were clawed back. We stepped in to help some of those organizations. But the sudden evaporation of such a large funder also influenced the math behind our own planning.
Once the Future Fund was no longer there to fund projects to reduce global catastrophic risks, stronger marginal funding opportunities became available to us. Early in 2023, we lifted a pause on new funding commitments to our Global Catastrophic Risks portfolio, and since then we’ve been ramping up our work in those areas. In 2023, this expansion was more about staffing than funding: our grantmaking to confront catastrophic risks hasn’t increased much yet, but we launched a big hiring round to increase our grantmaking capacity for future years.
Finally, over the last two years, generative AI models like ChatGPT have captured public attention and risen to remarkable prominence in policy debates. While we were surprised by the degree of public interest, we weren’t caught off guard by the underlying developments: since 2015, we’ve supported a new generation of organizations, researchers, and policy experts to address the potential risks associated with AI. As a result, many of our grantees have been working on this issue for years, and they were well-prepared to play important roles in the policy debate about AI as it came to the fore over the last year.
Without the efforts we’ve made to develop the field of AI risk, I think that fewer people with AI experience would have been positioned to help, and policymakers would have been slower to act. I’m glad that we were paying attention to this early on, when it was almost entirely neglected by other grantmakers. AI now seems more clearly poised to have a vast societal impact over the next few decades, and our early start has put us in a strong position to provide further support going forward.
But the sudden uptick in policymaker and public discussion of potential existential risks from AI understandably led to media curiosity (and skepticism) about our influence. Some people suggested that we had an undue influence over such an important debate.
We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage. In the meantime, we are supporting a diverse range of viewpoints: while we are focused on addressing global catastrophic risks, our grantees (and our staff) disagree profoundly amongst themselves about the likelihood of such risks, the forms they could take, and the best ways to address them.[1]
Revising our priorities

The big changes buffeting Open Philanthropy from different directions led us to revisit our funding priorities. At the end of 2022, we started an internal process to make a recommendation to Cari and Dustin about how they should allocate funding across the two main areas where we work: Global Health and Wellbeing (GHW) and Global Catastrophic Risks (GCR).

The process concluded that, on balance, the factors above strengthen the case for allocating marginal funding to mitigate global catastrophic risks. But while these factors generally make us think it would be more impactful to allocate more of our resources to reducing global catastrophic risks, we also want to avoid the rashness and profligacy that characterized some of the “FTX era” of GCR funding. Accordingly, after the FTX collapse, we raised our cost-effectiveness bar for GCR spending.[2]

As a result of our internal process, we decided to keep that new higher bar, while also aiming to roughly double our GCR spending over the next few years — if we can find sufficiently cost-effective opportunities. (We don’t want to just increase the budget if it means accepting a lower level of cost-effectiveness, at least right now.) We believe that may be possible because of the rapid growth in the GCR opportunity set: we estimate that, due to the growth in the community of people working to reduce global catastrophic risks, the set of fundable opportunities above our bar has been increasing by around 50% per year over the past few years (though preliminary evidence suggests that growth may be slowing more recently). If that growth slows dramatically or stops, we would eventually face a choice between lowering our bar on the GCR side or re-allocating funding to another portfolio area, and we have not yet decided which we would choose.

In our GHW portfolio, we decided — and announced last year — that we would scale back our donations to GiveWell’s recommendations to $100M/year, the level they were at in 2020. At the same time, we are planning to maintain or increase the amount we give through our internal GHW programs in areas like global health R&D and animal welfare. The contrast between these two decisions arises from a couple of factors we’ve described previously.

On net, these changes leave our overall current spending plans across GHW and GCR within the range we’ve communicated publicly over the years, but after a lot of volatility. Much of our planning is focused on trying to make sure we keep our options open over the near term, and we expect to revisit these allocation decisions before the end of 2026.

Some lessons learned

Here are a few concrete areas where we’ve learned lessons over the past year.

Moving faster on lead exposure

When we launched our Global Public Health Policy (GPHP) program last year, we named lead exposure as one of four focus areas for the program, and we’re now planning to do significantly more work to combat lead exposure in low- and middle-income countries (LMICs) in the coming years. Looking back, I think we should have done more sooner.

Lead exposure is an enormous problem: one in two children living in LMICs is estimated to have a blood lead level exceeding 5 μg/dL, the point at which the World Health Organization recommends taking action to reduce exposure. We’d thought of lead exposure as a potential cause area since 2019 (when GiveWell made its first grant in the area), and we commissioned a report on the topic from Rethink Priorities that was published in 2021. But it took another two and a half years before we launched GPHP and decided to work on lead exposure in earnest. In the interim, we made only three lead-related grants, and we missed several chances to move faster.

In isolation, each of these decisions seemed reasonable. However, they added up to a painfully long delay between seeing the opportunity and taking action. In each case, we took a risk-averse approach, choosing to wait until we’d gathered more information. That’s an understandable mistake, but we think it’s a mistake nonetheless. In retrospect, especially at the later stages, we should have acted faster, prioritizing execution on the most cost-effective opportunities in front of us even if that meant risking some retrenchment later.

There’s a general lesson for us here about not letting the perfect be the enemy of the good, and about taking more 80/20 steps to fund the most promising opportunities while we continue our investigations as needed. Especially as we get bigger and develop more processes and layers of management, it’s too easy for us to fall prey to mistakes of omission like this. We need to be careful not to let our own need for confidence prevent us from moving quickly and taking smart risks where warranted.

Wytham Abbey

Another place where I have changed my mind over time is the grant we made for the purchase of Wytham Abbey, an event space in Oxford. We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks. The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, cutting down on the financial and logistical costs of hosting large events at private facilities. It was pitched to us at a time when FTX was making huge commitments to the GCR community, which made resources appear more abundant and lowered our own bar. Since its purchase, the space has gotten meaningful use for community events and gatherings.

But with the collapse of FTX, our bar for this kind of work rose, and the original grant would no longer have risen to the level where we would want to provide funding. Because this was a large asset, we agreed with Effective Ventures ahead of time that we would ask them to sell the Abbey if the event space, all things considered, turned out not to be sufficiently cost-effective. We recently made that request; funds from the sale will be used for EV’s operating costs and other valuable projects they run.

While this grant retrospectively came in below our new bar, I don’t think that alone is a big problem: if you didn’t make some grants that look less attractive after expected funding drops by half, you weren’t spending aggressively enough before. But I still think I personally made a mistake in not objecting to this grant back when the initial decision was made and I was co-CEO. My assessment then was that this wasn’t a major risk to Open Philanthropy institutionally, so it wasn’t my place to try to stop it. I missed how something that could be parodied as an “effective altruist castle” would become a symbol of EA hypocrisy and self-servingness, causing reputational harm to many people and organizations who had nothing to do with the decision or the building.

This is a tough balance to strike, because I think it’s easy for organizations to be paralyzed by concerns over reputational risk, rendering them unable to make nearly any decisions. And I think a core part of our hits-based giving philosophy is being able to make major bets that can fail outright, even in embarrassing ways. I want to maintain that openness to risk when the upside justifies it. But this example has made me want to raise our bar for things that could end up looking profligate or irresponsible to the detriment of broader communities we’re associated with.

Hiring more aggressively on the GCR side

We have a careful and intense hiring process that has served us well in many respects. In retrospect, though, I believe that we’ve hired too slowly for our work on the Global Catastrophic Risks side of the organization.

An obvious and important challenge to acknowledge here is that because global catastrophic risks have historically been so neglected by other funders, there isn’t as large a professional community to draw from as in some other areas where we work. Nonetheless, I think we can and should be doing more to hire aggressively at all levels. In terms of absolute staff numbers on the GCR side, we’ve already been investing more in hiring over the past year, and I think we’ll make more strong hires this year, hopefully “catching up” to our opportunity set from a basic capacity perspective. But I think the default path may be for those hires to skew too far toward relatively junior staff, so I’m planning to work with GCR leadership and our recruiting team to make sure that we’re both investing in more senior hiring and developing our existing staff for more senior responsibilities.

Looking forward to the rest of 2024

We have some big plans for 2024, on both the Global Health and Wellbeing side and the Global Catastrophic Risks side, along with a couple of additional priority areas I foresee for myself.

Finally, across the organization, we’re hiring. If you want to join our team, check out the open positions on our careers page. (At the time of publishing, we’re looking for a people operations leader and a finance operations coordinator.) If you don’t see something you want to apply for, you can fill out our general application, and we’ll reach out if we post a position we think might be a good fit. On that note, we’re always looking for referrals; if you refer someone and we hire them, we’ll pay you $5,000.

Appendix: Publications and Media

Open Philanthropy staff shared and discussed their work across a variety of outlets this year, spanning both our Global Health and Wellbeing and Global Catastrophic Risks portfolios. Open Philanthropy was also profiled as an organization by Inside Philanthropy, in a piece focused on our biosecurity and farm animal welfare programs. For more updates, see the “In the News” section of our website.
Footnotes
1 Some examples of diverse views among our grantees: a recent Politico piece discussed a paper from researchers who found that AI language models could help people with little training develop new pathogens, and a paper from RAND which reported that current language models didn’t increase the risk …
2 Note that this post refers to “longtermist” funding. We changed the “Longtermism” portfolio name to “Global Catastrophic Risks” last year. The new name better reflects our view that AI risk and biorisk aren’t only “long-term” issues; we think that both could threaten the lives of many people in the near future.