2024 marked 10 years since we launched Open Philanthropy. We spent our first decade learning (about grantmaking, cause selection, and the history of philanthropy), and growing our team and expertise to be able to effectively deploy billions of dollars from Good Ventures, our main funder. Our early grants — and some grantees we’ve helped get started — are now old enough that we can see material signs of our impact in the world.
The start of our second decade also marked a major change in our direction. With Good Ventures approaching the level of spending consistent with its founders’ ambition to spend down in their lifetimes, we finally began to execute at scale on our long-held ambition to support other funders, and found a surprising degree of early success. I expect that our ambition to serve additional partners will guide much of our second decade.
A few highlights from the year:
- We launched the Lead Exposure Action Fund (LEAF), a >$100 million collaborative fund to reduce lead exposure globally. LEAF marked our first major foray into partnering with other funders beyond Good Ventures, and we’re planning to do a lot more in this vein going forward — more below.
- Our longtime grantee David Baker won the Nobel Prize in Chemistry for his groundbreaking work using AI for protein design. We’re proud to have supported both the basic methods development and the potentially high-impact humanitarian applications of his work for ailments like syphilis, hepatitis C, snakebite, and malaria.
- Our grantee Open New York played an important role in the recent passage of New York City’s largest zoning overhaul in over 60 years. The city planning department expects the package to create 80,000 new homes over 15 years, making this the first set of major YIMBY reforms to pass in New York City.
- Research mentorship programs that we fund continue to produce some of the top technical talent in AI safety and security. Graduates of programs like MATS, the Astra Fellowship, LASR Labs, and ERA-AI have contributed to key safety areas like interpretability, evaluations, and loss of control. For instance, MATS now trains more than 100 aspiring AI safety researchers annually, some of whom rapidly contribute to the field: a recent graduate received “Best Paper” at one of the leading AI conferences.
- Our grantee, the Mirror Biology Dialogues Fund, brought attention to the unprecedented risks of creating mirror bacteria, working alongside a group of 30+ esteemed scientists (including two Nobel laureates). Their work was published in Science along with a 300-page technical report detailing the risks.
- We directed $87 million to GiveWell-recommended charities. We continue to think that these charities are among the highest-value uses of philanthropic money, and we are proud to support their work on malaria, vitamin A deficiency, childhood vaccination, and more.
For more examples of interesting work we supported, check out this blog post. The rest of this update:
- Offers brief updates on grantmaking from each of our programs. [More]
- Reflects on a few themes from the year, including:
  - Appointing a new leadership team. [More]
  - Building new partnerships. [More]
  - Tracking continued rapid progress in AI. [More]
- Looks forward to the rest of 2025. [More]
As always, I welcome feedback. You can find me on Twitter/X or Bluesky, or email us at info@openphilanthropy.org.
1. Program updates
The diverse areas we work in may not appear to have a unifying theme, but we chose them all through the same process. To achieve our mission of helping others as much as possible with the resources available to us, we seek out causes that are:
- Important — They have a substantial impact across a large number of individuals.
- Neglected — They get little attention from others, especially other philanthropists, relative to their importance.
- Tractable — They offer clear opportunities for us to support progress.
Sometimes, we are one funder among many in an area that is well-understood and well-funded, but big enough that we still find many strong opportunities. Other times, we enter an area that receives virtually no funding, which can be a sign of great promise but also great uncertainty. We’re open to both types of grantmaking and currently pursue a mix of funding strategies.
This approach guided the hundreds of grants we made last year across our focus areas.
Here are a few updates from our more established programs:
Global Health and Development: Our largest grant of the year supported expanded access to clean drinking water. GiveWell’s review of a recent meta-analysis found that water chlorination reduces all-cause mortality in children under five by 12%, making clean water interventions highly cost-effective in some locations. Other significant grants in this program, also recommended by GiveWell, targeted issues like malaria, malnutrition, and deworming.
Science and Global Health R&D:[1] We made significant grants this year to address issues that often go untreated in the developing world, from snakebite to sickle cell disease (SCD). Recent IHME research suggests that sickle cell disease could be responsible for more than 350,000 deaths annually, 11x higher than previously estimated. With our support, the Clinton Health Access Initiative is coordinating market shaping initiatives in Nigeria, India, and Ghana, with the goal of lowering prices and expanding access to sickle cell diagnostics and treatments. These represented new areas of interest for us, while our work on treating communicable diseases continued. Our science team helped prevent the shutdown of VEuPath, a critical bioinformatics resource that accelerates vaccine and drug development, and funded an ambitious collaboration for the preclinical development of a new tuberculosis vaccine.
Farm Animal Welfare: Corporate campaigns continue to occupy a large part of our work in this space. Our grantee the Open Wing Alliance (OWA) secured 141 new cage-free commitments that will benefit egg-laying hens, as well as 13 commitments to improve the lives of broiler chickens. We’re particularly excited about initiatives by OWA member organizations in the U.S., Southeast Asia, and Latin America. We also supported projects to improve the welfare of wild-caught fish, as well as work to promote the alternative protein space.
Innovation Policy: With the goal of safely accelerating growth and innovation, the team supported high-skilled immigration and training for DARPA-like research programs, and funded experts to produce living literature reviews. Some living literature reviews tie into our other focus areas: for instance, Tom Gebhart’s Some Are Useful covers how ideas from machine learning and artificial intelligence are being used across the sciences (one focus of our 2024 RFP on the real-world impacts of LLMs).
Other Global Health and Wellbeing (GHW) programs: While most of the Global Public Health Policy team’s efforts this year were dedicated to launching the Lead Exposure Action Fund, the team also continued to support efforts to restrict access to deadly pesticides; recent research continues to suggest that these restrictions are effective in preventing suicide. The Global Aid Policy team built on its 2023 success in Japan and Korea by supporting local groups in their aid advocacy efforts. Our Effective Altruism (GHW) program supported organizations that raise leveraged funding for effective charities (like GiveWell’s recommendations), including ones in Spain and Norway, as well as groups aiming to provide high-impact career advice. Charity Entrepreneurship also continues to incubate exciting nonprofit organizations — the Lead Exposure Elimination Project was one of LEAF’s first grantees, and is a key player in efforts to reduce lead exposure.
Potential Risks from Advanced AI: Our technical AI safety team spent roughly $25 million on projects to develop better benchmarks for the capabilities of LLM agents, some of which are already being used by the U.S. and UK governments, OpenAI, and Anthropic to help measure whether AIs can assist with cyberattacks and the creation of pandemics. In addition to ongoing work on interpretability and alignment, grantees contributed to early-stage research on new techniques for making AI systems safer and more secure, while also stress-testing whether current safety techniques sufficiently address known risks.
Meanwhile, our AI governance and policy team launched its first request for proposals, covering six subject areas, and continued to support a broad ecosystem of work on national policy, effective industry self-governance, international coordination, trend-tracking, strategic analysis, and talent development. We were proud of the increasingly widespread recognition of our grantee Epoch AI and their work to track and forecast AI developments, which were praised in The New York Times for bringing “much-needed rigor and empiricism to an industry that often runs on hype and vibes.”
Global Catastrophic Risks Capacity Building: Our GCRCB team funds organizations that connect and support people who want to work on reducing global catastrophic risks. In the education space, BlueDot Impact, a nonprofit that provides online courses on the potential transformative impacts of AI, was shortlisted for both education awards at the UK’s National AI Awards in 2024. Other work this year covered fundraising, career coaching, media production, career transition support, and research mentorship.
As the AI safety field matures, there’s an increasing range of opportunities for grantees to contribute. For instance, as part of Arcadia Impact’s AI Safety Engineering Taskforce, a team of expert software engineers is contributing to the UK AI Security Institute’s (UKAISI) work by designing novel AI evaluations (‘evals’) and building infrastructure for UKAISI’s platform for community-contributed open source evals. Reliable evals are a crucial tool for understanding AI advancements.
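To make that concrete, here is a minimal sketch of what a community-contributed eval can look like in Inspect, UKAISI’s open-source evals framework (assuming the current inspect_ai Python API). The task, sample, and model identifier below are invented for illustration, not an actual UKAISI eval.

```python
# Minimal, hypothetical eval sketch using the open-source Inspect framework
# (pip install inspect-ai). The task and sample are illustrative only.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def toy_qa():
    """Toy eval: does the model answer a simple factual question?"""
    return Task(
        dataset=[
            Sample(
                input="Name the capital of France. Answer with one word.",
                target="Paris",
            ),
        ],
        solver=generate(),  # send each sample's input to the model
        scorer=match(),     # score by matching the target in the output
    )

if __name__ == "__main__":
    # The model string is an example; Inspect supports many providers.
    eval(toy_qa(), model="openai/gpt-4o")
```

Real evals differ mainly in scale and rigor: larger datasets, multi-step solvers, and scorers designed to resist false positives.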
The team also launched two RFPs: one for work that builds capacity to address risks from transformative AI, and another for programs and events for GCR-related work. If these opportunities (or other opportunities the GCRCB team offers) appeal to you, consider applying!
Biosecurity & Pandemic Preparedness: We made grants to think tanks and nonprofits to support work on pandemic prevention and mitigation, as well as continued research on catastrophic biosecurity risks. As noted above, our grantees at the Mirror Biology Dialogues Fund worked with leading researchers to draw scientific attention to mirror bacteria, which could pose an unprecedented biological threat if created.
This year, Open Philanthropy also launched a new Forecasting program. We’ve often found forecasting to be a useful tool for our own decision-making; our grantmaking staff practice making and tracking predictions about core outcomes for grants they approve. In particular, we think forecasting techniques may be useful in measuring and helping reduce potential global catastrophic risks, where other signals of progress can be scarce. Early grants in the area have supported the larger forecasting ecosystem, as well as projects to improve forecasting within the AI safety space. The team has also lent assistance to our Technical AI Safety team in making grants to support the development of LLM forecasting benchmarks.
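As a toy illustration of the prediction-tracking habit described above, the sketch below scores a few resolved yes/no grant predictions with Brier scores (the squared error between a probability forecast and the 0-or-1 outcome; lower is better). The predictions are invented, and this is not our actual tooling, just a minimal example of the bookkeeping involved.

```python
# Toy sketch: scoring resolved grant predictions with Brier scores.
# All predictions below are invented for illustration.

def brier_score(forecast: float, outcome: bool) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - (1.0 if outcome else 0.0)) ** 2

# (probability assigned at grant time, whether the outcome occurred)
resolved_predictions = [
    (0.80, True),   # "Grantee publishes results within 18 months"
    (0.60, False),  # "Bill passes by end of year"
    (0.30, True),   # "Program doubles in size"
]

scores = [brier_score(p, o) for p, o in resolved_predictions]
mean_brier = sum(scores) / len(scores)
print(f"Mean Brier score: {mean_brier:.3f}")
# 0.0 is perfect; always guessing 50% scores 0.25.
```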
We also launched a search for a program officer to lead a new LMIC Growth program. We’ve long thought that economic growth is central to improving the well-being of those living in extreme poverty, and we’ve been curious about this space since before Open Philanthropy was launched, but we were skeptical about the prospects for concrete philanthropic opportunities to help on the margin. Over the last few years, though, we made a set of exploratory grants aiming to support growth in particular contexts, and early positive signs from those grants led us to want to double down. We’ll be announcing our new program officer in the coming weeks; they will direct at least $30 million in grants over the next three years.
I also wanted to share a brief update on our $150 million Regranting Challenge, a 2022 initiative to support other grantmakers in tackling projects and ideas outside of our program areas. I’m particularly excited about the work we’re supporting via the Gates Foundation to advance a second new TB vaccine candidate. The Phase IIb trial we’re supporting for MTBVAC, one of the most promising TB vaccine candidates, began administering doses last month. If successful, MTBVAC could be the first effective TB vaccine scaled up for adults; the current standard (and >100-year-old) Bacille Calmette-Guérin vaccine only provides partial protection for children.
On the negative side, the recent turmoil at USAID has halted the work of Development Innovation Ventures (DIV), our single largest grantee through the challenge. DIV works to identify and scale up cost-effective global health and development interventions; it has an exceptional track record, and we have high confidence in its leadership team. We’re actively working to ensure DIV’s lifesaving efforts can continue in some form.
2. Thematic updates from 2024
2.1 Appointing a new leadership team
Over the course of 2024, Open Philanthropy expanded from a team of just over 100 to nearly 150. As we’ve grown, so has the team that leads our work:
- Emily Oehlsen was promoted to President.[2] She took on oversight of our Biosecurity & Pandemic Preparedness program in mid-2024 and is adding oversight of our AI work this month, while continuing to oversee our Science and Global Health R&D programs. She joined Open Philanthropy in April 2021 and most recently led our GHW portfolio. In that role, she oversaw the allocation of more than $900 million in funding focused on scientific research, policy advocacy, global development, and phasing out the worst factory farming practices, in addition to helping launch LEAF. I’ve been consistently impressed with her intelligence, dedication, and judgment, and I couldn’t be more excited to have her as Open Philanthropy’s second-in-command.
- Otis Reid was promoted to Managing Director for Global Health and Wellbeing. While leading the GHW Cause Prioritization team, Otis has been a trusted advisor to Emily and me on core GHW decisions. Along the way, he has personally contributed to research that resulted in dedicated programs on global aid policy, lead exposure, and economic growth in low- and middle-income countries. Emily and I are confident that the GHW portfolio is in very capable hands with Otis.
- Derek Hopf was promoted to Managing Director overseeing all of Operations. Derek has been with Open Philanthropy since 2017, most recently as Director of Grants Management. He brings unparalleled knowledge of Open Philanthropy and its operating systems, remarkably high output, and a commitment to learning and excellence.
- Naina Bajekal joined Open Philanthropy in August 2024 as our inaugural Director of Communications after working in journalism for a decade, most recently as Executive Editor at TIME. Six months in, I’m confident we made the right decision, and excited about the team Naina is building.
- Liz Givens, our inaugural Director of Partnerships, joined in December 2024 after two decades of building social impact fundraising teams and working closely with philanthropists, most recently as Chief Development Officer at Tipping Point Community. Building our external partnerships is a key priority in the years to come, and Liz is hitting the ground running.
Except for Emily and me, this group is entirely different from the leadership team in last year’s post. That has meant a lot of internal change for our team to manage, including, for part of 2024, a number of interim leaders serving at once. And I’m fairly new to running all of Open Philanthropy, so these transitions have prompted some reflections and lessons for me on both the hiring and management sides. I’m excited about the new leadership team and think we’re set up for even more success going forward.
2.2 Building new partnerships
Open Philanthropy is independent from Good Ventures in large part because we have always had long-term ambitions to work with many donors. We spent our first decade laying the groundwork for Good Ventures to scale its giving to meet its founders’ goal of spending down in their lifetimes. With that work on track, and drawing on all the infrastructure and expertise we’ve built to be able to deliver impact at scale, in 2024 we began work to support other donors in earnest (though we had run various experiments earlier).
Our work with other funders takes several forms:
- Launching and managing formal pooled funds in areas that interest multiple funders. In 2024, we launched our first pooled fund, the Lead Exposure Action Fund (LEAF), which has raised $104 million to allocate over four years to fight lead poisoning globally. LEAF more than doubles the previous baseline level of philanthropic support devoted to fighting lead poisoning in low- and middle-income countries. (In spite of a higher estimated mortality burden, lead previously had >200x less philanthropic and aid funding devoted to it than malaria, which is itself a neglected disease.) I’m grateful for the trust that Alpha Epsilon Fund, the Gates Foundation, the ELMA Foundation, the Livelihood Impact Fund, Lucy Southworth, the 10x Better Foundation, Good Ventures, and other supporters are putting in us to deliver progress against lead poisoning. We’re planning to launch another pooled fund on a different topic in the coming weeks, and may try to pull together a third later in 2025. While these funds require more upfront work to define our investment thesis and recruit partners, they do have significant benefits. Program officers are empowered to make fast decisions, and the funds can act as a “one-stop shop” for grantees who might otherwise have to cobble together funding from many different sources. Participating funders get to draw on shared infrastructure and expertise, and have the opportunity to learn together.
- Individual funders adding flexible funding to a specific Open Philanthropy program they’re excited about. For instance, the Livelihood Impact Fund has agreed to contribute funding to expand our new LMIC growth program. This approach has many of the same costs and benefits as pooled funds, though at lower magnitudes. We’re less likely to mobilize a large amount of funding this way, but the process is lighter and faster for both us and external funding partners.
- Bespoke advice on specific grant opportunities across one or multiple causes a funder is interested in. Individual Open Philanthropy program officers have done this for other donors on an ad hoc basis for many years, but we’ve invested in scaling and systematizing these efforts over the past year, and have recommended grants totaling tens of millions of dollars across dozens of partner donors and nearly our full range of causes. This process is very flexible for partners, though it requires us to invest in translation and matching for individual opportunities. Overall, we believe that this is time well spent — connecting other donors to outstanding opportunities increases both available funding for high-impact organizations and the diversity of donors these organizations can access.
- Advising philanthropists — typically new ones — on strategic cause selection. This can take a range of forms, from quick initial conversations to deep ongoing engagement. This makes up less of our day-to-day partnerships work currently, since nearly all donors we work with have fairly well-defined interests. But we think a lot of value from our future partnerships might come from this type of advising.
If you are a funder interested in collaborating on any of the above, please reach out! We typically advise partners who are looking for recommendations on $1 million or more in annual donations and are interested in strategic cause selection or one or more of our focus areas.
Cumulatively, these efforts meant that funders beyond Good Ventures accounted for a sizable minority (~15%) of the funds we raised or directed in 2024. One implication of our growing work with other donors is that it’s increasingly incorrect to think about Open Philanthropy as a single unified funder making top-down decisions:
- Increasingly, our resources come from different partners who are devoted to different causes and have different preferences and limitations for their giving. Their philanthropic dollars are not fungible, and we would be doing them a disservice if we treated them as if they were. We don’t yet have clear principles on how we handle potential risks of fungibility beyond being upfront with donors about the risks and guardrails in specific cases. But it’s clearly less true than in the past (not that it was ever perfectly true) that the distribution of grants we advise across causes reflects our leadership’s unconstrained recommendations.
- This also means that our grants database offers an increasingly inaccurate picture of our overall work. (Our database generally does not include funding we advise from non-GV donors, since we don’t want to take undue credit for their work or cause confusion with GV grants.) Since our current database is almost totally redundant with Good Ventures’ database, we’re thinking about potential changes and may deprecate it.
I’m very excited to have Liz on board to oversee and expand our partnerships work across our cause areas. Collaborating more with other donors is a top priority for us in 2025 — I’m grateful to the growing team working on this and excited to see what they will accomplish.
2.3 Tracking continued rapid progress in AI
Open Philanthropy has been supporting work on AI safety, security, and governance since 2015, when few other philanthropists were focused on these issues. In 2016, my co-founder Holden Karnofsky wrote:
[AI] is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science… I believe there is a nontrivial likelihood (at least 10% with moderate robustness…) that transformative AI will be developed within the next 20 years… By and large, I expect the consequences of this progress… to be positive. However, I also perceive risks. Transformative AI could be a very powerful technology, with potentially globally catastrophic consequences if it is misused or if there is a major accident involving it.
I think that early take and our associated investments in field-building seem very prescient in hindsight.
In 2024, progress in frontier AI models continued to move rapidly, with new benchmarks to measure AI capabilities regularly being made obsolete shortly after release. With this progress has come new risks: there is emerging evidence that leading AI models sometimes take strategic actions to avoid being turned off or retrained, and in the past month, OpenAI and Anthropic have said their models are approaching risk thresholds for aiding in biological weapon design.
Despite these developments, 2024 saw less policy appetite for AI safety and security than 2023. For example, after the California legislature passed a bill to protect AI company whistleblowers and require developers to conduct risk assessments before deployment, the governor vetoed it. As another example, the U.S. Senate’s much-discussed 2023 AI Insight Forums and SAFE Innovation Framework didn’t lead to any meaningful 2024 AI legislation. The year also saw the emergence of a more explicit, well-organized, and vocal lobby opposed to AI safety regulation. This trend has continued into 2025 — for example, France’s follow-up to AI Safety Summits in London and Seoul was renamed the AI Action Summit and sidelined safety discussions.
It’s hard to say how much of this opposition was structurally inevitable. Some opposition seems readily explainable given the increasing economic and strategic stakes of AI investments and the lack of large public harms during a period when safety concerns became increasingly publicly salient, but I didn’t predict this degree of countermobilization against safety in advance. Regardless, with the increased stakes and salience on both sides, debates on AI safety became increasingly polarized and adversarial across the board. I wonder whether there were avoidable oversteps from safety advocates that may have contributed to the recent backlash, and whether the AI safety community could have realistically been better prepared for the more adversarial environment.[3]
While progress on the policy front in 2024 was disappointing, technical AI safety research has been increasingly fruitful. Thanks in part to the explosion in LLM capabilities, we’ve begun to see some suggestive (but not conclusive) evidence for long-predicted risks from more advanced AI capabilities, such as emergent self-preservation or exploiting limitations of human oversight. We expect the returns to technical AI safety research to continue growing over the next few years as frontier AIs become capable enough for researchers to empirically test specific predictions of risks from transformative AI, and iterate on countermeasures against the risks that arise. In response, we’ve built out a technical AI safety team under Peter Favaloro to make grants to accelerate this work. We have an open RFP right now that outlines the research directions we see as most promising and aims to allocate at least $40 million, and potentially significantly more, to technical AI safety efforts over the next few months.
Ajeya Cotra previously managed our grantmaking in technical AI safety research, but decided in 2024 to focus more on tracking AI capabilities and planning for the possibility that transformative AI is developed in the next few years. (Tracking and forecasting AI progress is another area where we think philanthropically-funded researchers can make a big difference.) Peter took over this portfolio in late 2024, after a failed search for a senior external hire with extensive technical AI safety experience. All in, we committed roughly $50 million to technical AI safety research projects in 2024.
In retrospect, I think that rate of spending was too slow, and we should have been more aggressively expanding our support for technical AI safety work earlier; we’re now playing catchup. The key reasons for not prioritizing this higher earlier were a mix of difficulty making qualified senior hires and disappointment with the returns to our past technical AI safety spending. On the latter point, I think we didn’t update fast enough on how AI progress was changing the potential returns to technical work. (Of course, it remains to be seen how this scale-up effort will play out.)
Another place where we’ve probably been too slow to hire dedicated staff is around AI information security (e.g., investing in efforts to develop better cybersecurity to prevent AI model weights from being stolen). This work hasn’t had a clear internal owner in recent years, despite our early recognition of its importance. We have now made a number of grants to promote AI information security and are planning to put more dedicated effort into this space going forward.
Similar to Holden back in 2016, I expect the most likely outcomes from progress in AI to be profoundly positive. But we continue to think that potential catastrophic risks are worth paying attention to and investing to prevent. As AI progress has accelerated and timelines to transformative AI have seemingly shortened, a central (and open) question for Open Philanthropy is whether we should advise Good Ventures to spend significantly faster to mitigate potential risks. A major — but by no means the only — input to that decision is whether we can find opportunities that can cost-effectively (and rapidly) absorb the available capital. We have our work cut out for us.
3. Looking forward to 2025
The bulk of our staff time goes to researching and recommending grants, and I expect that to continue in 2025. A few questions I’ll personally be focusing on:
- What initial areas of investment does our new LMIC growth program officer identify as being most promising?
- What happens with USAID and how should our Global Aid Policy program react over the medium term?
- How quickly are advanced AI systems being deployed in real-world contexts, and how much can we scale effective investments in safety and security?
In addition to our ongoing grantmaking, our other top priorities for 2025 focus on continuing to improve as an organization:
- With so many recent changes on Open Philanthropy’s leadership team, we still have a number of “interim” team leaders lower in the organization, and some places where we haven’t yet written up clear strategies. This year, we’re aiming for all our division heads to have strong permanent team leads in place below them, and to invest in updated program, division, and org-wide strategies, the last of which will be a priority for me personally.
- On the partnerships side, we want to continue to expand our work while ensuring that all donors we work with have a high-quality giving experience. We’re planning to significantly increase our partnerships staffing so we have more bandwidth to support (and deliver on that high-quality experience for) other donors. I’m excited about the possibility of developing more pooled funds in areas that could particularly benefit from more support and might have latent donor appetite.
- On the communications side, we’re also going to be substantially expanding our staffing, with two new Senior Communications Officers and a Director of Government Relations likely to start in the next month or so. We’re currently revising our communications strategy, but directionally think we should be doing more to tell our own story as well as supporting grantees with effective communications, and we’re excited to have more bandwidth on the team to enable that.
Finally, across the team, we’re hiring. (At the time of publishing, we’re looking for operations team members, a Head of People, and a Chief of Staff to Andrew Snyder-Beattie on our Biosecurity & Pandemic Preparedness team.) If you don’t see something you want to apply for, you can fill out our general application, and we’ll reach out if we post a position we think might be a good fit. We’re always looking for referrals; if you refer someone and we hire them, we’ll pay you $5,000.
4. Appendix: Publications and media
4.1 OP team coverage
- Open Philanthropy Chair Cari Tuna was featured in a Vanity Fair article about six women making strides in philanthropy.
- Senior Advisor Ajeya Cotra participated in an AI task force at the DealBook Summit in December, moderated by New York Times tech columnist Kevin Roose. Coverage of the Summit here.
- Research Fellow Oliver Kim appeared on a podcast by The Atlantic to discuss Taiwan and economic growth. He also launched a blog about economic development.
- Lewis Bollard, who leads our farm animal welfare work, appeared on The Truth Podcast hosted by Vivek Ramaswamy to discuss factory farming and the conservative case for animal rights.
- Senior Research Analyst Joe Carlsmith published an essay about what it would mean to “solve” the AI alignment problem, and appeared on the Dwarkesh Podcast to discuss a different essay, “Otherness and control in the age of AGI.”
- Inside Philanthropy profiled the philanthropic giving of Cari Tuna and Dustin Moskovitz, Open Philanthropy’s main funders.
- Former USAID Administrator Samantha Power and I co-wrote a Washington Post op-ed about the launch of the Partnership for a Lead-Free Future. The launch was covered by a number of publications including The New York Times, Vox, NPR, and Bloomberg.
- Research Fellow Luca Righetti guest-published two posts on Planned Obsolescence, a blog run by Ajeya Cotra and Vox journalist Kelsey Piper: “Dangerous capability tests should be harder,” and “OpenAI’s CBRN tests seem unclear.”
- Matt Clancy, who leads our work on innovation policy, recently appeared on the 80,000 Hours podcast, the Macroscience Podcast, and The Entrepreneur’s Ethic. He continues to run his popular blog, New Things Under the Sun.
- Emily Oehlsen, who led our Global Health and Wellbeing work before being promoted to President, published an article in the Journal of Economic Perspectives about philanthropic cause prioritization. She also appeared on VoxDevTalks to discuss Open Philanthropy’s approach to cause prioritization.
- GHW Program Director Jacob Trefethen started a blog covering scientific research and global health R&D, and GHW Chief of Staff Deena Mousa launched a newsletter covering global health and development.
4.2 Grantee coverage
- The New York Times covered snakebite therapeutics research by professors at the Liverpool School of Tropical Medicine.
- In his reporting on lead exposure for The New York Times, Nicholas Kristof praised the work of Pure Earth, one of LEAF’s first grantees.
- The Times reported on research by Dr. Michael Fischbach to develop vaccines that can be applied as a cream rather than administered through needles.
- Devex reported on the establishment of the EPIC Air Quality Fund. The fund aims to expand access to air quality data by installing air quality monitors and providing communities with the resulting information.
- Epoch AI was recently featured on The New York Times’ “2024 Good Tech Awards” list.
- Good Ventures grantee Dr. Joe Arboleda-Velásquez appeared on NPR to discuss his research on how a specific APOE3 genetic variant can delay Alzheimer’s onset, potentially leading to new dementia treatment approaches.
- In December, the Mirror Biology Dialogues Fund worked with prominent experts on a Science article (and accompanying technical report) about the potential dangers of mirror life. The article was covered in publications like The New York Times, The Guardian, and Scientific American.
- Open New York helped to pass “City of Yes for Housing Opportunity,” New York City’s largest zoning overhaul in over 60 years. The city planning department expects City of Yes to create 80,000 new homes over 15 years.
- The New York Times covered the Egg-Tech Prize to develop highly accurate and commercially viable in-ovo sexing technology. We co-funded the prize alongside the Foundation for Food & Agriculture Research.
- More than a dozen grantees were featured on Vox’s “Future Perfect 50” list.
For more updates, see the “In the News” section of our website.
Footnotes
1. These two focus areas will soon be merged.
2. Cari Tuna’s title changed to Chair; her relationship to and responsibilities at Open Philanthropy will remain the same.
3. For instance, at least some thoughtful and influential opponents of AI safety regulation seem to have been mobilized by the original version of the proposed California legislation. And I personally think opposition to safety efforts from open source advocates has been costly, and perhaps could have been avoided had safety advocates taken alternative approaches, given the benefits that open-weight models have delivered for safety to date.