Note: this post discusses a number of technical and philosophical questions that might influence our overall grantmaking strategy. It is primarily aimed at researchers, and may be obscure to most of our audience.
We are dedicated to learning how to give as well as possible. Thus far, we’ve studied the history of philanthropy, adopted an overall approach we call “strategic cause selection,” chosen three criteria and used them to select some initial focus areas, embraced hits-based giving, and learned many notable lessons about effective giving. These and other judgment calls are subject to revision, but overall we feel reasonably happy about these big-picture choices and “lessons learned.”
However, we also feel that we have many other things left to learn about how to give as well as possible — not just about the details relevant to current and potential focus areas, but also about how we should think about certain “fundamental questions” that could greatly affect our overall approach to giving and our choice of focus areas.
This post briefly explains some of these fundamental questions, especially those that seem to us relatively neglected, and also potentially tractable for scientists and scholars with relevant expertise and interest.1 At the end of each section, I list some example questions we’d like to see examined in some depth.
If readers know of helpful existing literature on these questions, or of ways that we could support further work, we’d appreciate their letting us know via the comment section or by contacting us directly.
1. Cross-cause cost-effectiveness comparisons
We now recommend grants in focus areas as diverse as criminal justice reform, farm animal welfare, and potential risks from advanced artificial intelligence. Given limited resources, how should we compare the expected value of a dollar spent in one of these areas vs. another?
GiveWell also faces such questions about its top charities, even when comparing between charities within a single cause (global health). For example, the Against Malaria Foundation primarily saves lives, while other top charities primarily improve individuals’ health, increase their income, or do a mix of both. How should one compare the value of saving lives, improving health, and increasing income?2
GiveWell’s current solution for comparing income increases to health improvements is to ask its staff members to make ethical judgments comparing the value of an increase in one’s income to the value of an extra year of healthy life.3 The median staff judgment is then used (along with other variables) to calculate final cost-effectiveness estimates.
Comparing health improvements or income increases to lives saved is arguably even more philosophically contentious. To compare improved health to lives saved, GiveWell asks its staff members to make ethical judgments comparing the value of an extra year of healthy life to the death of a young child.4 The median staff judgment is then used to calculate final cost-effectiveness estimates. Meanwhile, to compare increased income to lives saved, GiveWell first converts income increases into equivalent health improvements (using the median answer to the previous prompt), and then converts health improvements to lives saved (using the median answer to the prompt about an extra year of healthy life vs. the death of a young child).
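To make this two-step conversion concrete, here is a minimal sketch in Python. The exchange rates below are invented placeholders rather than GiveWell’s actual median staff judgments; the point is only to show how two median judgments compose into a single common unit.

```python
# Hypothetical illustration of GiveWell's two-step conversion chain.
# The exchange rates below are made-up placeholders, NOT GiveWell's
# actual median staff judgments.

from statistics import median

# Step 1: each staff member judges how many years of doubled income
# are as valuable as one extra year of healthy life.
income_doublings_per_healthy_year = median([2.0, 3.0, 4.0])  # -> 3.0

# Step 2: each staff member judges how many extra years of healthy
# life are as valuable as averting the death of a young child.
healthy_years_per_death_averted = median([25.0, 35.0, 50.0])  # -> 35.0

def income_doublings_to_deaths_averted(doublings: float) -> float:
    """Convert years of doubled income into 'child deaths averted'
    equivalents by composing the two median judgments."""
    healthy_years = doublings / income_doublings_per_healthy_year
    return healthy_years / healthy_years_per_death_averted

# Example: a program producing 1,000 person-years of doubled income
# counts as roughly 9.5 young children's deaths averted under these
# (made-up) exchange rates.
print(income_doublings_to_deaths_averted(1000.0))
```

The structure, not the numbers, is the point: once both medians are fixed, any mix of income, health, and mortality benefits can be expressed in one unit and compared per dollar.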
Now for an Open Philanthropy Project example. To compare the cost-effectiveness of a grant to the Center for Global Development (CGD) to the cost-effectiveness of marginal dollars spent on GiveWell top charity GiveDirectly,5 we modeled the expected impact of CGD’s work in terms of additional dollars sent to the global poor (as a result of CGD’s work),6 since GiveDirectly also transfers dollars to the global poor.
However, most cross-cause comparisons we’d like to make are less straightforward than this. In some strict sense, such diverse grantmaking opportunities may be incommensurable. But it remains the case that we have limited resources, and we see excellent giving opportunities across a wide range of causes. So, we want a solution for cross-cause cost-effectiveness comparisons that, while not perfect, is “good enough” to usefully guide our giving.
We suspect there is room to improve GiveWell’s current method for comparing health improvements, income increases, and lives saved. But there is perhaps even more room for improvements to our understanding of how to make comparisons across the Open Philanthropy Project’s diverse focus areas. Currently, our way of comparing opportunities across focus areas is very intuitive and heuristic, and it could likely benefit from greater rigor and stronger theoretical foundations.
Here are some example questions about cross-cause cost-effectiveness comparisons we’d like to see examined in more depth (either via original research, critical analysis of the often-substantial existing literature, or a mix):
- How should we think about the “disability paradox” (Schroeder 2016)?
- How should we think about age-weighting?
- How should we capture and reflect a range of ethical judgment calls when making cross-cause cost-effectiveness comparisons?
- One component of the cost-effectiveness analyses for some grants (e.g. in support of a campaign to close Rikers Island) includes the value of saving taxpayer dollars. Using a logarithmic model of income and subjective well-being (see e.g. Sacks et al. 2013), one might start with the assumption that dollars given to Americans are 100 times less valuable than dollars given to those with 1/100th of the income (the derivation behind this figure is sketched just after this list). However, there are many complications to this model one might examine, for example the fact that a small portion of the U.S. federal budget is spent on science, which benefits not just Americans but everyone.
- How convincing are the usual arguments for the logarithmic model of income and subjective well-being? Is there a better-justified and equally tractable model available?
- How should we compare the death or suffering of humans to the death or suffering of various non-human species? (This overlaps with questions about moral patienthood; see below.)
- In general, what are some plausible strategies for comparing the cost-effectiveness of small, hard-to-measure reductions in certain kinds of catastrophic risk to the cost-effectiveness of other kinds of grants? What are their pros and cons?
- We expect to make many grants funding scientific research. How should we think about the expected value of different kinds of scientific research? How have people estimated the long-term human benefit of past investments in scientific research? What kinds of new estimates could be conducted?
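As an aside on the logarithmic model invoked in the Rikers Island item above, here is the standard derivation behind the “100 times” figure (a sketch that assumes pure log utility of income and nothing else):

$$u(c) = \ln c \quad\Longrightarrow\quad u'(c) = \frac{1}{c}, \qquad \frac{u'(c/100)}{u'(c)} = \frac{c}{c/100} = 100.$$

Under this assumption, a marginal dollar is worth 100 times as much to someone with 1/100th the income; the complications noted above (e.g. spillover benefits of U.S. science spending) are reasons the true ratio could differ.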
2. Making decisions under different kinds of uncertainty
Consider several different kinds of uncertainty:
1. I’m 50% confident that a coin (which I know to be a fair coin) will land on heads.
2. Based on his track record of showing up at parties like this, I’m 50% confident my friend Matt will show up at the party on Friday. I could also call Matt and ask him some questions to update my level of confidence.
3. Based on many different kinds of evidence, plus my gut intuitions, I’m ≥10% confident that “transformative AI” will be created within 20 years. I don’t know what else I could do to significantly narrow my confidence intervals on this question, though of course things beyond my control (e.g. a sudden acceleration or stall in the field’s progress as a whole) could cause me to update my level of confidence.
4. I’m not clear what I mean by “consciousness,” and I barely know what kind of evidence I should think is relevant to the question of whether a certain animal is “conscious” (and therefore likely worthy of moral consideration), but I notice that if I were asked to bet on whether chimpanzees will turn out to be “conscious” in roughly the way I’m intuitively thinking about consciousness today, and I somehow knew the bet would be definitively resolved, I would take bets consistent with my believing there’s a greater than 85% chance that chimpanzees are “conscious.”
5. Pascal’s Mugging: Suppose someone tells me that if I give them five dollars, they will use their magic powers (or their contacts with an advanced alien civilization) to benefit trillions of beings throughout the observable universe. Even if I assume that this claim is 99.99999% likely to be wrong, the expected value is still high. Can I really justify being 99.99999% confident the claim is wrong? It wouldn’t be the first time someone was grossly wrong about the basic structure and affordances of reality.
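To spell out the arithmetic in case (5), using the numbers given there: a 99.99999% chance of being wrong leaves a $10^{-7}$ chance of being right, and “trillions of beings” is on the order of $10^{12}$, so naive expected value maximization prices the five dollars at

$$10^{-7} \times 10^{12} = 10^{5} \text{ beings benefited in expectation,}$$

which would dominate nearly any ordinary grant. The question is whether the probability assignment, the decision rule, or both should give way.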
The theory of decision-making under uncertainty of type (1) — sometimes called “risk” — is well-understood and widely considered to be solved by expected value maximization. But is expected value maximization the right way to think about decision-making in the other cases? We’ve wrestled with this question before,7 but we don’t feel that we yet have a fully satisfactory answer.
Our question here is methodological rather than simply philosophical. Philosophically we are fairly comfortable with a broadly Bayesian framework, in which all relevant uncertainty can be captured by the right set of conditional probabilities. We worry, though, that the practical methodology of simply introspecting one’s subjective probabilities breaks down (and fails to capture all the relevant information in our heads) in situations such as (3) through (5) above.
We plan to write more about this in the future. For now, here are some example questions about decision-making under different kinds of uncertainty that we’d like to see examined in more depth:8
- What are some key dimensions along which uncertainty can vary? For example: expected stability of the estimate over time, ability of the estimator to narrow their confidence interval through their own actions, whether the estimator has strong expectations about how well-calibrated they are on questions in the relevant reference class, degree of model uncertainty, and so on.
- How should we methodologically handle these different kinds of uncertainty, if at all?
- Is there a single set of principles that gives “sensible” answers in all the cases above, including Pascal’s Mugging?
- Is there a reasonable methodology for making decisions given uncertainty about which decision theory is correct?
- How should we think about “cluelessness” (especially about the long-run consequences of options), and what can be done about it?9
3. Worldview diversification
As explained in an earlier blog post, our giving takes a “worldview diversification” approach, which means we put significant resources behind each worldview that we find highly plausible. For example, our work on farm animal welfare is premised on a worldview according to which at least some non-human animals are worthy of substantial moral consideration.
However, as described in that post, our current approach to worldview diversification is largely intuitive and fairly “rough,” and we welcome ideas for how to practice worldview diversification in a more principled, systematic way.
Here are some example questions about worldview diversification we’d like to see examined in more depth:
- How reasonable is our case for worldview diversification? What are the most important considerations that post overlooks?
- Is there a formal or semi-formal framework that would improve our (currently highly intuitive) method for implementing worldview diversification?10 (One toy illustration of what such a framework might look like appears after this list.)
- What is the most reasonable approach to dealing with moral uncertainty (MacAskill 2014)?
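On the question of a formal framework raised above: purely as a toy illustration, here is one naive semi-formal scheme in which a budget is allocated across worldviews in proportion to one’s credence in each, restricted to worldviews above a plausibility threshold. Everything here (the worldview labels, credences, and threshold) is invented for the sake of the example; this is not our actual method, and the sketch mainly serves to make the question concrete.

```python
# Toy sketch of one possible semi-formal approach to worldview
# diversification: allocate budget in proportion to credence,
# restricted to worldviews above a plausibility threshold.
# All names, credences, and the threshold are invented; this is
# NOT Open Philanthropy's actual method.

def diversified_allocation(credences: dict[str, float],
                           budget: float,
                           threshold: float = 0.1) -> dict[str, float]:
    """Split `budget` across worldviews with credence >= threshold,
    in proportion to their credences."""
    plausible = {w: p for w, p in credences.items() if p >= threshold}
    total = sum(plausible.values())
    return {w: budget * p / total for w, p in plausible.items()}

# Hypothetical credences in three worldviews:
print(diversified_allocation(
    {"humans-only": 0.5, "animals-included": 0.35, "longtermist": 0.15},
    budget=100.0,
))
# -> {'humans-only': 50.0, 'animals-included': 35.0, 'longtermist': 15.0}
```

Even this toy version immediately raises the questions above: should credence be the only input, or should the stakes under each worldview and diminishing returns within each bucket also enter the allocation?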
4. Philanthropic coordination theory
GiveWell has written previously about the “giver’s dilemma”:
Imagine that two donors, Alice and Bob, are both considering supporting a charity whose room for more funding is $X, and each is willing to give the full $X to close that gap. If Alice finds out about Bob’s plans, her incentive is to give nothing to the charity, since she knows Bob will fill its funding gap. Conversely, if Bob finds out about Alice’s funding plans, his incentive is to give nothing to the charity and perhaps support another instead. This creates a problematic situation in which neither Alice nor Bob has the incentive to be honest with the other about his/her giving plans and preferences — and each has the incentive to try to wait out the other’s decision.
We’ve also discussed three types of approaches to the giver’s dilemma: “funging” approaches, “matching” approaches, and “splitting” approaches (for explanations, see here).
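To make the strategic structure of the dilemma concrete, here is a minimal Python sketch of the two-donor game. The payoff numbers are invented for illustration; the only assumptions are that each donor values the gap being filled at more than it costs them to fill it, and that dollars kept have a next-best use.

```python
# Minimal sketch of the giver's dilemma as a two-donor game.
# Payoff numbers are invented for illustration.

GAP = 100.0           # charity's room for more funding
VALUE_OF_GAP = 150.0  # value each donor places on the gap being filled
ALT_VALUE = 1.2       # value per dollar of a donor's next-best option

def payoff(my_gift: float, their_gift: float) -> float:
    """A donor's payoff: shared value from however much of the gap is
    filled, minus the opportunity cost of the dollars they gave."""
    filled = min(my_gift + their_gift, GAP)
    return VALUE_OF_GAP * (filled / GAP) - ALT_VALUE * my_gift

# If Bob fills the gap, Alice's best response is to give nothing:
print(payoff(0.0, 100.0))    # 150.0
print(payoff(100.0, 100.0))  # 30.0
# But if Bob gives nothing, Alice prefers to fill the gap herself:
print(payoff(100.0, 0.0))    # 30.0
print(payoff(0.0, 0.0))      # 0.0
```

Each donor’s best outcome is the other filling the gap (payoff 150 vs. 30), so each has reason to misstate their plans and wait the other out, exactly as in the quoted passage.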
The best solution for philanthropic coordination can vary depending on several factors, for example whether we are trying to coordinate with a small number of other donors (e.g. one or two other foundations who work in one of our focus areas) or a large number of other donors (e.g. all those who give to GiveWell top charities each year). We have some tentative ideas about how to deal with these coordination issues in some cases, but we are not very confident our current ideas are best.
According to economist S. Nageeb Ali (see here), little to no academic research has addressed these issues of philanthropic coordination, and we agree with him that studying these issues further could be productive.
Here are some example questions about philanthropic coordination theory we’d like to see examined in more depth:
- If we are considering funding a project that we suspect another similarly-sized funder would also be willing to fund, what are some possible approaches we can take to negotiating with them, and what are the pros and cons of each approach? How does the optimal behavior change with the degree to which the other funder (a) is of similar / different size compared to us; (b) has similar / different values to us; and (c) is easier / harder for us to communicate with?
- How do these principles extend to times when we are dealing with a diffuse set of donors that we lack key information about (such as the total size of the group, value alignment, etc.)?
- We typically avoid situations in which we provide >50% of an organization’s funding, so as to avoid creating a situation in which an organization’s total funding is “fragile” as a result of being overly dependent on us. To avoid such situations, one approach we’ve sometimes taken is to fill the organization’s funding gap up to the point where we are matching all their other donors combined. But we have several concerns about this strategy. For example, does this strategy create highly problematic incentives for other donors? Does it lead to a situation in which some of the organization’s donors should “wait us out” to make the organization’s funding gap appear larger than it otherwise would, while others should “front-run us” to make our room for matching other donors seem larger?
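To make the matching approach described in the last item concrete, here is a minimal sketch of the rule and of how other donors might game it. All dollar figures are invented.

```python
# Sketch of the matching rule described above: fill the remaining
# funding gap, but give no more than all other donors combined, so
# that our share of total funding never exceeds 50%.
# Dollar figures are invented for illustration.

def our_grant(other_donors_total: float, total_need: float) -> float:
    """Fill the remaining gap, capped at the combined giving of all
    other donors (which keeps our share of total funding <= 50%)."""
    remaining_gap = max(total_need - other_donors_total, 0.0)
    return min(other_donors_total, remaining_gap)

# Suppose the organization needs $1M in total.
print(our_grant(300_000, 1_000_000))  # 300000: the match binds on others'
                                      # total, so a donor who gives early
                                      # ("front-runs us") increases our grant.
print(our_grant(700_000, 1_000_000))  # 300000: the remaining gap binds, so a
                                      # donor who withholds ("waits us out")
                                      # makes the gap look larger and
                                      # increases our grant.
```

Depending on which side of the cap binds, the rule rewards either early giving or strategic delay, which is precisely the pair of incentive problems raised above.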
5. Moral patienthood and moral weight
One of the key questions we ask when choosing focus areas or grants is: “Per dollar spent, how many individuals could our funding benefit, and how much might it benefit them?” However, this raises a further question: Which types of beings merit moral concern? Or, to phrase the question as some philosophers do, “Which things are ‘moral patients’,11 and what are the dimensions along which moral patients can be benefited?”12 (See also our blog post on radical empathy.)
To illustrate: our work on farm animal welfare is premised on the view that at least some animals are moral patients. But which animals are moral patients, and how should we weigh the death or suffering of one kind of animal against that of another?
I am currently preparing a report summarizing some of my early thinking on moral patienthood, but my initial findings do not come close to “settling” the issue to our satisfaction, and my report does not examine the further issue of “moral weight.” We think additional work on these questions could be valuable. Some conversations from this investigation have already been published, and provide some sense of how I am thinking about the relevant issues. My report on moral patienthood will also include a list of questions I’d like to see examined in further depth.