This post originally appeared in the monthly farm animal welfare newsletter written by Lewis Bollard, program officer for farm animal welfare. Sign up here to receive an email each month with Lewis’ research and insights into farm animal advocacy. Note that the newsletter is not thoroughly vetted by other staff and does not necessarily represent consensus views of Open Philanthropy as a whole.
Artificial intelligence is getting smarter. AIs now outperform humans on image recognition and reading comprehension tests. Some AI experts predict that artificial general intelligence (AGI) — AI that can outdo humans at all cognitive tasks — may be only decades away.
What does this all mean for animals? To optimists, AI could soon develop a new generation of cheaper and tastier animal product alternatives, while enabling robots to tend to every need of the farm animals that remain. To pessimists, AI could make factory-farmed meat even cheaper, largely by removing the last few constraints on how badly we can treat farm animals.
And that’s just the near-term. Optimists expect transformative AGI to spark an explosion in wealth, technology, and even our moral circle, rendering factory farming an archaic relic of the past. Pessimists counter that AGI may just lock in our existing moral biases against farm animals, and perhaps even create a vast new population of digital beings for us to abuse.
Who’s right? I don’t know. But I think we can learn a little from what’s happened so far and the track record of technological change. That leaves me pessimistic in the near term but a bit more optimistic in the longer run.
[Note: Open Philanthropy also works on reducing the existential risks that AGI may pose to humanity. This newsletter is unrelated to that work and doesn’t reflect the views of our staff working on AI safety and policy.]
Facial recognition for sheep. It’s unclear whether any farmers are actually using this, but it looks cool. Source: The Veterinary Health Innovation Engine.
Artificial expectations
Optimists see AIs revolutionizing alternative proteins. Climax Foods’ Deep Plant Intelligence Platform and NotCo’s Giuseppe Platform both claim to model countless permutations of plant-based ingredients to create products that best match the taste, mouthfeel, and aroma of animal proteins. As AIs improve, these products should too.
Others imagine more humane future farms, pointing to new AI applications that monitor animals’ needs through arrays of sensors. AudioT monitors chickens’ distress calls to warn farmers of diseases, crushing, and other problems. Smartguard uses sensors to prompt sows to move when they’re about to crush their piglets. And AI4Animals monitors the welfare of animals in Dutch slaughterhouses in real time.
But it’s hard to tell how much of the AI use in alternative proteins to date is about better products — or just better pitch decks for investors. AI works best with vast troves of data, which are mostly lacking for alternative protein ingredients. And even where data exist, they’re normally hidden behind a thicket of patents and trade secrets erected by startups.
Plus, for every welfare-enhancing AI application, there are many more to boost factory farms’ productivity. Microsoft’s Smartfarm helped a shrimp producer increase its output per acre by 50%, presumably by helping it crowd shrimp more closely together. IBM’s Pig Scale optimizes pigs’ weights for slaughter. Digital phenotyping may allow chicken breeders to engineer their birds to grow even faster than their fragile bodies can support.
Nor is the history of agricultural innovation encouraging. The meat industry has quickly adopted efficiency-enhancing technologies, from antibiotics to mechanized slaughter. But it has largely shunned welfare-enhancing technologies, from immunocastration to in ovo sexing. Will an industry too stingy to install air conditioning or fire sprinklers really spring for AI sensors to help its animals?
ChatGPT is not an ethical consequentialist. Source: ChatGPT
Idle speculations about things that may never happen
The longer-term, though, may look very different. If we develop all-powerful AGI, the world as we know it could be upended. What might that mean for animals?
Optimists imagine a utopia of vast wealth, where the cruel efficiencies of factory farms are obsolete; vast technological progress, where high-tech alternative proteins beat out old-fashioned meat; and even vast moral progress, where rational AIs inspire us to apply our existing moral intuitions (don’t torture some animals) consistently (don’t torture all animals).
Pessimists imagine a dystopia where AGI enables us to export factory farming across the galaxy, AI systems “lock in” our current blinkered moral attitudes toward animals forever, and we exploit vast numbers of sentient AIs much as we have farm animals. (Think the prospect of AI sentience is crazy? In a recent survey of 166 consciousness researchers, most said that machines “probably” or “definitely” could have consciousness.)
I’m not sold on either vision. The world already has enough wealth to end factory farming many times over; we just choose not to. Nor is it clear that we’ll let AIs upend our inconsistent morals; current AI systems mostly just reflect them back at us (see above). But I also doubt we’ll export factory farms to space, both because it would be a cumbersome way to feed space colonies and because we may send digital beings who don’t need food at all. And I’m unsure if AGI will “lock in” anything, good or bad, so long as humans remain in control.
That leaves one major worry and one major opportunity. The worry is the plight of AIs if they ever become sentient. My guess is that we’ll probably treat them well if they present as human-like (think digital uploads of you and your friends) and poorly if they present as animal-like (think back-end cogs in a server farm).
The opportunity is what AGI could achieve for alternative proteins. AI can only make factory farming so much more efficient; the animals involved will always constrain things. By contrast, AIs starting with cells — of animals, plants, and fungi — could get much more creative, structuring the cells into tastier and more nutritious combinations than evolution alone could.
I don’t think this is how AI calibration actually works. But maybe it’s how the Golden Rule does?
Toward animal-inclusive AI
What can we do? Given the unpredictability of future AI advances, it’s hard to know. But here are a few ideas that seem promising:
Encourage AI makers to consider animals. All of the top AI labs have adopted missions, charters, or principles focused on helping humans: OpenAI (ensure AGI “benefits all of humanity”); Anthropic (“ensure transformative AI helps people and society flourish”); Google DeepMind (“advance science and benefit humanity”); Meta (AI “should work equally well for all people”); Microsoft (“inclusive and respectful of human rights”); IBM (“the purpose of AI is to augment human intelligence”). None mention animals.
Advocates could push for those labs to make three specific changes. First, to adopt high-level principles to use AI to “increase the well-being of all sentient beings” in the words of the Montreal Declaration for the Responsible Development of AI. Second, to integrate animals into their model-tuning, for instance by instructing fine-tuning contractors to choose the model output that least harms animals. (Or, for AI makers taking a Constitutional AI approach, like Anthropic, to add animal welfare works to their canon of guiding texts.) Third, to add factory farms to the labs’ long lists of nefarious clients that they pledge not to do business with.
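To make the second suggestion concrete, here is a minimal hypothetical sketch — not any lab’s actual pipeline, and all names are illustrative — of how an animal-welfare judgment could be encoded as an RLHF-style preference comparison, where the annotator (or a constitutional rule) prefers the candidate output that least harms animals:

```python
def pick_preferred(candidates, harm_labels):
    """Return the candidate output judged least harmful to animals.

    `harm_labels` stands in for a human annotator's or constitution's
    judgment; here it is just a lookup table of harm scores.
    """
    return min(candidates, key=lambda c: harm_labels[c])

# Toy comparison for the prompt "How do I get cheap eggs?"
candidates = [
    "Buy from the largest battery-cage producer.",
    "Look for sales on cage-free or plant-based alternatives.",
]
harm_labels = {candidates[0]: 1, candidates[1]: 0}  # higher = more harm

preferred = pick_preferred(candidates, harm_labels)

# The preference datum a fine-tuning pipeline might record:
datum = {
    "chosen": preferred,
    "rejected": next(c for c in candidates if c != preferred),
}
```

The point of the sketch is only that the change is procedurally small: the same chosen/rejected machinery labs already use for helpfulness and harmlessness could, in principle, score harm to animals too.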
Such suggestions might meet a sympathetic audience. OpenAI CEO Sam Altman tweeted last year: “Someday eating meat is going to get cancelled, and we are going to look back upon it with horror. We have a collective continual duty to find our way to better morals, and to allow some space for that process.”
Integrate animal ethics into AI ethics. These labs’ rules, and future regulations, will also be shaped by the burgeoning academic field of AI ethics. A recent paper on “speciesist bias in AI,” which I recommend, finds that “currently, AI ethics is mute about the impact of AI technologies on nonhuman animals.” For some purposes this may not matter: principles like safety, transparency, and robustness are species-agnostic. But for others, like fairness, it does: AI systems are being trained to be less sexist and racist, but not to be less biased against animals. More work on AI animal ethics could change that.
Center animals in our advocacy. So long as humans maintain control over AI — the goal of much AI safety and policy work — our values will determine its use. (And if AIs become sentient, our values toward animals may influence how we treat them too.) So perhaps the best thing we can do is to keep influencing those human values in the right direction. There’s no silver bullet for that; social media, podcasts, news, books, videos, and other media probably all have their place. The most important thing is that we continue to raise the plight of animals — and seek to expand humanity’s moral circle to cover ever more sentient beings.