Can AI feel distress? Inside a new framework to assess sentience
The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Jonathan Birch. Oxford Univ. Press (2024).
Can artificial intelligence (AI) feel distress? Do lobsters suffer in a pot as it comes to a boil? Can a 12-week-old fetus feel pain? Ignore these questions and we risk sanctioning a quiet, slow-moving catastrophe. Answer them in the affirmative too hastily, and people’s freedoms will shrink needlessly. What should we do?
Philosopher Jonathan Birch at the London School of Economics and Political Science might have an answer. In The Edge of Sentience, he develops a framework for protecting entities that might possess sentience — that is, a capacity for feeling good or bad. Moral philosophers and religions might disagree on why sentience matters, or how much it does. But in Birch’s determinedly pluralistic account, all perspectives converge on a duty to avoid gratuitous suffering. Most obviously, this duty is owed to fellow human beings. But there is no reason to think that it ought not to apply to other beings, provided that we can establish their sentience — be they farm animals, collections of cells, insects or robots.
The problem is how to establish whether something is sentient. The philosophical concept of sentience is riven with basic disagreements. So too is the science. Interpretation of experimental evidence varies and there is a lack of sustained investigation of sentient capacities for many beings, including juvenile animals and AI. Then, there is the problem of measurement. With mammals, patterns of behaviour and brain activity can provide a trace of bad feeling. But what is the sentience test for gastropods, which have different minds and repertoires of behaviour? What about AI systems, which don’t have brains or physical manifestations of feeling?
Confronted with this writhing tangle of uncertainty, the temptation is to crawl under a blanket and hope that the problems blow over. Birch is anti-blanket. He advocates a proactive precautionary approach that triggers careful and proportionate precautions at the first sign of a being’s sentience. Birch’s framework consists of two processes.
The sentience test
The first involves experts determining an entity’s prospects of being sentient. Demanding full consensus would be unfair: it could condemn beings to prolonged suffering while they remain orphaned by scientific ignorance or controversy. Instead, Birch proposes that “scientific meta-consensus” should trigger protections. By this, he means full agreement, even among doubters, that sentience is at least a credible possibility, on the basis of evidence and a coherent line of theory. When meta-consensus is lacking, beings might be designated as priorities for investigation or dismissed as non-sentient.
Candidates for sentience would then advance to the second process, in which inclusive, informed citizen panels would devise protective policies. These should be proportionate to the risks of an entity’s sentience and account for different values and trade-offs. For example, imposing a moratorium in response to the potential sentience of a large language model (LLM) system might have huge opportunity costs for society. The citizen panels he advocates would revise their recommendations as evidence accumulates.
Next, Birch turns to three domains in which controversies challenge definitions of sentience. The first is the human brain: people with disorders of consciousness, fetuses, embryos and neural organoids (synthetic models of brain systems). The second is non-human animals, including fish, molluscs, insects, worms and spiders. The third domain is AI, which includes LLMs.
Each section presents challenges that are unresolved and shot through with philosophical and scientific controversy. For example, how can precautions be devised for neural organoids, which show no outward behaviours? Here, Birch falls back on anatomical correlates of sentience, such as the presence of a functioning brain stem, and the presence of sleep–wake cycles. In the chapters on animals, we confront the dizzying number of species that could be sentient, the fact that so few have been studied and the question of how to extrapolate from them.
AI presents the challenge of devising tests for sentience that algorithms, or those designing them, can learn about and then ‘game’. An LLM generates text about how it ‘feels’ not because it actually feels that way, but because the algorithm is rewarded for mimicking sentience. Here, Birch warns against using behavioural markers to determine sentience and instead advocates a search for “deep computational markers” of sentience.
Birch saturates his book with humility and vexation, and this earns The Edge of Sentience licence to leave many questions unresolved. As the book draws to a close, several linger. The first concerns scope. Birch has cast a wide net, but why not wider? In 1995, then US President Bill Clinton described the United States as being “in a funk”. If countries can be said to have moods, can other collective entities — swarms of bees, corporations, nations — be similarly described, as if they possess sentience of a sort?
Protecting sentience
Another open question concerns the criteria for proportionate precautions. The citizen panels Birch proposes would be asked to make trade-offs between all sentient beings — including present and future ones — and sentience candidates. When it comes to determinations of proportionality, Birch is focused on process, not substance. But what makes a policy proportionate? Let’s leave aside the fact that humans do an abysmal job trading off our interests with those of known sentient beings, such as animals in factory farms. Do certain forms of sentience — say, the capacity for feeling bad but not good, or the intensity of that feeling — weigh more heavily than others? Does sentience count more when it is hitched to other attributes, such as intelligence?
Regarding the last question, Birch is careful to distinguish sentience from intelligence. In his account, the former is the wellspring of duties, not the latter. But might beings that are sentient and intelligent exert stronger demands for precautions than beings that are sentient but unintelligent? Birch is reluctant to play “philosopher as sage”, but good philosophy can help the public to structure discussions of proportionality and apply tests. This problem awaits further instalments.
Nonetheless, Edge of Sentience is a masterclass in public-facing philosophy. At each step, Birch is lucid and perfectly calibrated in the strength of his assertions. His analysis is thoughtful and circumspect, and always poised for revision. He elevates his readers. His sourcing is generous and wide-ranging. The book also takes pains to set itself up as a manual for policy, with each chapter providing a summary. Birch works hard and, in my opinion, succeeds in writing a highly topical book of deep philosophy. Any thinking person can profit from it, provided that they have a stomach for uncertainty.