Imagine a patient in a vegetative state who can actually feel everything happening to them but can't communicate it. Picture an octopus being boiled alive, experiencing every second of agony. Consider an AI system that might be developing genuine feelings while we treat it as just another tool. These are the ethical questions that keep Jonathan Birch up at night.
As a philosopher and ethicist at the London School of Economics, Birch has spent years grappling with one of science's most perplexing questions: how do we know if another being is conscious and capable of suffering? His book, The Edge of Sentience, argues that we've been asking the wrong question all along. Instead of demanding absolute proof of consciousness — which may be impossible to obtain — we should focus on identifying "sentience candidates" and taking practical steps to protect them from harm.
This isn't just academic theory. Birch's work has already influenced real-world policy — he led the team whose research convinced the UK government to legally recognize lobsters and octopuses as sentient beings. Now he's turning his attention to an even broader range of cases, from human patients with brain injuries to the possibility of conscious AI.
Here are four key insights from the book:
"Assume Sentient" When Lives Are at Stake
“A patient [with a prolonged disorder of consciousness] should not be assumed incapable of experience when an important clinical decision is made. All clinical decisions should consider the patient's best interests as comprehensively as possible, working on the precautionary assumption that there is a realistic possibility of valenced experience and a continuing interest in avoiding suffering and in achieving a state of well-being, but without taking this assumption to have implications regarding prognosis.”
Look Beyond Brain Size and Intelligence
“Sentience is neither intelligence nor brain size. We should be aware of the possibility of decouplings between intelligence, brain size, and sentience in the animal kingdom. Precautions to safeguard animal welfare should be driven by markers of sentience, not by markers of intelligence or by brain size.”
On the Hidden Nature of Experience
“At least in principle, there can be phenomenal consciousness without valence: experiences that feel like something but feel neither bad nor good. It is not clear that humans can have such experiences (our overall conscious state arguably always contains an element of mood). But we can conceive of a being that has a subjective point of view on the world in which non-valenced states feature (it consciously experiences shapes, colours, sounds, odours, etc.) but in which everything is evaluatively neutral. Such a being would be technically non-sentient according to the definition we have been using, though it would be sentient in a broader sense. Would such a being have the same moral standing as a being with valenced experiences?”
On Future AI Risk
“As these models get larger and larger, we have no sense of the upper limit on the sophistication of the algorithms they could implicitly learn... The point at which this judgement shifts from correct to dangerously incorrect will be very hard for us to see. There is a real risk that we will continue to regard these systems as our tools and playthings long after they become sentient.”