Philosophers and activists argue that fish and shrimp experience enormous amounts of agonizing pain, but the science behind their claims is far from convincing.
I have a bunch of things to say about the subject, but let me note just a few:
1) You should be highly uncertain about the subject. When lots of smart people disagree on a subject that you're not an expert on, even if you've done a decent amount of reading about it, you should basically never be more than 80% confident, especially if your view is the minority view. But the case for shrimp welfare goes through even if you're 95% sure shrimp aren't conscious, just because of the sheer scale of the carnage and the cost-effectiveness of the interventions.
2) I think we're largely in the dark about what ingredients are needed for consciousness. If this is right, then Key's speculation by analogy about the ingredients needed not being present in fish is highly improbable. Behavior is all we have to go off of. And shrimp behave in lots of ways that we'd expect them to if they felt pain (see my piece).
3) The sorts of things slime molds do are very rudimentary compared to what shrimp do--relevantly like a computer or mouse trap with many settings rather than a conscious being.
4) I do think you misrepresented me a bit. I'm not super confident in shrimp pain--I'd put it in the low 60s.
5) I couldn't find a source for the claim that shrimp eat their own body parts. I couldn't even find people using the word autophagy to mean eating one's own body parts https://en.wikipedia.org/wiki/Autophagy
6) The Key view implies octopi aren't conscious, which I take to be a decisive counterexample.
To 1), I'll restate what I said to Glenn above. The principal disagreements in this debate are not over complicated technical questions in neuroscience that you would need a PhD in the area to evaluate. They're fundamentally philosophical disagreements over questions like: Are observations of behavior or neurobiology the best way to determine whether an animal is conscious? And I believe a reasonably informed observer can pass judgment on these questions.
2) I fundamentally disagree with the idea that consciousness is still very mysterious and that we haven't (and possibly can't ever) make major progress in understanding consciousness. Ultimately, it's going to be explained by simpler biological processes and Key has provided an enormous body of empirical evidence supporting his theory that signal amplification (for example) is a critical process supporting consciousness. (His ideas also make intuitive sense - your body has billions of sense receptors and for any one of those signals to be felt as pain, the signal must be amplified at some point in your brain.)
I don't see anyone even trying to dispute this - most of the disagreement is, as I said, appeals to mystery (maybe there's some unknown part of the fish brain that can do this, like the optic tectum or brain stem) or appeals to behavior (fish act like they're conscious so they must be conscious and we just haven't figured out how their consciousness works yet.)
3) I'm not sure the relevance of this. Yes, slime molds have very simple behaviors relative to shrimp. Shrimp have very simple behaviors relative to humans. None of this tells us where we can fairly draw the line with respect to consciousness.
4) If you can point me to a line you think I misrepresented you on, I'll correct it.
5) Check the linked article and search for "autophagy"; also, just google "autophagy shrimp" and studies will come up.
6) I don't see how that is the case. Do we know octopi lack the neurobiology to do the tasks Key describes? My understanding is that octopus neurology is considerably more complex than that of fish.
Beyond that, this seems to assume that octopi behavior is just so complex it can't be unconscious, but as other skeptics have pointed out, even highly complex human behavior can be undertaken unconsciously. We need more than that to determine whether something is conscious.
1) It seems wildly irrational to be sufficiently confident in one's views on an issue as difficult as figuring out which creatures are conscious that you don't take fish welfare at all seriously! This is a tough issue and the arguments don't strike me as anything like knock-down.
2) I didn't say that we haven't and can't ever make major progress. All I said was that an argument from "here is a tendentious account of which brain region gives rise to consciousness in humans, therefore something like it is needed for consciousness in animals," will not establish the kind of high confidence needed. Just seems nuts!
3) But they don't have any behaviors that seem to indicate specifically consciousness. For instance, they don't respond to anesthetic in the ways that shrimp and fish do (something I take to be strong evidence--if shrimp didn't feel pain, it's unlikely they'd respond to anesthetic in the same way a creature feeling pain would).
5) I couldn't find anything serious about this other than a random assertion from Key somewhere.
6) Key has an entire paper about this! https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2018.01027/full. Suffice it to say, just as I think if your theory implies dogs aren't conscious, that's a big problem, a view that implies that octopi aren't conscious is just nuts! This is obvious if you simply see octopi--they're like little babies!
Perhaps this is the wrong place to address a question to Bentham, but on point 1): can you point to an argument about cumulative suffering vs peak suffering? I see the argument about caution, but it seems we can be confident that shrimp pain is low, and I'm wondering whether, say, 5x the quantity of pain at 1/5 the intensity really adds up to 1? I would happily be hurt thousands of times at a barely perceptible intensity rather than once at a high one. Just curious to pull this apart!
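This tradeoff can be made concrete with a toy model (the weighting functions and all numbers below are purely illustrative, not anyone's actual view):

```python
# Toy model: does 5x the quantity of pain at 1/5 the intensity "add up to 1"?
# The answer depends entirely on how badness scales with intensity.

def total_badness(events, weight):
    """Sum a per-event badness weight(intensity) over a list of pain events."""
    return sum(weight(i) for i in events)

linear = lambda i: i        # badness proportional to intensity
convex = lambda i: i ** 2   # intense pain counts disproportionately more

one_big = [1.0]             # one event at full intensity
many_small = [0.2] * 5      # five events at 1/5 intensity

print(total_badness(one_big, linear), total_badness(many_small, linear))
print(total_badness(one_big, convex), total_badness(many_small, convex))
```

Under a linear weighting, five pains at 1/5 intensity sum to exactly one full-intensity pain; under any convex weighting, the single intense pain dominates. The question in the comment above reduces to which weighting is right, which is itself contested.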
It's simply the case that consciousness is still at least somewhat mysterious, though. Even the easy problems, and the neural correlates, are the subject of significant disagreement amongst experts. And BB didn't claim that we "haven't made major progress", let alone that we can't ever, so that seems like a bit of a straw man and/or motte and bailey. The question isn't whether we have made progress, but whether we *know* that consciousness requires a particular neurobiological feature. And we simply do not know that, even if Key makes a convincing argument (I haven't read it in full yet); if only for outside view reasons, we cannot ever be too confident that we as laymen are correct and a majority of academic experts are incorrect (even if you do identify a plausible way academics could become misguided on this, in terms of social pressure etc.). No matter how convincing we find the argument, it's just not rational to be certain that we know better than most domain experts; we have to retain at least some level of scepticism.
Your post has caused me to lower my confidence in fish feeling pain, for sure, which I did think was basically settled. But it raises alarm, and prevents me from updating as much as I might otherwise, that you are expressing such clearly unjustified overconfidence about the causal mechanisms underlying consciousness. In much the same way that you rightly suggest we should lower our credence in response to advocates eg ignoring failures to replicate (and thank you for pointing this out- I am very disappointed to have been misled on that front), I can't help but adjust mine in your argument when you seem to be circumventing a key objection with apparently indefensible certainty about a pivotal crux of disagreement.
>> And BB didn't claim that we "haven't made major progress", let alone that we can't ever, so that seems like a bit of a straw man and/or motte and bailey. The question isn't whether we have made progress, but whether we *know* that consciousness requires a particular neurobiological feature.
What progress, though? Is it that consciousness is in some way related to brains? Even the Ancient Greeks were aware of that relation. Modern neuroscience should be able to surpass that, and Brian Key here is simply presenting the best working theory for what underlying neurological processes are necessary (not sufficient) for consciousness. He then simply evaluates animal species by whether they have the neurobiology to support those processes. I sympathize with his frustration, to be honest, because it seems like there is extreme reluctance to even grant that modern science has made *that* degree of progress in understanding what more basic processes underlie consciousness.
> Brian Key here is simply presenting the best working theory for what underlying neurological processes are necessary (not sufficient) for consciousness. He then simply evaluates animal species by whether they have the neurobiology to support those processes.
He's presenting a plausible theory, to my understanding. But it's one of many, and the progress that we have made nonetheless does not allow for a high degree of confidence on such questions- especially when there are other competing theories, with more support from relevant experts (outside view should always remain in our sight).
And given it largely conflicts with the behavioural evidence, which is overall more consistent with fish feeling pain (although again, you were convincing here that that evidence is weaker than commonly understood), you have to have a very high degree of confidence indeed that fish lack the neural correlates in order to conclude that fish probably don't feel pain. And I just think that overconfidence is clearly unjustified, given the uncertainty that basically every expert but Brian Key professes in this area (itself a signal to trust them over him in my view).
Re: (5), it appears that the term you (& they, in the linked article) are looking for is autosarcophagy or autocannibalism. Autophagy is a cellular process, and googling (including with Google Scholar) for me only produced articles involving cellular autophagy.
Interesting read. I’m going to look more into whether ending eyestalk ablation actually causes the industry to farm more shrimp and catch more fish, as that’s a massive risk I don’t want to end up supporting. (If that is true, it would probably be better to focus shrimp advocacy on reducing consumption…) But I’m not convinced of the main argument.
I had also come across the Key paper and my impression is that it’s a small minority view in the field. If I was more knowledgeable about neuroscience and philosophy of mind (and hence if all the research was a lot more comprehensible to me), I’d probably be able to make a judgment based more directly on the evidence… but otherwise I’m going to base my priors on what most people in the field are saying. That seems to be very strong in the direction of “fish experience things that can be better or worse for them” and also more in favor of shrimp feeling than not feeling.
If you can make a stronger case that this is because of cancel culture, then I’d update a lot in the direction of no pain. Would like to hear more about this.
You’d have to show me very strong evidence of that, however, since the number of fish and shrimp used for food is so huge that it would be worth it to “dump money into the ocean” (as I had put it) even if you assumed P(sentience) is, say, 0.01 or lower.
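The scale argument sketches out numerically like this (the shrimp figure is a rough, commonly cited estimate; everything else is a placeholder assumption):

```python
# Back-of-envelope expected-value sketch of the "even at P(sentience) = 0.01" claim.
# All numbers are illustrative assumptions, not measurements.

p_sentient = 0.01             # a skeptic's probability that shrimp can suffer
shrimp_per_year = 4.4e11      # ~440 billion farmed shrimp per year (rough estimate)
suffering_if_sentient = 1.0   # arbitrary suffering units per shrimp, if sentient

expected_suffering = p_sentient * shrimp_per_year * suffering_if_sentient
print(f"{expected_suffering:.1e} expected suffering-units per year")
```

Even at a 1% credence, the expected scale comes out in the billions of units per year, which is why the expected-value argument survives very low probabilities of sentience.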
If there's one article I'd recommend you read to get a sense of the "skeptical" view, it would be this one. It contains a lot of references to the risks of ending eyestalk ablation and other potential harms of the policies suggested to reduce pain. It also has a good description of some of the cancel culture aspect.
Otherwise, I understand your concern about deferring to consensus - I'm not an expert in neuroscience either. But I have been following the debate for some time, and I do believe there's serious social and political pressure to affirm fish/invertebrate pain and play to anthropomorphic sympathies in the public. And, as I mentioned in my piece, there are specific prominent examples not replicating, which should lower our confidence in arguments that are based on them, as well as communities that rapidly, uncritically embrace them.
But also, I'd say many of the disagreements with skeptics are not the result of complex technical questions about, say, Brian Key's work, but of fairly straightforward philosophical differences that are more fundamental. Whether to privilege observations of behavior or neurobiological plausibility is a basic disagreement between skeptics and realists about fish pain here. I just think the conceptual arguments for taking a neurobiological approach are far more convincing. This passage by Brian Key sums it up well:
> I do believe there's serious social and political pressure to affirm fish/invertebrate pain and play to anthropomorphic sympathies in the public
The last part strikes me as somewhat implausible; pressure to play to public sympathy? For fish?! The enormous majority of people demonstrably could not care less about fish. If anything, I would expect there to be pressure in the other direction, because the enormous majority of people strongly want to believe fish suffering doesn't matter.
That's not to say that you're wrong, mind you; it's just surprising enough that I think it requires fuller justification.
Your observation is why this pressure exists. I don’t read them as saying there’s a conspiracy forcing scientists to lie about fish pain - rather, it’s just notable that believing in fish pain is a requirement for advocating certain forms of animal welfare, so you can get very invested in it as a charged topic.
> I’m going to look more into whether ending eyestalk ablation actually causes the industry to farm more shrimp and catch more fish, as that’s a massive risk I don’t want to end up supporting.
Note that only a very small minority of shrimp are breeders (something like 1%, if I recall correctly), since shrimp lay a lot of eggs, so this can't significantly impact the number of farmed shrimp.
Great article. This is the problem IMO with a lot of overconfident analytic philosophers getting involved in tricky empirical subject matters and thinking they can use the blunt instruments of their own intuitions and utilitarianism to figure out... everything.
To be clear, the position of this “overconfident analytic philosopher” is that it’s not obvious either way, but I weakly lean, based on the consensus of researchers in the field, towards thinking they probably feel pain. Maybe 70% odds for fish and 60% odds for shrimp, something around there. It is the other side who say that this open scientific debate is so settled that we can safely neglect the interests of fish for practical purposes, because we’re near certain that they don’t feel pain.
I think any credences somewhere near .5 are reasonable, but people who claim that this is sufficiently settled to be worth ignoring for practical purposes—over the protestations of various neuroscientists like Anil Seth—are being irrational.
If you’re not sure if creatures are conscious, then if they’re being tortured by the trillions, that is very bad!
Totally unrelated: I’ve been watching some Jay Dyer videos—how do people take him seriously? He’s such a moron!
The issue isn't even that "it’s not obvious either way, but I weakly lean, based on the consensus of researchers in the field" -- it's that it's not even clear that the questions as framed, about which you think reasonable people will be mixed/confused, are legitimate (or the right) questions to be asking.
So we have questionable framing, which sets up a series of questionable answers to those questions. That reasoning is then used by you to argue for a bunch of conclusions on a bunch of topics, where you don't respond to critics by saying "yeah, everything here is highly speculative and I just wrote this for fun," but instead make out that people who disagree with you are perpetrators of war crimes.
What framing is questionable? The thing that I think is decently likely is that fish and shrimp feel pain--they have unpleasant experiences that I wouldn't want to have!
WRT Dyer: because people in the US are scared and confused -- thanks to a bunch of populist politics (which your friend Hanania has been pushing) -- and are looking for strong political authoritarians to guide them.
Dyer turns everything into martial combat and power struggle (overtalking, destruction talk, aggression). People get indoctrinated into these views. They begin to see the enemy as demon possessed, the US as controlled by a secret cabal of people with surnames like yours where the nation needs to be purified (the swamp drained) in order for a rebirth to a mythic past. Dyer and his fans are fighting a spiritual battle for the future of The West. Within that, the norms of rational discourse and philosophy we respect are completely rejected. Everything is just about displays of dominance. They only respect power at that point.
Thanks for putting this together: very clear and helpful.
I think it is pretty clear that while in the broader public there is a bias against believing invertebrates feel pain, there’s strong self-selection at work among pain researchers and animal rights activists that creates a presumption in favour of animal suffering. This is part of why I find Glenn and BB’s appeal to the scientific consensus on this topic quite unpersuasive. It is not actually that hard for educated non-experts to come to reliable conclusions about the state of the scientific evidence in many scientific fields. This, I think, is one such field.
Thanks! It's a fairly niche debate, but I've been quite consumed with it recently.
As far as I can tell, the pro-pain side tends to rely heavily on two claims: that, as you say, "most" researchers disagree with the skeptics, and that fish/invertebrate behavior provides sufficiently convincing evidence that they feel pain, regardless of whether we understand how that pain could be produced.
I ultimately don't know if the first claim is actually true or not - there are certainly a lot of replies to Key's paper, but it's unclear to me if we can say a majority of researchers really disagree with him or not. Regardless, the debate appears emotionally and politically charged enough that I'm skeptical we can rely on "the consensus" to be a reliable indicator of what we should believe. (I feel similarly about things like the efficacy of child gender transition - I'm not sure consensus is necessarily a reliable indicator of what an educated non-expert person "should" think about the matter.) So I don't find that convincing in this case.
The second claim reflects a more fundamental philosophical difference I think. As another skeptic pointed out, it's possible to program robots that fulfill many of the behavioral indicators of sentience. Since robots are (presumably) not sentient, it seems to me that behavior is not a reliable indicator for consciousness. But so many people I encounter refuse to accept this. They see certain behaviors in animals as strong evidence of sentience, regardless of the neurological plausibility, and I think that's a major sticking point that tends to frustrate the skeptics.
Thank you for writing this, it’s fascinating! I’ve lost count of how often I’ve read that fish as vertebrates don’t feel pain, and I hadn’t seen claims about this rooted in structural neurology (which I tend to think of as modestly less reliable evidence than many other physiological metrics). Given how functional pain is for mammals, I’m now interested in theories about pain as a contextual rather than universally necessary capability for vertebrates. I’ll look into it myself, but I’m interested to know if you’re already aware of writing on it.
Interesting and hopefully true! Do you feel comfortable expressing your degree of credence in this conclusion in terms of a percentage? How would you respond to someone who thinks shrimp are only 1% likely to be sentient but still thinks shrimp stunners are a good EV bet because of the large numbers of shrimp affected?
I think before I could offer a percentage credence, I would have to first have a working theory for how fish etc are supposed to feel pain, as opposed to mysterianism about how certain parts of the fish brain may, in some unknown way, support fish pain. Until someone offers that, I just don't see how it's productive to talk about credences in the idea itself, especially if those credences are supposed to guide our actions in the world.
I really don't think it's a defensible position that you can't offer a probability estimate until you have a working theory of how fish are "supposed" to feel pain. Whatever the physical mechanism, there is obviously some evidence that they do, even if ultimately you think it is weak. Your credence certainly shouldn't be 100%, as you almost seem to imply! So what is it roughly? How confident are you that they don't? Because that's quite important in this context, where the scale of suffering, if it exists, is so vast.
Well, I'll ask you: what is your credence in the idea that the grass in your backyard feels pain? Some scientists have argued that plants can feel pain based on plant "behavior", such as reactions to stress.
Seems like a bit of a deflection, and if you're implying any equivalence here, I think that's very misguided. There's vastly more behavioural evidence of subjective experience of suffering in fish than grass, and vastly less mechanistic reason to doubt it: grass literally does not have a brain or nervous system, things that seem almost certainly required for subjective experience of pain, whereas fish merely lack certain particular structures that might plausibly be correlates. (As I say, I did find this a fairly convincing argument, pending further investigation of e.g. Key's work, and it has reduced my credence in fish pain significantly, even if it's still well north of 50%.) These are *very* different propositions, even if both are primarily justified in terms of behavioural observations.
But in order to avoid hypocrisy, I'll bite: I'd give it somewhere in the region of 2%. Mechanistically, it seems almost impossible, but ultimately consciousness is still mysterious, and it's totally possible that we are completely wrong about how it arises. I have to have some scepticism of my conclusions in an area of such uncertainty, so the minimum credence I can really give to any living organism experiencing pain is probably about 2%.
Yeah, I find this persuasive given my previous training in statistics but it's frustrating to spell it out, isn't it?
In general, these "put a probability on it" arguments have weaknesses well-described in Wolfgang Spohn's book The Laws of Belief. For anyone looking at this thread and wanting more fully spelled out versions of "why not put a probability on it," I highly recommend it.
One counterargument is that such probability point estimates should also include prediction intervals. One might then argue that, given the state and quality of evidence in the field (& the impossibility of knowing whether any particular *other* thing has subjective experiences, by definition), such prediction intervals should be wide enough to include the full interval [0, 1].
If one believes that the prediction distribution is uniform or unknowable, then providing a point estimate is at best useless and at worst misleading.
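The worry about bare point estimates can be illustrated schematically: two belief distributions over P(sentience) with the same mean convey very different states of knowledge, so reporting the mean alone hides exactly what the interval would carry (the two distributions below are arbitrary choices for illustration):

```python
# Two belief distributions over P(sentience) sharing the same point estimate (mean ~0.5):
# one tightly concentrated, one maximally uncertain. Schematic illustration only.
import random

random.seed(0)
n = 100_000

tight = [random.gauss(0.5, 0.02) for _ in range(n)]  # confident belief near 0.5
flat = [random.uniform(0.0, 1.0) for _ in range(n)]  # "unknowable": uniform over [0, 1]

def mean(xs):
    return sum(xs) / len(xs)

def interval_width(xs, lo=0.025, hi=0.975):
    """Width of the central 95% interval of a sample."""
    s = sorted(xs)
    return s[int(hi * len(s))] - s[int(lo * len(s))]

print(mean(tight), mean(flat))                      # both ~0.5: identical point estimates
print(interval_width(tight), interval_width(flat))  # ~0.08 vs ~0.95: very different uncertainty
```

A decision-maker told only "0.5" cannot distinguish the two cases, which is the sense in which the point estimate alone is misleading.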
As a note (& as I mentioned elsewhere in the comments), “autophagy” seems to be misused in the article you cite (it’s a cellular waste removal process), and instead “autosarcophagy”/“autocannibalism” seem to be the terms the authors mean. However, autosarcophagy is apparently quite rare, and I can’t find any evidence for such behavior in shrimp. Would be great if you have other sources documenting the behavior!
Beyond that, I would note that the arguments from nociception are potentially more robust, given the uncertainty of both what consciousness is (many competing theories, including in neuroscience, and there’s certainly no consensus that cortical-like structures are required, though some hypothesize recurrent dynamics that some simpler brain structures don’t seem to support), and how it might be detected. Moreover, consciousness is necessary but not sufficient for pain, as demonstrated in persons with congenital insensitivity to pain (CIP): https://en.m.wikipedia.org/wiki/Congenital_insensitivity_to_pain.
So, nociception seems to be a better target to rule out if possible. A counterargument that animals without nociceptors or enough relevant nerve fibers could still feel pain could certainly be mounted. In particular, it could be that there might be different receptors and fibers involved in other orders of animals. However, given the conservation of neural receptors, that would likely be a weak argument (strong of course in the case of alien life).
Ideally, I think a better understanding of the neural correlates of pain sensation (especially its intensity and duration) would lead to better diagnostic criteria: eg, if, in humans, we can consistently predict the experience of pain from neural recordings, and observe no similar *general* (ie, not specific to a particular brain structure) correlate in some target animal, then we can more confidently infer a lack of pain sensation in that animal.
That would require a lot more advances in neural recording technology, however (fMRI is notoriously noisy, finicky, and inconsistent). Until then, *careful, well-thought-out* behavioral assays are seemingly the best proxies. Such assays would have to focus on expected behaviors for lasting pain, unfortunately, since a null, robot-like model would predict avoidance and escape of harmful events. However, such robot-like models generally wouldn’t predict the sorts of behaviors that humans and other animals like dogs exhibit with persistent pain. Certainly, there are ethical considerations to be weighed here, but, given the risks and benefits outlined well in your article, such experiments are probably justifiable. There are certainly limitations to inferences from behavior, but in many cases at least (eg, those where nociception can’t be ruled out), well-designed behavioral experiments might still be the best we have.
Great comment. I want to reply and then offer some meta-commentary on why I find Brian Key's approach so compelling.
First, I would say that nociception is a valuable thing to note and that, like James Rose states here, observations about the lack of nociceptors among fishes can make the case against fish pain stronger than a case that relies purely on brain structure.
>> Beyond that, I would note that the arguments from nociception are potentially more robust, given the uncertainty of both what consciousness is (many competing theories, including in neuroscience, and there’s certainly no consensus that cortical-like structures are required, though some hypothesize recurrent dynamics that some simpler brain structures don’t seem to support), and how it might be detected.
I have some frustration with the idea that science has made essentially no reliable progress in understanding consciousness or its origins, and that we therefore need to treat the brain as a black box and look to other things to determine whether an organism has pain experience. Plato knew that consciousness was somehow related to the brain; I think we, or at least scientists who work in the field, can safely say we have moved beyond that level of understanding.
Yes, there are those like the Damasios who claim that the brain stem or some other part of the brain supports consciousness. But Key has evaluated their arguments and written about why they aren't compelling. I appreciate that he is not afraid (for lack of a better term) to take cutting edge research and theories about how and what the brain is doing to produce pain experience and apply them to answer fundamental questions about the experience of other organisms. Some might argue that this is a flaw of mine, but I find the endless treatment of consciousness as an unknowable mystery to be tiresome and something that might just be impeding our progress and understanding of it. If people really disagree with Key, they should focus their critiques on alternative physicalist explanations for consciousness rather than gesturing at the idea that other parts of the brain might be able to produce it.
I share your frustration, though have a different perspective on it. I’m actually a computational neuroscientist, and have a background in consciousness (C) studies (well, specifically, philosophy of mind). I left philosophy because it tends to go in circles. While David Chalmers laid out an excellent case for bringing C into the fold of science, he himself would argue we have yet to achieve that.
I encourage you to look up the recent adversarial competition wherein 2 of the main theories in neuroscience for C were tested, with both found wanting (Chalmers won that bet). Interestingly, 1 of these 2 (IIT, Integrated Information Theory) is self-admittedly a form of panpsychism, despite originating from a neuroscientist, Tononi.
While I could delve deeper, I think the fact that one of the supposedly most promising scientific theories of C today implies that non-brain entities may themselves have C supports, well enough on its own, the argument that C may simply be beyond current objective science, or at least neuroscience.
When C is studied more carefully in a neuroscience context, the practitioners are careful to insist they are studying the *correlates* of C, rather than C itself.
All of this is to say that, while I find C unquestionably fascinating, the science of C itself is so far from any framework of agreement that any specific theory can very reasonably be taken with a large shaker of salt.
Hence the desire to find non-C criteria with respect to the current discussion.
In summary, many (unclear what proportion) neuroscientists, certainly myself included, would argue that C is simply not a concept that necessarily fits within the present framework of objective science, whereas the correlates of C clearly do. They (we) of course may be wrong, but it means it’s difficult to use C in any consensus manner to make judgements about other mental facets, like pain.
Thanks for this reply. Good catch about the autophagy/autosarcophagy distinction; I haven't looked into it in shrimp beyond what I saw in the article linked. It may be that the information in the article is wrong, so I'll update accordingly. I'm out hiking all weekend so I'll have a fuller response to yours and other comments later. Take care!
Just as a quick update: in the Wikipedia article on autosarcophagy (https://en.m.wikipedia.org/wiki/Autocannibalism#), they do mention that sometimes autophagy is used instead (they cite an article that interchangeably uses self-cannibalism and autophagy). Given the predominance in biology of autophagy as the cellular process, and the availability of other clear terms, I personally think this is poor practice.
“A similar term that is applied differently is autophagy, which specifically denotes the normal process of self-degradation by cells. While typically used only for this specific process, autophagy has nonetheless occasionally been used as a general synonym for self-cannibalism.[6]”
Enjoy your hike, and look forward to any further response you might provide!
Prominent people claiming to represent the invertebrates in this debate have been claiming it would be better if they did not live, and in fact better if they were destroyed. That has not been an argument for "first do no harm," it's much more like the argument by early 20th C eugenicists that it would be kinder to "lesser races" to prevent them from being born through sterilization.
I'm with you on erring on the side of caution and kindness, too, and I wish that's what these people were arguing for rather than a mass insecticide.
Yes!! It strikes me that this is brought up less than it should be because many of the opponents of this stuff are coming from the typical POV that animals are just not as important as people. But if you take the abuse of utilitarianism here seriously, it leads to horrific conclusions.
Does it matter if shrimp feel pain? No. It does not.
Should we apologize to each shrimp before we unceremoniously execute it? No, we should not.
Should we anesthetize shrimp so as to make their brutal killing more humane? No. We should not.
Should we pay reparations to the shrimp dynasty, should we be able to organize them into one, for the incalculable number of shrimp that have been consumed by humans since humans discovered shrimp could be eaten?
ffs
Shrimp are delicious and nutritious. I am very glad and grateful they are part of the food chain.
This reminds me of the dog mirror test. Gosh, they failed. Despite all experiential evidence to the contrary, we should assume they have no sense of self. Oh well, they're animals so they're stupid. Waaaait, they're smell dominant. Turns out they recognize their own smell. Whoops.
The amount of motivated reasoning and arrogance here is depressing. We understand very very little about even our own consciousness. We almost definitionally can't understand consciousness that is meaningfully different than our own. If you think logically, pain is an incredibly useful signal, both for education and healing. Our bias should be strongly towards the assumption that animals feel pain.
So... we have ~zero understanding except that pain is a useful signal, and we know that we don't understand all the ways it could be signaled and experienced. Of course we should ignore that and assume we do no harm, because it's convenient and lets us rationalize never considering something troubling.
I have a bunch of things to say about the subject, but let me note just a few:
1) You should be highly uncertain about the subject. When lots of smart people disagree on a subject that you're not an expert on, even if you've done a decent amount of reading about it, you should basically never be more than 80% confident, especially if your view is the minority view. But the case for shrimp welfare goes through even if you're 95% sure shrimp aren't conscious, just because of the insane degree of the carnage and effectiveness.
2) I think we're largely in the dark about what ingredients are needed for consciousness. If this is right, then Key's speculation by analogy about the ingredients needed not being present in fish is highly improbable. Behavior is all we have to go off of. And shrimp behave in lots of ways that we'd expect them to if they felt pain (see my piece).
3) The sorts of things slime molds do are very rudimentary compared to what shrimp do--relevantly like a computer or mouse trap with many settings rather than a conscious being.
4) I do think you misrepresented me a bit. I'm not super confident in shrimp pain--I'd put it in the low 60s.
5) I couldn't find a source for the claim that shrimp eat their own body parts. I couldn't even find people using the word autophagy to mean eating one's own body parts https://en.wikipedia.org/wiki/Autophagy
6) The Key view implies octopi aren't conscious, which I take to be a decisive counterexample.
To 1), I'll restate what I said to Glenn above. The principal disagreements in this debate are not over complicated technical questions in neuroscience that you would need a PhD in the area to evaluate. They're fundamentally philosophical disagreements over questions like: Are observations of behavior or neurobiology the best way to determine whether an animal is conscious? And I believe a reasonably informed observer can pass judgment on these questions.
2) I fundamentally disagree with the idea that consciousness is still very mysterious and that we haven't (and possibly can't ever) make major progress in understanding consciousness. Ultimately, it's going to be explained by simpler biological processes and Key has provided an enormous body of empirical evidence supporting his theory that signal amplification (for example) is a critical process supporting consciousness. (His ideas also make intuitive sense - your body has billions of sense receptors and for any one of those signals to be felt as pain, the signal must be amplified at some point in your brain.)
I don't see anyone even trying to dispute this - most of the disagreement is, as I said, appeals to mystery (maybe there's some unknown part of the fish brain that can do this, like the optic tectum or brain stem) or appeals to behavior (fish act like they're conscious so they must be conscious and we just haven't figured out how their consciousness works yet.)
3) I'm not sure the relevance of this. Yes, slime molds have very simple behaviors relative to shrimp. Shrimp have very simple behaviors relative to humans. None of this tells us where we can fairly draw the line with respect to consciousness.
4) if you can point me to a line you think I misrepresented you on, I'll correct it.
5) See https://www.tandfonline.com/doi/full/10.1080/23308249.2023.2257802#d1e1741 and search for "autophagy"; also just google "autophagy shrimp" and studies will come up.
6) I don't see how that is the case. Do we know octopi lack the neurobiology to do the tasks Key describes? My understanding is that octopus neurology is considerably more complex than that of fish.
Beyond that, this seems to assume that octopi behavior is just so complex it can't be unconscious, but as other skeptics have pointed out, even highly complex human behavior can be undertaken unconsciously. We need more than that to determine whether something is conscious.
Sorry for the late reply!
1) It seems wildly irrational to be sufficiently confident in one's views on an issue as difficult as figuring out which creatures are conscious that you don't take fish welfare at all seriously! This is a tough issue and the arguments don't strike me as anything like knock-down.
2) I didn't say that we haven't and can't ever make major progress. All I said was that an argument from "here is a tendentious account of which brain region gives rise to consciousness in humans, therefore something like it is needed for consciousness in animals," will not establish the kind of high confidence needed. Just seems nuts!
3) But they don't have any behaviors that seem to indicate specifically consciousness. For instance, they don't respond to anesthetic in the ways that shrimp and fish do (something I take to be strong evidence--if shrimp didn't feel pain, it's unlikely they'd respond to anesthetic in the same way a creature feeling pain would).
5) I couldn't find anything serious about this other than a random assertion from Key somewhere.
6) Key has an entire paper about this! https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2018.01027/full. Suffice it to say, just as I think if your theory implies dogs aren't conscious, that's a big problem, a view that implies that octopi aren't conscious is just nuts! This is obvious if you simply see octopi--they're like little babies!
Out of town for the long weekend but will respond soon!
Perhaps this is the wrong place to address a question to Bentham, but on point 1): can you point to an argument about cumulative suffering vs peak suffering? I see the argument about caution, but it seems we can be confident that shrimp pain is low, and I'm wondering if, say, 5x the quantity of pain at 1/5 the intensity really adds up to the same total. I would happily be hurt thousands of times at a barely perceptible intensity rather than once at a high one. Just curious to pull this apart!
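The aggregation question above can be stated as a toy calculation: under linear aggregation, five pains at 1/5 intensity do sum to the same total as one pain at full intensity, but under a convex weighting (where intense pain counts disproportionately more) they sum to much less. A minimal sketch, where the weighting exponent is purely an illustrative assumption, not a claim about how pain actually aggregates:

```python
# Toy sketch of pain aggregation: linear vs. convex weighting.
# The exponent is an illustrative assumption, not a claim about real pain.

def total_pain(intensities, exponent=1.0):
    # exponent = 1.0 -> linear aggregation (five 1/5-intensity pains sum to 1)
    # exponent > 1.0 -> intense pain counts disproportionately more
    return sum(i ** exponent for i in intensities)

many_mild = [0.2] * 5    # five episodes at 1/5 intensity
one_severe = [1.0]       # one episode at full intensity

print(total_pain(many_mild))               # ~1.0 under linear aggregation
print(total_pain(one_severe))              # 1.0
print(total_pain(many_mild, exponent=2))   # ~0.2 under a convex weighting
```

Whether the linear or convex picture is right is exactly the philosophical question being asked; the sketch just makes clear that the answer to "does 5x quantity at 1/5 intensity add up to 1?" depends entirely on the aggregation rule assumed.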
It's simply the case that consciousness is still at least somewhat mysterious, though. Even the easy problems, and the neural correlates, are the subject of significant disagreement amongst experts. And BB didn't claim that we "haven't made major progress", let alone that we can't ever, so that seems like a bit of a straw man and/or motte and bailey. The question isn't whether we have made progress, but whether we *know* that consciousness requires a particular neurobiological feature. And we simply do not know that, even if Key makes a convincing argument (I haven't read it in full yet); if only for outside view reasons, we cannot ever be too confident that we as laymen are correct and a majority of academic experts are incorrect (even if you do identify a plausible way academics could become misguided on this, in terms of social pressure etc.). No matter how convincing we find the argument, it's just not rational to be certain that we know better than most domain experts; we have to retain at least some level of scepticism.
Your post has caused me to lower my confidence in fish feeling pain, for sure, which I did think was basically settled. But it raises alarm, and prevents me from updating as much as I might otherwise, that you are expressing such clearly unjustified overconfidence about the causal mechanisms underlying consciousness. In much the same way that you rightly suggest we should lower our credence in response to advocates eg ignoring failures to replicate (and thank you for pointing this out- I am very disappointed to have been misled on that front), I can't help but adjust mine in your argument when you seem to be circumventing a key objection with apparently indefensible certainty about a pivotal crux of disagreement.
>> And BB didn't claim that we "haven't made major progress", let alone that we can't ever, so that seems like a bit of a straw man and/or motte and bailey. The question isn't whether we have made progress, but whether we *know* that consciousness requires a particular neurobiological feature.
What progress, though? Is it that consciousness is in some way related to brains? Even the Ancient Greeks were aware of that relation. Modern neuroscience should be able to surpass that, and Brian Key here is simply presenting the best working theory for what underlying neurological processes are necessary (not sufficient) for consciousness. He then simply evaluates animal species by whether they have the neurobiology to support those processes. I sympathize with his frustration, to be honest, because it seems like there is extreme reluctance to even grant that modern science has made *that* degree of progress in understanding what more basic processes underlie consciousness.
> Brian Key here is simply presenting the best working theory for what underlying neurological processes are necessary (not sufficient) for consciousness. He then simply evaluates animal species by whether they have the neurobiology to support those processes.
He's presenting a plausible theory, to my understanding. But it's one of many, and the progress that we have made nonetheless does not allow for a high degree of confidence on such questions- especially when there are other competing theories, with more support from relevant experts (outside view should always remain in our sight).
And given it largely conflicts with the behavioural evidence, which is overall more consistent with fish feeling pain (although again, you were convincing here that that evidence is weaker than commonly understood), you have to have a very high degree of confidence indeed that fish lack the neural correlates in order to conclude that fish probably don't feel pain. And I just think that overconfidence is clearly unjustified, given the uncertainty that basically every expert but Brian Key professes in this area (itself a signal to trust them over him in my view).
Re: (5), it appears that the term you (& they in the linked article) are looking for is autosarcophagy or autocannibalism. Autophagy is a cellular process, and googling (including with Google Scholar) for me only produced articles involving cellular autophagy
https://en.m.wikipedia.org/wiki/Autocannibalism
However, I still haven’t found any papers or evidence outside of the one you linked documenting autosarcophagy in shrimp
Interesting read. I’m going to look more into whether ending eyestalk ablation actually causes the industry to farm more shrimp and catch more fish, as that’s a massive risk I don’t want to end up supporting. (If that is true, it would probably be better to focus shrimp advocacy on reducing consumption…) But I’m not convinced of the main argument.
I had also come across the Key paper and my impression is that it’s a small minority view in the field. If I was more knowledgeable about neuroscience and philosophy of mind (and hence if all the research was a lot more comprehensible to me), I’d probably be able to make a judgment based more directly on the evidence… but otherwise I’m going to base my priors on what most people in the field are saying. That seems to be very strong in the direction of “fish experience things that can be better or worse for them” and also more in favor of shrimp feeling than not feeling.
If you can make a stronger case that this is because of cancel culture, then I’d update a lot in the direction of no pain. Would like to hear more about this.
You’d have to show me very strong evidence of that, however, since the number of fish and shrimp used for food is so huge that it would be worth it to “dump money into the ocean” (as I had put it) even if you assumed P(sentience) is, say, 0.01 or lower.
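The expected-value reasoning here can be made concrete with a rough sketch. All numbers below are illustrative assumptions chosen for the example, not figures asserted anywhere in this thread:

```python
# Rough expected-value sketch of the "low probability, huge scale" argument.
# All inputs are illustrative assumptions, not figures from this discussion.

def expected_suffering_averted(p_sentience, animals_affected, suffering_per_animal):
    """Expected units of suffering averted by an intervention."""
    return p_sentience * animals_affected * suffering_per_animal

# Suppose an intervention affects 1 billion shrimp per year, each averting
# 1 unit of suffering if shrimp turn out to be sentient.
ev_skeptic = expected_suffering_averted(0.01, 1_000_000_000, 1.0)
ev_believer = expected_suffering_averted(0.60, 1_000_000_000, 1.0)

print(ev_skeptic)    # large in expectation even at only 1% credence
print(ev_believer)
```

The point of the sketch is that the scale term dominates: even a skeptic's 1% credence leaves an enormous expected quantity at stake, which is why the disagreement over credences (60% vs 1%) changes the magnitude but not the sign of the argument.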
If there's one article I'd recommend you read to get a sense of the "skeptical" view, it would be this one. It contains a lot of references to risk of eyestalk ablation and other potential harms of the policies suggested to reduce pain. It also has a good description of some of the cancel culture aspect.
https://www.tandfonline.com/doi/full/10.1080/23308249.2023.2257802#
Otherwise, I understand your concern about deferring to consensus - I'm not an expert in neuroscience either. But I have been following the debate for some time, and I do believe there's serious social and political pressure to affirm fish/invertebrate pain and play to anthropomorphic sympathies in the public. And, as I mentioned in my piece, there are specific prominent examples not replicating, which should lower our confidence in arguments that are based on them, as well as communities that rapidly, uncritically embrace them.
But also, I'd say many of the disagreements with skeptics are not the result of complex technical questions about, say, Brian Key's work, but are about fairly straightforward philosophical differences that are more fundamental. Whether to privilege observations of behavior or neurobiological plausibility is a basic disagreement between skeptics and realists about fish pain here. I just think the conceptual arguments for taking a neurobiological approach are far more convincing. This piece by Brian Key sums it up well:
https://www.wellbeingintlstudiesrepository.org/animsent/vol1/iss3/44/
> I do believe there's serious social and political pressure to affirm fish/invertebrate pain and play to anthropomorphic sympathies in the public
The last part strikes me as somewhat implausible; pressure to play to public sympathy? For fish?! The enormous majority of people demonstrably could not care less about fish. If anything, I would expect there to be pressure in the other direction, because the enormous majority of people strongly want to believe fish suffering doesn't matter.
That's not to say that you're wrong, mind you; it's just surprising enough that I think it requires fuller justification.
Your observation is why this pressure exists. I don’t read them as saying there’s a conspiracy forcing scientists to lie about fish pain - rather, it’s just notable that believing in fish pain is a requirement for advocating certain forms of animal welfare, so you can get very invested in it as a charged topic.
> I’m going to look more into whether ending eyestalk ablation actually causes the industry to farm more shrimp and catch more fish, as that’s a massive risk I don’t want to end up supporting.
Note that a very small minority of shrimp are breeders (something like 1%, if I recall correctly), as shrimp produce a lot of eggs, so this can't significantly affect the number of farmed shrimp.
Also SWP here https://www.shrimpwelfareproject.org/why-stop-using-eyestalk-ablation proposes "closed-cycle breeding as a beneficial alternative to ESA" where farms wouldn't rely on wild breeding animals.
Great article. This is the problem IMO with a lot of overconfident analytic philosophers getting involved in tricky empirical subject matters and thinking they can use the blunt instruments of their own intuitions and utilitarianism to figure out... everything.
To be clear, the position of this “overconfident analytic philosopher” is that it’s not obvious either way, but I weakly lean, based on the consensus of researchers in the field, towards thinking they probably feel pain. Maybe 70% odds for fish and 60% odds for shrimp—something around there. It is the other people who say that we can be so confident on this question of open scientific debate that we can safely neglect the interests of fish for practical purposes, because we’re near certain that they don’t feel pain.
I think any credences somewhere near .5 are reasonable, but people who claim that this is sufficiently settled to be worth ignoring for practical purposes—over the protestations of various neuroscientists like Anil Seth—are being irrational.
If you’re not sure if creatures are conscious, then if they’re being tortured by the trillions, that is very bad!
Totally unrelated: I’ve been watching some Jay Dyer videos—how do people take him seriously? He’s such a moron!
The issue isn't even that "it’s not obvious either way, but I weakly lean, based on the consensus of researchers in the field" -- it's that it's not even clear the questions as framed, about which you think reasonable people will be mixed/confused, are legitimate (or the right) questions to be asking.
So we have questionable framing which sets up a series of answers which are questionable answers to those questions. But then that reasoning is used by you to argue for a bunch of conclusions in a bunch of topics where you do not respond to critics saying "yeah everything here is highly speculative and I just wrote this for fun", but make out that people who disagree with you are perpetrators of war crimes.
What framing is questionable? The thing that I think is decently likely is that fish and shrimp feel pain--they have unpleasant experiences that I wouldn't want to have!
WRT Dyer: because people in the US are scared and confused -- thanks to a bunch of populist politics (which your friend Hanania has been pushing) -- and are looking for strong political authoritarians to guide them.
Dyer turns everything into martial combat and power struggle (overtalking, destruction talk, aggression). People get indoctrinated into these views. They begin to see the enemy as demon possessed, the US as controlled by a secret cabal of people with surnames like yours where the nation needs to be purified (the swamp drained) in order for a rebirth to a mythic past. Dyer and his fans are fighting a spiritual battle for the future of The West. Within that, the norms of rational discourse and philosophy we respect are completely rejected. Everything is just about displays of dominance. They only respect power at that point.
Thanks for putting this together: very clear and helpful.
I think it is pretty clear that while in the broader public there is a bias against believing invertebrates feel pain, there’s strong self-selection at work among pain researchers and animal rights activists that creates presumption in favour of animal suffering. This is part of why I find Glenn and BB’s appeal to the scientific consensus on this topic quite unpersuasive. It is not actually that hard for educated non-experts to come to reliable conclusions about the state of the scientific evidence in many scientific fields. This, I think, is one such field.
Thanks! It's a fairly niche debate, but I've been quite consumed with it recently.
As far as I can tell, the pro-pain side tends to rely heavily on two claims: that, as you say, "most" researchers disagree with the skeptics, and that fish/invertebrate behavior provides sufficiently convincing evidence that they feel pain, regardless of whether we understand how that pain could be produced.
I ultimately don't know if the first claim is actually true or not - there are certainly a lot of replies to Key's paper, but it's unclear to me if we can say a majority of researchers really disagree with him or not. Regardless, the debate appears emotionally and politically charged enough that I'm skeptical we can rely on "the consensus" to be a reliable indicator of what we should believe. (I feel similarly about things like the efficacy of child gender transition - I'm not sure consensus is necessarily a reliable indicator of what an educated non-expert person "should" think about the matter.) So I don't find that convincing in this case.
The second claim reflects a more fundamental philosophical difference I think. As another skeptic pointed out, it's possible to program robots that fulfill many of the behavioral indicators of sentience. Since robots are (presumably) not sentient, it seems to me that behavior is not a reliable indicator for consciousness. But so many people I encounter refuse to accept this. They see certain behaviors in animals as strong evidence of sentience, regardless of the neurological plausibility, and I think that's a major sticking point that tends to frustrate the skeptics.
Thank you for writing this, it’s fascinating! I forget how often I’ve read that fish as vertebrates don’t feel pain, and I hadn’t seen claims about this rooted in structural neurology (which I tend to think of as modestly less reliable evidence than many other physiological metrics). Given how functional pain is for mammals, I’m now interested in theories about pain as a contextual rather than universally necessary capability for vertebrates. I’ll look into it myself, but I’m interested to know if you’re already aware of writing on it.
Great work!
Very strange to see people commenting about your degree of confidence as if anyone in this debate has been modeling humility thus far lmao
Interesting and hopefully true! Do you feel comfortable expressing your degree of credence in this conclusion in terms of a percentage? How would you respond to someone who thinks shrimp are only 1% likely to be sentient but still thinks shrimp stunners are a good EV bet because of the large numbers of shrimp affected?
Thank you!
I think before I could offer a percentage credence, I would have to first have a working theory for how fish etc are supposed to feel pain, as opposed to mysterianism about how certain parts of the fish brain may, in some unknown way, support fish pain. Until someone offers that, I just don't see how it's productive to talk about credences in the idea itself, especially if those credences are supposed to guide our actions in the world.
https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/
I really don't think it's a defensible position that you can't offer a probability estimate until you have a working theory of how fish are "supposed" to feel pain. Whatever the physical mechanism, there is obviously some evidence that they do, even if ultimately you think it is weak. Your credence certainly shouldn't be 100%, as you almost seem to imply! So what is it roughly? How confident are you that they don't? Because that's quite important in this context, where the scale of suffering, if it exists, is so vast.
Well, I'll ask you: what is your credence in the idea that the grass in your backyard feels pain? Some scientists have argued that plants can feel pain based on plant "behavior", such as reactions to stress.
Seems like a bit of a deflection- and if you're implying any equivalence here, I think that's very misguided. There's vastly more behavioural evidence of subjective experience of suffering in fish than grass, and vastly less mechanistic reason to doubt it, given that grass literally does not have a brain, nervous system etc.- things that seem almost certainly required for subjective experience of pain- whereas fish merely don't have certain particular structures which might plausibly be correlates (although as I say I did find this a fairly convincing argument, pending further investigation of e.g. Key's work, and it has reduced my credence significantly in fish pain even if it's still well north of 50%). These are *very* different propositions, even if both are primarily justified in terms of behavioural observations.
But in order to avoid hypocrisy, I'll bite: I'd give it somewhere in the region of 2%. Mechanistically, it seems almost impossible, but ultimately consciousness is still mysterious, and it's totally possible that we are completely wrong about how it arises. I have to have some scepticism of my conclusions in an area of such uncertainty, so the minimum credence I can really give to any living organism experiencing pain is probably about 2%.
Yeah, I find this persuasive given my previous training in statistics but it's frustrating to spell it out, isn't it?
In general, these "put a probability on it" arguments have weaknesses well-described in Wolfgang Spohn's book The Laws of Belief. For anyone looking at this thread and wanting more fully spelled out versions of "why not put a probability on it," I highly recommend it.
One counterargument is that such probability point estimates should also include prediction intervals. One might then argue that, given the state and quality of evidence in the field (& the impossibility of knowing whether any particular *other* thing has subjective experiences, by definition), such prediction intervals should be wide enough to include the full interval [0, 1].
If one believes that prediction distribution is uniform or unknowable, then providing a point estimate is at best useless and at worst misleading.
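The interval point can be illustrated with a toy comparison (all numbers are purely illustrative, not estimates from this debate): two observers can report the same point estimate while carrying very different amounts of information.

```python
# Two belief states with the same point estimate but very different
# uncertainty. Numbers are purely illustrative.
beliefs = {
    "near-total uncertainty": {"point": 0.5, "interval_95": (0.025, 0.975)},
    "well-informed":          {"point": 0.5, "interval_95": (0.45, 0.55)},
}

for name, b in beliefs.items():
    lo, hi = b["interval_95"]
    width = hi - lo
    # A 95% interval spanning nearly [0, 1] makes the 0.5 point
    # estimate close to uninformative on its own.
    print(f"{name}: point={b['point']}, interval width={width:.3f}")
```

This is the sense in which a bare "put a probability on it" demand can mislead: the point estimate alone hides whether the underlying belief is sharp or nearly vacuous.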
Excellent article!
As a note (& as I mentioned elsewhere in the comments), “autophagy” seems to be misused in the article you cite (it’s a cellular waste removal process), and instead “autosarcophagy/autocannibalism” seem to be the terms the authors mean. However, autosarcophagy is apparently quite rare, and I can’t find any evidence for such behavior in shrimp. Would be great if you have other sources documenting the behavior!
Beyond that, I would note that the arguments from nociception are potentially more robust, given the uncertainty of both what consciousness is (many competing theories, including in neuroscience, and there’s certainly no consensus that cortical-like structures are required, though some hypothesize recurrent dynamics that some simpler brain structures don’t seem to support), and how it might be detected. Moreover, consciousness is necessary but not sufficient for pain, as demonstrated in persons with congenital insensitivity to pain (CIP): https://en.m.wikipedia.org/wiki/Congenital_insensitivity_to_pain.
So, nociception seems to be a better target to rule out if possible. A counterargument that animals without nociceptors or enough relevant nerve fibers could still feel pain could certainly be mounted. In particular, it could be that there might be different receptors and fibers involved in other orders of animals. However, given the conservation of neural receptors, that would likely be a weak argument (strong of course in the case of alien life).
Ideally, I think a better understanding of the neural correlates of pain sensation (especially in intensity and duration) would lead to better diagnostic criteria: eg, if, in humans, we can consistently predict the experience of pain from neural recordings, and observe no similar *general* (ie, not specific to a particular brain structure) correlate in some target animal, then we can more confidently infer a lack of pain sensation in that animal.
That would require a lot more advances in neural recording technology, however (fMRI is notoriously noisy, finicky, and inconsistent). Until then, *careful, well-thought-out* behavioral assays are seemingly the best proxies. Such assays would have to focus on expected behaviors for lasting pain, unfortunately, since a null, robot-like model would predict avoidance and escape of harmful events. However, such robot-like models generally wouldn’t predict the sorts of behaviors that humans and other animals like dogs exhibit with persistent pain. Certainly, there are ethical considerations to be weighed here, but, given the risks and benefits outlined well in your article, such experiments are probably justifiable. There are certainly limitations to inferences from behavior, but in many cases at least (eg, those where nociception can’t be ruled out), well-designed behavioral experiments might still be the best we have.
Great comment. I want to make a reply and then offer some meta-commentary on why I think Brian Key's approach is so compelling to me.
First, I would say that nociception is a valuable thing to note and that, as James Rose states here, observations about the lack of nociceptors among fishes can make the case against fish pain stronger than a case that relies purely on brain structure.
https://www.wellbeingintlstudiesrepository.org/cgi/viewcontent.cgi?article=1049&context=animsent
That said, I disagree with this:
>> Beyond that, I would note that the arguments from nociception are potentially more robust, given the uncertainty of both what consciousness is (many competing theories, including in neuroscience, and there’s certainly no consensus that cortical-like structures are required, though some hypothesize recurrent dynamics that some simpler brain structures don’t seem to support), and how it might be detected.
I have some frustration with the idea that science has made essentially no reliable progress in understanding consciousness or its origins, and that therefore we need to treat the brain as a black box and look to other things to determine whether an organism has pain experience. Plato knew that consciousness was somehow related to the brain; I think we, or at least scientists who work in the field, can safely say we have moved beyond that level of understanding.
Yes, there are those like the Damasios who claim that the brain stem or some other part of the brain supports consciousness. But Key has evaluated their arguments and written about why they aren't compelling. I appreciate that he is not afraid (for lack of a better term) to take cutting edge research and theories about how and what the brain is doing to produce pain experience and apply them to answer fundamental questions about the experience of other organisms. Some might argue that this is a flaw of mine, but I find the endless treatment of consciousness as an unknowable mystery to be tiresome and something that might just be impeding our progress and understanding of it. If people really disagree with Key, they should focus their critiques on alternative physicalist explanations for consciousness rather than gesturing at the idea that other parts of the brain might be able to produce it.
I share your frustration, though have a different perspective on it. I’m actually a computational neuroscientist, and have a background in consciousness (C) studies (well, specifically, philosophy of mind). I left philosophy because it tends to go in circles. While David Chalmers laid out an excellent case for bringing C into the fold of science, he himself would argue we have yet to achieve that.
I encourage you to look up the recent adversarial competition wherein 2 of the main theories in neuroscience for C were tested, with both found wanting (Chalmers won that bet). Interestingly, 1 of these 2 (IIT, Integrated Information Theory) is self-admittedly a form of panpsychism, despite originating from a neuroscientist, Tononi.
While I could delve deeper, I think that the fact that one of the supposedly most-promising scientific theories of C today implies that non-brain entities may themselves have C supports the argument that C may simply be beyond current objective science, or at least neuroscience, well enough on its own
When C is studied more carefully in a neuroscience context, the practitioners are careful to insist they are studying the *correlates* of C, rather than C itself.
All of this is to say that, while I find C unquestionably fascinating, the science of C itself is so far from any framework of agreement that any specific theory can very reasonably be taken with a large shaker of salt.
Hence the desire to find non-C criteria with respect to the current discussion.
In summary, many (unclear what proportion) neuroscientists, certainly myself included, would argue that C is simply not a concept that necessarily fits within the present framework of objective science, whereas the correlates of C clearly do. They (we) of course may be wrong, but it means it’s difficult to use C in any consensus manner to make judgements about other mental facets, like pain
Thanks for this reply. Good catch about the autophagy/autosarcophagy distinction; I haven't looked into it in shrimp beyond what I saw in the article linked. It may be that the information in the article is wrong, so I'll update accordingly. I'm out hiking all weekend, so I'll have a fuller response to your comments and others' later. Take care!
Just as a quick update, in the Wikipedia article on autosarcophagy (https://en.m.wikipedia.org/wiki/Autocannibalism#), they do mention that sometimes autophagy is used instead (they cite an article that interchangeably uses self-cannibalism and autophagy). Given the predominance in biology of autophagy as the name of the cellular process, and the availability of other clear terms, I personally think this is poor practice.
“A similar term that is applied differently is autophagy, which specifically denotes the normal process of self-degradation by cells. While typically used only for this specific process, autophagy has nonetheless occasionally been used as a general synonym for self-cannibalism.[6]”
Enjoy your hike, and look forward to any further response you might provide!
By the way, quite looking forward to your reply in the comments in BB’s recent piece. The analgesic response to me is the most compelling datum
https://benthams.substack.com/p/bad-arguments-against-fish-pain
Working on it ;)
For decades, doctors denied infants pain relief, saying they didn't feel pain.
Doctors experimented on Black women without anesthesia, saying they didn't feel pain.
Researchers have experimented (and still do) on animals without pain management because, they said, animals don't feel pain like humans do.
All of these, of course, proved false.
I say we err on the side of caution and kindness.
When in doubt, do no harm.
Prominent people claiming to represent the invertebrates in this debate have been claiming it would be better if they did not live, and in fact better if they were destroyed. That is not an argument for "first, do no harm"; it's much more like the argument by early 20th-century eugenicists that it would be kinder to "lesser races" to prevent them from being born through sterilization.
I'm with you on erring on the side of caution and kindness, too, and I wish that's what these people were arguing for rather than a mass insecticide.
Yes!! It strikes me that this is brought up less than it should be because many of the opponents of this stuff are coming from the typical POV that animals are just not as important as people. But if you take the abuse of utilitarianism here seriously it leads to horrific conclusions.
So you're saying shrimp need this movement like a fish needs a bicycle?
Does it matter if shrimp feel pain? No. It does not.
Should we apologize to each shrimp before we unceremoniously execute it? No, we should not.
Should we anathematize shrimp so as to make their brutal killing more humane? No. We should not.
Should we pay reparations to the shrimp dynasty, were we able to organize them into one, for the incalculable number of shrimp that have been consumed by humans since humans discovered shrimp could be eaten?
ffs
Shrimp are delicious and nutritious. I am very glad and grateful they are part of the food chain.
This reminds me of the dog mirror test. Gosh, they failed. Despite all experiential evidence to the contrary, we should assume they have no sense of self. Oh well, they're animals so they're stupid. Waaaait, they're smell dominant. Turns out they recognize their own smell. Whoops.
The amount of motivated reasoning and arrogance here is depressing. We understand very, very little about even our own consciousness. We almost definitionally can't understand consciousness that is meaningfully different from our own. If you think logically, pain is an incredibly useful signal, both for learning and for healing. Our bias should be strongly toward the assumption that animals feel pain.
So... we have ~zero understanding, except that pain is a useful signal, and we know that we don't understand all the ways it could be signaled and experienced. Of course we should ignore that and assume we're doing no harm, because it's convenient and lets us rationalize never considering something troubling.
Is “pain” defined in this article?