As a follow-up to my previous post about "multiple realizability," here is Putnam on the thesis:
if we can find even one psychological predicate which can clearly be applied to both a mammal and an octopus (say, "hungry"), but whose physical-chemical "correlate" is different in the two cases, the brain state theory has collapsed. It seems to me overwhelmingly probable that we can do this (Mind, Language, Reality, p. 436).
But can we? What reason is there to believe that an octopus, say, is hungry like us or
feels pain like us? It seems that we have to resort to analogical arguments in this case. For example:
- When human beings eat, it usually means that they feel hungry.
- Like human beings, octopuses also eat.
- (Therefore) Like human beings, when octopuses eat, it usually means that they feel hungry.
Similarly:
- When a person's finger is pricked with a needle, s/he withdraws his/her finger because s/he feels pain.
- Like humans, when a chimpanzee's finger is pricked with a needle, it withdraws its finger.
- (Therefore) Like humans, when a chimpanzee's finger is pricked with a needle, it withdraws its finger because it feels pain.
What do you make of these analogical arguments? Are they strong? If not, what support is there for the multiple realizability thesis?
Although multiple realizability is often posed as an empirical thesis, it can also work as simply a hypothetical. We can imagine an entity which experiences pain even though its "physical-chemical correlate" is completely different from ours. There is nothing logically impossible about this unless one begs the question. If you want to say that a human and a reptile and an alien don't all feel pain, then you run the risk of species chauvinism. We certainly don't want to say that all animals are simply not conscious; in fact, I find it quite obvious that my dog Sophie is conscious.

I also find the distinction between nociceptive and affective pain unhelpful in cases like Sophie's. For example, if I accidentally step on Sophie's tail, she will exhibit a nociceptive reaction - namely, she will jump up in pain. She will also exhibit an affective reaction by cowering in the corner as if she has done something wrong, even though it was purely accidental that I stepped on her tail.
There are two responses one might give. First, one might question how I know Sophie is exhibiting an affective response at all. Again, this looks like an obvious case of "speciesism" about consciousness. Sophie knows and trusts me; she knows that I take care of her and feed her, so if I have done her physical harm, this is out of character for me. She is feeling hurt. (Pun intended.) I can safely assume this in the same way we all safely assume that other people have certain intentional states like beliefs, desires, etc. We assume that if we were in someone else's position, and they are fully rational and in possession of their normal capacities, they would be thinking X. Note also that this doesn't somehow pragmatically solve the "problem of other minds." In real life, the problem of other minds doesn't exist.

Granted, then, on the scale of biological taxonomy dogs are much closer to us than reptiles or octopi or aliens. This doesn't preclude reptiles or octopi or aliens a priori from having these sorts of affective responses; it merely makes it more difficult for us as humans to adopt the intentional stance towards them. Moreover, the fact that there seems to be (as Putnam claims) at least one counterexample, namely Sophie, suggests that the type-physicalist thesis is lacking in some way.
The second response might grant all of this and still maintain that MR is flawed because Sophie and I do not feel pains "in the same way." What this means I do not know. I certainly don't know what it's like to feel the pain of my tail being stepped on, because I do not have a tail. However, I would hazard a guess that when Sophie is bitten by a mosquito on her belly and then scratches it, it likely feels something like what mosquito bites feel like to me. Similarly, when Sophie is bitten by another dog while playing, it probably feels something like what it feels like to me when Sophie and I are playing and she bites at my hand. I can guess at all of this by adopting the intentional stance, but what I have left out is the empirical evidence. What I wonder is: what sort of empirical evidence would validate these sorts of claims? Given that pains are inherently subjective experiences, I doubt that any such evidence is available. Until we can successfully account for consciousness and qualia, we won't have any better answers than we get by adopting the intentional stance. It's easy to attribute intentionality to other humans, easy to attribute it to animals similar to us in the biological taxonomy, and much harder to attribute it to remote entities like aliens and computers – but this says more about our own epistemic limitations than about the unlikeliness of MR.
If the MR thesis is a conceivability claim, as you seem to be saying, then it says that it is conceivable that the same mental kind is realized by distinct physical kinds. But what follows from that? Even if the physicalist is willing to grant that whatever is conceivable is logically possible, all that follows from "it is conceivable that the same mental kind is realized by distinct physical kinds" is that "it is logically possible that the same mental kind is realized by distinct physical kinds," or, if you prefer, that there is a possible world in which the same mental kind is realized by distinct physical kinds. But so what? How do we know that this possible world, in which the same mental kind is realized by distinct physical kinds, is the actual world? After all, physicalism is a thesis about the actual world, not some possible world.