
A Mathematical Perspective on Consequentialist Ethics – Kagan and the Heap Paradox (Column 253)

This is an English translation (via GPT-5.4). Read the original Hebrew version.

With God's help

In the previous column I described Parfit's efforts to find a utilitarian basis for morality and to defend it against the challenges posed by dilemmas of collective action. My conclusion there was that, at least within Parfit's model, there is no way to do so. I will now continue the discussion to the next stage: Kagan's argument.

Kagan

Shelly Kagan's article was written in response to the aforementioned chapter in Parfit's book. It points to many contemporary implications of his analysis for issues that loom large in our world (ecology and air pollution, vegetarianism, and the like). A considerable part of the article is devoted to presenting the problems and distinguishing between different types of them (and there are differences). The main distinction in the article is between collective actions that involve a threshold of effect (that is, a minimum quantity of actions that creates an effect, below which there is no effect at all) and cumulative situations (that is, situations in which each action contributes a very small amount and only the accumulated contribution of many actions is noticeable, with no defined minimum threshold). For example, in elections in Israel there is a threshold of effect: when one reaches a certain number of voters (roughly, the number of votes needed for one seat), a significant effect is created; until then, there is no effect. Cases of cumulative effect (without a threshold) appear in examples such as air pollution or the darkening of the sky by pollutant particles, where there is no defined threshold and each person contributes his negligible share.

Elections are a clear example of collective action with a threshold of effect. For the purposes of the discussion below, let us take an analytically convenient example of collective action without a threshold. Imagine a person strapped to a torture machine. The machine has a thousand buttons, each of which sends a tiny electric current into the person's body. If all the buttons are pressed, a very strong current is created and the suffering of that person is very great. But the effect of any single button is imperceptible. Now a thousand people are called upon, each to press one of the thousand buttons. Each of them wonders whether to press his own button, when it is clear that his press by itself is insignificant (the victim will not feel its results at all). But if they all press their buttons, a very strong current is created, and therefore tremendous suffering. In such a situation, is there a moral prohibition on any given person pressing his button? In this example there is no minimum threshold; rather, each press adds something tiny and imperceptible to the overall pain, and the suffering is produced by the accumulation of all the presses.

Why is this distinction important? Kagan explains that cases in which there is a threshold are resolved, to his satisfaction, by the model Parfit proposed (that is, calculating the expected value of the individual act in such cases yields a meaningful result, and therefore it is forbidden to do it), as we saw in the previous column regarding elections. The expected benefit is the overall result obtained at the threshold state divided by the number of actors (since one over that number is the probability that I am the one who caused the threshold to be reached), multiplied by the number of people to whom the harm or benefit is caused. Kagan's main addition is that in cases without a threshold, this entire calculation is irrelevant. If we calculate the expected value of the benefit (or suffering) caused by each button press on the torture machine, we get something null and insignificant. Therefore, if a person is paid ten shekels for such a press, consequentialist reasoning says that from his standpoint it is entirely legitimate to press the button.[1]
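The threshold calculation can be sketched numerically. Everything below is a toy illustration of my own: the function and all the numbers are hypothetical placeholders, not figures from Parfit or Kagan. The point it shows is structural: the election-style case multiplies a small per-person result by millions of affected people, while the torture machine affects only one victim, so the same formula collapses to almost nothing.

```python
# Sketch of the expected-value calculation for threshold cases.
# All numbers are hypothetical placeholders, not figures from the article.

def expected_value(result_per_person, num_affected, num_actors):
    """Expected value of one individual act in a threshold case:
    the probability of being the pivotal actor (1/num_actors)
    times the total result produced at the threshold."""
    total_result = result_per_person * num_affected
    p_pivotal = 1.0 / num_actors
    return p_pivotal * total_result

# Election-style case: a small per-citizen benefit, multiplied by
# millions of citizens, keeps the expected value of one vote meaningful.
ev_vote = expected_value(result_per_person=100.0,
                         num_affected=5_000_000,
                         num_actors=5_000_000)   # ~ 100

# Torture-machine case: only one person is affected, so the same
# calculation yields something negligible.
ev_press = expected_value(result_per_person=100.0,
                          num_affected=1,
                          num_actors=1_000)      # ~ 0.1
```

With these (made-up) numbers the asymmetry Kagan relies on is visible at a glance: the per-act expected value in the single-victim case is three orders of magnitude smaller, and easily outweighed by a ten-shekel payment.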

He is therefore left to examine how one might solve the thresholdless examples, that is, how one can justify a prohibition on air pollution or pressing a button on the torture machine on the basis of act-consequentialist considerations. Kagan's answer is astonishingly simple, if surprising: there simply are no such cases. He argues that all cases of collective action are cases with a threshold (regarding which Parfit already offered a satisfactory solution, as above). There are no examples that are genuinely cumulative in character, that is, in which no threshold exists. He explains this by means of the argument I will now describe.

Kagan's argument against the existence of thresholdless collective actions

To sharpen the argument, I will focus on the case of the torture machine. We ask the victim whether it hurts, and he answers yes or no. When one button is pressed, the victim says it does not hurt. When all are pressed, he says it does hurt. If so, there must be some number of buttons from which point onward the victim will answer that it hurts. Therefore, it is clear that even in such a case there is a minimum threshold of harm (a negative result). So this is not a case of cumulative collective action but of action with a threshold. The same, of course, applies to all cases of cumulative actions, by the same logic.

Kagan himself clarifies that he does not mean that if we press the buttons one after another, the victim will be able at some stage to tell us exactly when it began to hurt. He too agrees that the difference in moving from one particular number of buttons to the next is imperceptible. If, in the process I described, we ask after each additional button whether it hurts more than in the previous state, he presumably will not be able at any stage to point to a jump in the level of pain. But in his hypothetical experiment we ask the victim an absolute and not a relative question: "Does it hurt?" In such an absolute question, the comparison at each number of buttons is made with a state in which no button is pressed (state 0). This is not an ongoing experiment but a collection of many different experiments, each conducted separately. Each time a different number of buttons is pressed and we ask whether it hurts. Kagan's argument shows that there must necessarily be an experiment with some particular number of buttons pressed in which we already receive the answer that it hurts. At every number from that point upward, the victim will answer that it hurts.

Let us formulate this differently, in a way very similar to the Sorites paradox. We have three assumptions, each of which seems self-evident (they are simply facts):

  • There is no difference in the level of pain between a state with n buttons pressed and a state with n+1. Adding a button does not change the sensation of pain.[2]
  • When no button is pressed, there is no pain.
  • When all the buttons are pressed, there is pain.

But the combination of the first two assumptions obviously contradicts the third. After all, if 0 buttons do not hurt, and adding a button does not change the situation, then one button also does not hurt. And if one button does not hurt, then by the first assumption two buttons also do not hurt. And so on, up to a thousand buttons, at which point there is still no pain. But that contradicts the third assumption. Something is rotten in the state of Denmark, and the natural candidate is the first assumption: it seems it is simply not true. In other words, we have proved that there is some number of buttons from which point onward it does begin to hurt our victim (again, the issue is not the difference from the previous state, but the comparison to the state of zero buttons).
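Kagan's distinction between the relative question ("does it hurt more than a moment ago?") and the absolute question ("does it hurt, compared with state 0?") can be made concrete with a toy model. The pain increment and the perception threshold below are invented numbers of my own, chosen as integers so the arithmetic is exact; nothing here comes from Kagan's article beyond the structure of the argument.

```python
# Toy model of the torture machine. The increment and the perception
# threshold are invented illustration values, in arbitrary "milli-units".
INCREMENT = 1      # pain added by each button press
PERCEPTIBLE = 50   # smallest total pain the victim can report feeling

def hurts(num_buttons):
    """Kagan's absolute question: compared with state 0, does it hurt?"""
    return num_buttons * INCREMENT >= PERCEPTIBLE

# The relative question: no single step from n to n+1 buttons adds a
# perceptible amount of pain.
no_step_is_perceptible = all(
    ((n + 1) - n) * INCREMENT < PERCEPTIBLE
    for n in range(1000)
)

# The absolute question: there is nevertheless a first number of
# buttons at which the victim answers "yes" (here, 50).
first_yes = min(n for n in range(1001) if hurts(n))
```

The model reproduces exactly the situation in the three assumptions: every adjacent pair of states is indistinguishable, state 0 does not hurt, state 1000 does, and yet a threshold exists when each state is compared with state 0 rather than with its predecessor.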

I will add two further necessary clarifications (which Kagan himself already noted). First, he does not mean that there is some number of buttons from which point onward the pain is already full-blown. The claim is that there is some number of buttons at which the pain is already felt relative to the initial state. Of course, this is still not necessarily the level of pain present when all thousand buttons are pressed, but it is enough to say that the assumption that each additional button has no perceptible effect is incorrect. Second, Kagan does not mean to claim that if we repeat this experiment we will always get the same number. Certainly not. The claim is that there will always be such a number (=threshold) between 0 and 1000, and therefore, in his view, in examples of cumulative collective action there must be a threshold. There are no collective actions that are truly cumulative.

Critiques of Kagan's argument

I must say that, on its face, Kagan's discussion seems very strange to me. Even if we assume he is right, that is, that there really is some threshold in the case of the torture machine, I still do not see how Parfit's model applies here. After all, the expected value of the harm I cause in this case is negligible. Whether we define the harm I cause in probabilistic terms (what is the chance that I am the one who pressed the threshold button, that is, the one who changed the situation) or in per-person-result terms (the pain caused after the threshold divided by the number of people who participated in producing it—a quotient that, for present purposes, we may assume to be as tiny as we like), the negative result attributed to me is utterly trivial. Note that unlike the example of elections, here there is no need to multiply the harm by a large number of people, since the suffering is caused to only one person (the victim). True, on the assumption that not pressing the button carries no cost whatsoever, perhaps even a very small negative result is enough to define pressing it as an immoral act. But if we speak of a situation in which I am paid ten shekels to press it, I no longer see any consequentialist justification for saying that it is forbidden to press.

But even if we ignore this objection, there is an obvious objection by Julia Nefsky to Kagan's argument.[3] Nefsky argues that pain is a vague thing, so that one cannot know exactly what level of pain one is feeling. At this point the familiar argument against vague concepts arises, what we have often called the heap paradox. She argues that if we apply Kagan's logic to other vague concepts, we will arrive at absurdities, and this proves that his logic is incorrect.[4]

Let us illustrate the heap paradox using the concept of a "bald person":

  1. Assumption: no single hair on a person's head changes his status (from being bald to having hair).
  2. Assumption: a person with 0 hairs is bald.
  3. Conclusion: a person with 100,000 hairs is bald.
  4. Assumption: it cannot be that a person with 100,000 hairs is bald.
  5. Therefore, assumption 1 is false.

Kagan's argument that we encountered above looks very similar. And if this logic leads to a contradiction regarding baldness, Nefsky argues, then presumably it also fails with regard to the example of pain in the torture machine.

Rejecting Nefsky's objection

A careful reading of Kagan's article shows that he was well aware of the heap paradox. That is why he was careful to say that he is speaking about a comparison between every state and state 0, and not between every state and the one preceding it. He avoids making such a direct comparison, because there he admits that the answer will never be positive. This means that, from his standpoint, assumption 1, which generates the paradox in the heap argument presented by Nefsky, speaks about the addition of the button now being pressed relative to the previous state (that is, the pain added in the state of 768 buttons as compared to 767 buttons). But, as noted, his argument is not about the direct relation between those two states, but about the question of when the victim will report a perceptible addition of pain relative to state 0. Here, he argues, it is clear that at some stage the answer will be positive. This is a different argument, and it does not generate the contradiction of the heap paradox. Put differently, the heap paradox leads to the conclusion that the concept "bald" is not binary: there is no sharp line (no particular number of hairs) that distinguishes bald from non-bald, and such a sharp line is precisely what a threshold in Kagan's sense would be. But Kagan is not speaking about a binary transition; he is speaking about some perceptible stage in the progression relative to state 0, and prima facie his argument proves that.

Beyond that, even in Nefsky's formulation, one could infer from this exactly the opposite: that there must be some specific number of hairs at which a person ceases to be bald (or acquires hairiness to a perceptible degree). In such a case we have rejected assumption 1, and everything remains as it was. After all, that is exactly what Kagan did: he proved that there is a threshold. Nefsky, for some reason, simply assumes it is obvious that there is no threshold, and from there she derives a contradictory conclusion, full stop. Moreover, it may be that with respect to baldness it really is implausible that there is some specific number of hairs that constitutes a threshold. But does that mean that in every kind of problem of this sort there is no threshold? That is no longer a question that depends on the logic of the paradox but on the context to which it is applied. There is no obstacle to saying that Kagan is right with respect to pain even though with respect to baldness this is not true. To refute Kagan's argument we must point to a specific flaw in the logic of his argument. An analogous example whose conclusion seems absurd to us is not enough for that.[5]

Interim summary and a methodological remark

Kagan's conclusion is that even in these cases there is a threshold, and therefore all questions of collective action are threshold questions. Kagan too agrees that the only explanation one can offer for a moral prohibition in such situations, within the consequentialist picture (of act consequentialism), is Parfit's solution.

But in light of what we saw above, that solution is mistaken. If so, then even if Kagan is right and all collective actions are actions with a threshold, we still do not have a consequentialist explanation for their immorality. But even if Kagan is not right, that is, even if there are examples of cumulative collective actions (without a threshold), then we have an admission against interest (from Kagan himself) that Parfit's solution does not help with them and that there is no other solution (which is why Kagan needed his argument proving that there are no such examples).

The gloomy conclusion we arrive at is that, one way or another, there is no act-consequentialist way to ground a prohibition on an action within a collective framework. That is, according to the consequentialist there is no problem with polluting the air, participating in torture, evading taxes, voting for a Nazi party, consuming food from animals and causing the torture of animals, pressing a button on the torture machine in exchange for a token sum, or any other such example. If you are an act consequentialist, you must admit that in all these actions there is nothing morally wrong.

If we return to the general scheme presented at the outset, what exactly are Kagan and Parfit doing? It seems that they posit a theory (utilitarianism) and derive conclusions from it. Ostensibly this is a paradigmatic move of a divine soul, that is, rational movement from theory to applications, and the subordination of feeling to intellect. But this is an illusion. For they presuppose in advance the correct answer in every situation (that it is forbidden to take part in all these kinds of actions); that is, it is intuitively clear to them that in all situations of collective action there is a moral prohibition on the individual. But this does not fit with act utilitarian theory. What do they do? We are supposed to choose whether to reject the theory or reject our moral intuitions. But instead of rejecting the theory or rejecting the premise (that there is a prohibition in such an act), they look for a different mode of calculation that will fit the theory to our intuitions, so that one can remain with both utilitarian theory and our moral intuitions. And lo and behold, surprisingly, they do indeed find such a calculation. But there is a catch. As we saw in the previous column, these calculations are not correct. The anticipated gain (as distinct from the expected value of the gain) in almost all these cases is nil, that is, it does not outweigh the cost of not doing the act, even if that cost is not especially high (the time required to go vote, or the cost of buying an environmentally friendly product). If so, we are back at our point of departure: there is no consequentialist reason to go vote or not to pollute the air. Once again we are back at the dilemma, and this time there is no way out: either reject the theory or reject the intuitions. Whoever believes in those intuitions and forbids collective actions must give up consequentialism.

But my main criticism of the discussions conducted by Parfit and Kagan is entirely different. In my opinion there is in their work (and not only theirs) a conceptual and philosophical confusion as well, not only a probabilistic error. To present this, I need to begin with a preliminary remark.

Two types of utilitarianism

I want to distinguish here between two levels of discussion that many people conflate—the question of the basis of validity and motivation of the moral act, and the question of the definition of the moral act:

  • The basis of morality. The first question to discuss is: why be moral? What is the reason that morality is binding? Why do we make claims against a person who behaves immorally? In this context, utility, or consequence, is brought as a justification for the obligation to behave morally. If we do not behave morally, the world will be bad and we will all suffer; that is, the utility or consequence will be negative. Needless to say, this argument obviously depends on how one interprets the concept of utility (utility to me, to my community, or to the whole world? Material utility, psychological utility, or something else? More on this below). By contrast, deontological morality, whose clearest representative is Kant, holds that moral duty is not based on considerations of utility, nor on facts at all (grounding values in facts is the "naturalistic fallacy"). Moral duty is based on obedience to a categorical imperative that requires me to behave morally. The good is a reason unto itself and requires no justification outside itself, certainly not a justification in terms of consequences. This is a discussion in metaethics.
  • The definition of morality. The second question to discuss is what a moral act is. According to the utilitarian approach, a moral act is the act that brings the greatest utility (subject, of course, to the definition of utility, as above). There are other approaches according to which utility does not define the moral act. Morality may perhaps be understood as a mode of conduct that expresses the person's own human perfection, and not necessarily as bringing the greatest utility to him or to his surroundings (although even that can be defined as utility. Utilitarianism is not necessarily hedonism).[6] In any case, this discussion is an ordinary ethical discussion (and not a metaethical discussion like the previous one).

There is, of course, a connection between the two questions, although they are not identical. If the justification for morality is the utility in it, then it is reasonable to define the moral act as the act that brings us the greatest utility. But if the justification for morality is deontological, that still does not necessarily rule out defining the moral act as the act that brings the greatest utility: I should do it not because of the utility but because of the categorical imperative. But what should I do? Which acts does the categorical imperative require of me? Even the deontologist can agree to a consequentialist criterion: the act in question is the act that brings the greatest utility (see in Column 122 on the complex connection between deontology and utility, and more below).

In my opinion, the first question is a pseudo-question. Whoever sees utility as the binding basis of morality is not speaking about morality. If it is a matter of personal utility, then we are dealing with action for the sake of interest and not with moral action. But even if it is a matter of general utility, even for the whole world, utility cannot constitute the basis of moral obligation. The reason for this is the naturalistic fallacy, that is, the principle that one cannot derive a norm from facts. The fact that an act has some utility is a fact, and as such it is not sufficient to validate a moral norm. The validity of morality, by definition, must be based on a categorical imperative, that is, an imperative whose obligation is unconditional (by facts or by anything else. It is a categorical duty).

With respect to the first question, the deontological answer is the only possible one. Whoever says otherwise is mistaken (that is, he is not speaking about morality but about something else. See Column 251 regarding disputes about definitions, and Columns 223 and 248 on philosophical disputes).[7] On the metaethical plane, all moral theories must assume deontological morality, that is, morality whose validity is based on what is right (a categorical imperative) and in which the person who responds to it acts by force of obedience to the moral imperative. The only question that can be disputed is the second one: what does the categorical imperative require of me? The greatest utility (and that is utilitarianism), or perhaps something else, such as human perfection or conduct according to a law that I would want to be universal (that is Kantian ethics), and the like. This is the meaning of deontology on the ethical plane (and not the metaethical one).

But this distinction is even stronger than I have described. Think, for example, about Kant's ethics. Kant's criterion for the moral act is to do what I would want to become a universal law (see the fourth notebook, part 3, and Column 122). The question now arises: what is it, in fact, that I want to become a universal law? What is the criterion that determines whether some act is worthy of being a universal law and of being considered moral or not? Seemingly there are two possibilities here: a. there is another, deontological criterion, which I too consider worthy of becoming a universal law. b. the criterion is the greatest utility. It seems to me almost necessary to interpret Kant in sense b, for if the criterion for the act too is that it be good in some other sense (not a consequentialist one), then the categorical imperative, which comes to answer the question of what a moral act is, remains trapped within a circular definition. The good is what I would want to become a universal law, and what it is fitting to want to become a universal law is what is good. In other words, even after the categorical imperative, Kant does not really offer a definition of that good. So what have we gained from the whole discussion?! It is far more reasonable to argue that the criterion is the greatest utility for the greatest number of people (and for society as a whole). That is what determines whether I want such an act to become a universal law or not. Now the formulation is not circular: the good is what brings the greatest utility. So what in Kant's ethics is deontological? Very simply: what Kant innovates is on the first plane (the metaethical one): that the motivation for acting is not utilitarian but the categorical ethical duty as such.

Think, for example, about the calculation Parfit made (in the previous column). We saw that in his view the utility of voting in elections is ten thousand dollars. But who gains that profit? The public as a whole. So why should I, as an individual voter, go vote for that? I myself will gain only about five hundred dollars (a hundred dollars a year for about five years), multiplied by the chance that this will happen—and that chance is negligible. Now not only is the chance of receiving the gain negligible, and not only is the anticipated gain (which is not the expected value, as we saw in the previous column) negligible, but even the expected value of the gain in this case is negligible. From the standpoint of consequentialism for a private individual, I have no moral duty to vote. The determining sum is not ten thousand but less than a cent. Note the significance of this: contrary to Parfit's claim, in an act-consequentialist picture the large number of citizens plays no role in the calculation of utility. From this you can understand that in the subtext of Parfit's calculation there really lies a deontological approach. Utilitarianism measured as the gain of the public as a whole is not really utilitarianism but deontology in utilitarian guise. Once again we encounter the same conflation between utilitarianism as moral motivation and utilitarianism as the criterion for defining the moral act.[8]
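The per-person arithmetic in this paragraph can be written out. The figure of about a hundred dollars a year for about five years comes from the column; the probability of casting the pivotal vote is a placeholder of my own, standing in for "negligible".

```python
# Per-person expected value of voting, on an act-consequentialist reading.
# personal_gain follows the column's figures; p_pivotal is a hypothetical
# placeholder for a 'negligible' chance of being the decisive voter.

personal_gain = 100 * 5        # dollars: ~$100 a year for ~5 years
p_pivotal = 1 / 1_000_000      # hypothetical chance my vote decides the outcome

ev_for_me = personal_gain * p_pivotal   # ~ $0.0005, well under a cent
```

Whatever small value one plugs in for the pivotal-vote probability, the result stays far below the cost of going to the polling station; the ten-thousand-dollar figure only appears if one sums the gain over the whole public, which is exactly the move the column identifies as covert deontology.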

It is worth noting the significance of the distinction I have presented here. First, it almost completely erases the dispute between deontologists and consequentialists. On the plane of motivation, the deontologists are clearly right, and on the plane of defining the moral act, the utilitarians (in their various versions) are clearly right. This picture is nothing other than what philosophers call rule consequentialism rather than act consequentialism. In the first column I explained that rule consequentialism determines the moral act not according to its own outcomes but according to the state of affairs that would be created in society at large if everyone acted that way. I said there that this is really covert deontology (because the private individual still acts not for the sake of the utility that he himself will derive, and therefore act consequentialism cannot give him a reasonable rationale to act that way). In light of what I have written here, it is hard to miss the fact that although this approach is classified as a kind of consequentialism, it is really one description or another of Kant's deontological doctrine. This is the place to mention again my columns that deal with philosophical disputes (223, 248, and 251). Here we encounter another example of a dispute that is waged with great intensity, but in truth it is doubtful how far there is any dispute here at all, and insofar as there is, it is clear that one side is right and the other mistaken (this is a conceptual confusion).

This dispute, like many other philosophical disputes, is illusory. There is one correct answer here, and all the others are philosophical mistakes that necessarily generate mathematical mistakes as well (in the calculation of utility). When one tries to square the circle, that is, to create a consequentialist theory for collective cases, one gets nonsense. One cannot square the circle, but the nice thing is that, as we have seen, there is no need to do so at all. From the outset, it is not a circle but a square. We are all the time in the deontological sphere, and therefore there is no need to seek consequentialist solutions.

The ethical conclusion from this whole line of thought is that moral obligation cannot be grounded on consequentialist considerations. The matter is like squaring the circle: either one gives up moral intuitions or one gives up consequentialism. Whoever is prepared to live with the conclusion that there is nothing morally wrong with voting for the Nazi party, contributing to the torture of a person or to air pollution, evading taxes, and the like, may remain a consequentialist. But one cannot have one's cake and eat it too.

And yet, a dispute remains

Admittedly, it is difficult to say that there is no dispute here at all. After all, there is a clear difference between deontology and consequentialism in their evaluation of the moral act. For example, in the question whether there is any difference between an attempted murder that failed and a murder that succeeded: the deontologist sees no difference because the motive was identical, whereas the consequentialist judges by the outcomes. If so, seemingly there is a real dispute between them.

But this is a mistake. The deontologist too takes the consequences into account in his judgment, but only the consequences that were expected at the time of planning and carrying out the act (for that is what determines the act that I want to become a universal law, as I explained above), and not the consequences that actually occurred. And conversely, if the consequentialist takes the actual consequences into account, he is simply mistaken. This has no importance whatsoever on the moral plane. It certainly has importance on the plane of a person's responsibility for the consequences of his act, as I explained in Columns 67 and 229.[9] Incidentally, in Column 229 several of the points that arose here also came up (strained explanations due to being trapped within a problematic framework of thought, as we saw here with Parfit and Kagan, the contrast between specific feelings and theory, and more).

An implication for our discussion: in truth there is no discussion

But if I am right in this distinction, then the discussion of Parfit and Kagan that I have described thus far is completely drained of meaning. I explained that their discussion comes to answer one and only one question: how can a theory of personal utilitarianism (act consequentialism) ground an obligation to act individually in a situation where the results arise only from the actions of the collective? What can motivate a person to act in this way if, from his standpoint, his act has no consequence? Their assumption is that rule consequentialism is not an explanation for the individual's obligation to act in such a way, and in my opinion they are entirely right about this.

But as we saw in the previous section, everyone agrees regarding the metaethical question, namely that the warrant for morality and the person's motive for performing the moral act are deontological and not consequentialist. The debate over utilitarianism can therefore be conducted, if at all, only on the ethical plane: how to define the moral act that deontology obligates. But on that plane there is no problem at all even with rule consequentialism, or with the general consequence of a private act (as in the case of elections), and there is no reason to resort specifically to act consequentialism. Why should there be any problem with a definition according to which each person acts by force of a deontological law (and therefore he is obligated to do his act even though in isolation it has no consequence), but the act that he is supposed to perform is defined as the act that brings the greatest positive collective utility, or minimizes the negative collective utility? Any deontologist can accept this. The fundamental problem that Parfit and Kagan came to confront does not exist at all. They speak about consequentialism on the metaethical plane and therefore see a problem here, but no such consequentialism exists (this is not a moral theory). And on the ethical plane there is no dispute at all over whether the criterion is consequentialist or not. True, if one looks for a per-person outcome as they do, one gets a negligible result and we have no explanation for the ethical law that forbids it. But once we have arrived at Kant, one can simply use the categorical imperative itself and tell the person that he must not do an act whose per-person consequence is nil, because that way of acting is the one fit to become a universal law (and as we saw in Column 122, it is also the only one that brings about the consequences).

I will mention here that in Column 122 I already showed that the distinction between deontology and consequentialism is in fact blurred, and perhaps does not exist at all. I showed there that achieving collective results is possible only when every individual acts not out of consequentialist (personal, that is, per-person) considerations but out of deontological ones. You can now see that this is exactly the picture I have described here. This is where both Kagan and Parfit err, along with many others who deal with this issue. There is no need whatsoever to define consequentialism in terms of a chain of actions, or as collective consequentialism (of a collection of individual acts). From another angle we see here what I argued there: the only, and obvious, solution to the problem of collective consequentialism is the Kantian one. This means that rule consequentialism is not consequentialism but a deontological theory. And once we are dealing with deontology, there is no obstacle to fully adopting Kant's ethics.

An additional note

I will conclude with a remark that really belongs more to the beginning of the previous column, but after distinguishing here between the two planes of discussion (the ethical and the metaethical) I can present it more clearly.

I wrote there that the moral feeling aroused by a certain situation can be interpreted either as an emotion or as a moral intuition. I argued that emotion ought not to influence moral decisions; only the intellect should. But if it is an intuition, there is justification for taking it into account when we come to shape our moral theory and the specific ethical decision for that case, because intuition belongs to the intellect.

Now one can see that the example I brought in that column of such a clash between theory and feeling illustrates this distinction quite sharply. I mentioned there the situation in which a group of people becomes trapped on the summit of a snow-covered mountain and cannot come down. Their food has run out, and they must decide whether to slaughter one of them (to be chosen by lot) and eat him, or to let everyone die. It seems to me that a common approach among philosophers, and among other people too, is that there is a prohibition on slaughtering the person. In the language of the Talmud (said about a completely different case): better that they all die than that one witness his fellow's death. On this basis we are then supposed to revise the utilitarian theory, which says one should slaughter him, since the maximum utility is obtained by killing one of them: at least the rest remain alive.

I think this is an excellent example of an emotion that is not a moral intuition. On a rational calculation, if the two alternatives are either that everyone dies or that one dies and the rest are saved, the second option is clearly preferable. The one who dies would have died anyway, and the gain is that at least the others are saved. Moreover, if we ask each of the people, it is clear that he himself would prefer such a lottery, because it gives him a very good[10] chance of being saved, whereas without it he will certainly die (together with everyone else). This was precisely the subject of my article on separating Siamese twins, and there too I argued the same thing (against the opinion of all the halakhic decisors). And yet I am not convinced that if I ever actually find myself in such a situation I will be equal to it, that is, that I will succeed in carrying out the decision to slaughter the one on whom the lot falls. It is very hard for us to murder a person. But in my view this is only an emotional bias and not a moral intuition, for morality is determined by reason and not by emotion, and reason tells us to slaughter him. This is a good example of the distinction between emotion and intuition.
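The ex-ante calculation behind the lottery argument can be sketched as follows. The group size of ten follows note [10]; the Monte Carlo check and its function names are merely an illustrative confirmation of the arithmetic, not anything from the text.

```python
import random

def survival_probability(n: int) -> float:
    """Chance that a given person survives when one of n is chosen by lot."""
    return (n - 1) / n

def simulate(n: int, trials: int, seed: int = 0) -> float:
    """Monte Carlo estimate: how often a fixed person (index 0) avoids the lot."""
    rng = random.Random(seed)
    survived = sum(1 for _ in range(trials) if rng.randrange(n) != 0)
    return survived / trials

n = 10
with_lottery = survival_probability(n)  # (n-1)/n = 0.9: a 90% chance of surviving
without_lottery = 0.0                   # refusing the lottery: everyone dies
print(with_lottery, round(simulate(n, 100_000), 2))
```

Each person's rational preference for the lottery falls out immediately: 0.9 against 0.0, and the estimate converges on the same figure.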

And again I should note that this consideration, which is ostensibly utilitarian (the greatest utility for the greatest number), does not contradict a deontological approach. As we saw above, even on a deontological approach, the consideration of what state of affairs I would want to prevail as a universal law is a utilitarian one (though of the rule-utilitarian kind: the utility calculation is made with respect to the public as a whole). That is exactly what I did here. Therefore this decision is correct both according to the deontological approach and according to the consequentialist approach.

In the previous column I presented this as a conflict between the moral feeling regarding the specific situation and the theory, and I made the mode of conduct depend on the question of the divine soul (subordinating feeling to theory) or the animal soul (subordinating theory to feeling). Now one can see that this is indeed a matter of emotion, and therefore the consequentialist theory (but also the deontological one) that instructs us to slaughter the one in order to save the rest should remain in force. We must subordinate to it our feeling, which makes it difficult for us to carry out the decision.

In the next column I will point to a perspective from Jewish law on the discussion we conducted regarding collective actions.

[1] As noted above in the discussion of Parfit, in my opinion this solution is plainly mistaken, but here I am only describing the course of Kagan's argument.

[2] Clearly there is a very small change, but pain is what the person feels, and therefore if he answers that it does not hurt him more after the additional button, then from our standpoint the additional button adds no pain. The pain that it is forbidden to cause depends on subjective sensation.

[3] Nefsky, J. (2011). Consequentialism and the problem of collective harm: A reply to Kagan. Philosophy & Public Affairs, 39(4), 364-395. Again, thanks to Noam Oren for the reference.

[4] A methodological note. This reminds me of Kant's critique of Anselm's ontological argument (see my first notebook). Kant does not point to the flaw in Anselm's argument, but only to the absurdity that emerges from accepting it. This is a very problematic methodology, since one can infer from it exactly the opposite: if the argument is valid and no flaw has been found in it, then the conclusion that seems absurd to Kant may perhaps not be so absurd. To refute a logical argument one must first and foremost point to a flaw contained within it. The same applies to Nefsky's critique of Kagan.

[5] Especially in light of my remarks in the previous note regarding this kind of critique.

[6] See Column 236 for the discussion of the meaning of the commandment of charity and the implications brought there.

[7] See on this in the third part of the fourth notebook.

[8] From here one can understand the terminology I used in the previous column. I wrote there that the dispute between utilitarians and deontologists is a metaethical dispute. On this I received an email comment that I was not precise, since among philosophers this is actually the clearest example of an ethical dispute. I can now explain what I meant. Indeed, if this dispute revolves around the second plane, namely what the ethical act is, then it is an ethical dispute. But in the previous column I dealt with the first plane, moral motivation (or the source of validity of the moral demand), and there the dispute is metaethical (it deals with the background and foundation of the ethical obligation, not with the ethical obligation itself).

[9] I recall a discussion on the site about moral luck in which this point arose sharply and clearly, but I cannot find it at the moment.

[10] If there are ten people in the group, he has a 90% chance of being saved.

