
A Mathematical Look at Consequentialist Ethics – Parfit (Column 252)

This is an English translation (via GPT-5.4). Read the original Hebrew version.

With God's help

In the next three columns I would like to revisit the question of the character of morality (consequentialism versus deontology). I will do so from the angle of the issue of collective actions. I will critically follow arguments that arise in this context, and will also try, through them, to shed a bit of light on the methodology and the general framework of ethical discussions in philosophy. In this column I will discuss Parfit's treatment, which as far as I know was the first to address the topic systematically. In the next column I will move on to Shelly Kagan's article, which continued and developed the discussion, and in the concluding column I will address the subject from the perspective of Jewish law. I will post all three columns over the course of the coming week, so take that into account in your plans.

Arguments in moral philosophy: intuition, emotions, and theory

In my book, Enosh KeChatzir (in the first Hasidic intermezzo, p. 205), I discussed the nature of arguments in moral philosophy, and associated them with what the author of the Tanya calls the "animal soul" (not necessarily with negative connotations. Obviously, the point is not that we are speaking about animals). The background to the discussion was his distinction between the animal soul and the divine soul. People of the divine soul are led by the intellect, and it is the intellect that dictates their behavior and their ethical decisions, bending the emotions to it as well (even when they oppose the action). In a person of the divine soul, the heart is subordinate to the head (brain, heart, liver). By contrast, among people of the animal soul, moral emotions dictate their decisions and their ethical theories. For them, the head is subordinate to the heart (heart, brain, liver).

There I illustrated this through the conduct of typical discussions in moral philosophy. A moral theory is a system that is supposed to guide us and dictate our decisions in specific cases. Ostensibly, in order to know what to do in any situation that raises a moral dilemma for us, we ought to apply the theory to that situation. We would expect the theory to determine what is right and what is not (otherwise, what is the theory for?!).

But in practice that is not what happens. Usually, the formulation of a moral theory is done by testing it against various cases (like a scientific theory against specific experiments). For example, if we want to examine the utilitarian theory, we invent a hypothetical case (or a real one) that will put it to the test. Suppose, for instance, that a group of people are caught in a snowstorm and become stranded on a mountaintop from which they have no way to descend. They have no food, and therefore they must decide whether to slaughter one person among them and eat him, or take the risk that they will all die. Let us assume that considerations of maximum utility are supposed to lead us to the conclusion that it is preferable to slaughter one and eat him. That is the conclusion that follows from the theory (one might perhaps argue about it, but this is only an example, so for present purposes I will assume that it does). But in many cases the conclusion reached in the philosophical discussion is the opposite. The philosopher argues that if this is indeed the conclusion of utilitarian theory, then we must revise the theory so that it yields the "correct" conclusion, namely that one may not slaughter anyone and it is preferable that all of them die. What is the assumption underlying this way of proceeding? How do we know that we really may not slaughter the one so that everyone else can eat? What is this "correct" conclusion? After all, what emerges from our moral theory is the opposite. Is that not the very theory that is supposed to guide us?

It seems that in discussions of this sort, the moral feeling that rebels against such a step dictates to us that this is the moral truth, that is, that this is the "correct" action, and therefore we impose it upon the theory. After a large number of such thought experiments, we build a theory that fits all the examples brought before it, and in all of them yields the "correct" results. The outcome of the discussion is a "correct" moral theory, because it accords with the "correct" decisions as we see them. But if this is indeed how things proceed, then the theory does not really guide us in our moral decision-making; on the contrary: moral feeling dictates the theory. The theory is nothing more than giving the form of a general theoretical structure to a collection of feelings regarding a collection of specific examples that have been examined. In such a mode of conduct, the arguments that arise in the course of formulating the theory do not assume a moral theory and derive conclusions from it, but assume conclusions and derive a moral theory from them.[1] Moral feeling is what dictates our conduct, even though this is done in a manner that appears, ostensibly, highly rational, logical, and consistent. It looks like an operation of the head, but in truth this theory is merely an overall structure imposed on our gut feelings. I argued there that a person of the divine soul (who is guided by the intellect), when he encounters such a case, ought to apply his rational principles and decide on that basis what is appropriate to do. His theory is what determines the decision in the specific case.[2]

Still, as I noted later on there, this description is overly simplistic. I assumed here that the conclusion that naturally arises within us when confronted with the situation is a result of emotion, but it may very well be a moral intuition, and that is not necessarily emotion. We are speaking about an intuitive sense of what is and is not morally proper, and that sense is a kind of compass that directs our moral inquiry, both in practical decisions and in constructing moral theories. The difference is crucial, because on this description the theory is a systematic and general garment for moral intuitions and not for mere emotion. In several places (such as in my books Shtei Agalot and Emet Ve-lo Yatziv) I have argued that intuition is part of the intellect and not of emotion.

In these columns we will see an example of a typical moral discussion dealing with utilitarianism. Beyond the importance of the discussion itself, I will also use it to illustrate the form of debate I described, its limitations, and some of its implications. I will follow the arguments of Derek Parfit, in the third chapter of his well-known book, Reasons and Persons,[3] and afterward I will review Shelly Kagan's article, "Do I Make a Difference?",[4] which relates to him.[5]

Introduction: utilitarianism and deontology in morality

One of the most fundamental disputes in metaethics and moral philosophy is the dispute between utilitarians (a morality of utility, consequentialism) and deontologists (a morality of intentions). The question discussed by Parfit and Kagan is what the nature of this utilitarianism is. Is it utilitarianism of a single act or of a moral rule? Let me add in advance that the utilitarian approach is consequentialist in its essence, since within it good and evil are determined according to the results.[6] Therefore, despite the distinction below, I will use the two concepts (utilitarianism and consequentialism) interchangeably.

Do I Make a Difference?

In the third chapter of his book, Parfit discusses several mathematical fallacies in ethics. His discussion focuses on the question of what a person's obligation is when his actions bring benefit when they are done as part of an entire group, but his personal act, taken in isolation, has no consequences. Is he still obligated to act that way? This is the subject of collective actions. Such a situation poses a challenge to the utilitarian view, since the consequential utility of his act is negligible or does not exist at all, but the results of the actions of the whole group can be highly significant. Such cases include voting in elections (my personal vote is not significant in any way. The chance that I will make a difference is negligible; in Israel it is something like 1 in tens of thousands, and in the U.S. much less), evading income tax (my personal tax payment is meaningless in the state treasury), air pollution (my contribution to the general pollution is negligible), consumption of food from animals (veganism and vegetarianism; again, the effects of the single person on animal suffering in the industry are negligible)[7] and the like.

In such cases, it is clear that at the collective level there is a consequentialist obligation to act in a very particular way, but the question that interests us is why the individual must obey this collective obligation. That is, it is clear that consequentialism too reaches the conclusion that one should establish a rule not to evade taxes and not to pollute the air, etc. (this is what is called 'rule consequentialism'). But it is not clear why the individual ought to obey this rule when, from his standpoint, there is no consequentialist justification for doing so (this is what is called 'act consequentialism').[8] Act consequentialism is the assessment of a person's action in light of its own results alone (utilitarianism seeks the action that yields the best results). Rule consequentialism means establishing a rule that yields the best results at the social level. Everyone agrees that the rule should be not to evade taxes and not to pollute the air, etc. But on the private plane, which discusses the results of the actions of a private individual, it is hard to see a justification for saying that a private individual may not pollute the air or evade taxes, since his acts have no result (certainly no noticeable result).

I have already dealt with this question in several places. See, for example, chapter 9 of the fourth booklet, in my article on the categorical imperative in Jewish law (see another angle in the column after next), and in columns 13 and 122. I explained there why claims such as "What would happen if everyone acted as you do?" have no logical force (because not everyone will act as I do, or I will not reveal it to them, and then their decision is independent of mine). My conclusion there was that such cases challenge the utilitarian view, because according to it, if there are no bad consequences then there is no moral prohibition.

On the other hand, you are surely wondering what the difficulty is: true, in such a situation there really is no moral obligation or prohibition, and that is that. What is the problem? The problem is that the intuition of many people is precisely that there is a moral obligation even in such a situation (an obligation to vote, a prohibition on tax evasion, a prohibition on polluting the air). Moreover, at the collective level (rule consequentialism) there is certainly a moral obligation not to pollute the air. But if every individual does not fulfill that obligation, then of course it will not be fulfilled at the collective level either.

You can already see the dilemma described in the section before last: should we force the implementation of the theory and reject those feelings in its face (divine soul), or alter (revise) the theory because of those moral feelings (seemingly animal soul, with the reservation noted above)? As I explained above, almost all moral philosophers take the second path.

Parfit and Kagan: point of departure

Parfit and Kagan discuss these questions, and essentially ask: Do I Make a Difference? My actions change nothing, so why do them (or refrain from doing them)? In such situations it is customary to think that according to the consequentialist approach there is no obligation to behave altruistically, and therefore a person may act according to his own interests (evade taxes, pollute the air, etc.).[9] In the background stands the feeling that it is self-evident that there is nevertheless something defective in such acts, and their question is how this can be justified within a consequentialist framework.

You can see that both of them begin from a point of departure that does not call consequentialism into question. It is clear to them that it is the correct ethical approach. That is their ethical theory. But to the same extent, both also take it as self-evident that there is a moral obligation on the individual person in such situations; that is, they are unwilling to give up their moral feelings, according to which even in these situations there is a prohibition on the private individual. If so, deontological solutions are not an option for them. Therefore they offer the only possible solution: they show modes of calculation according to which even on a consequentialist approach these actions involve prohibitions or obligations for the individual. That is, they show that the assumption according to which, in such contexts, the individual act has no effect, for good or ill, is incorrect. Their claim is that the individual's act does have consequences, and therefore even in such cases act consequentialism imposes upon him a moral obligation or prohibition.

Parfit

Parfit argues that the reason we do not see relevant consequences to a person's actions in such situations is nothing more than a collection of mathematical mistakes (in calculating the utility of the act). He presents five types of such mistakes, and here we will focus on three of them, which gradually approach the case of collective actions.

The first type (section 25 there)

One hundred miners are trapped in a mine during a flood. They can be brought up by an elevator that is raised by weights placed on a parallel platform. If I and three others stand on the platform, that will save all the trapped miners. If I do not join the three others, I can go and, in the meantime, save the lives of ten other people by myself. What should I do? But that is not all. There is another, fifth, person who can join the three others in my place and save the miners.

If all five stand on the platform and save the miners, then it is reasonable to say that I myself save twenty of them (one-fifth). Is that not preferable to saving ten others? Clearly not. From a consequentialist perspective, although I personally thereby save more people, the overall result is worse (the ten others will die, since the other four can save the hundred trapped miners without me). Notice: at the level of rule consequentialism this is clearly a required act (because the overall situation will certainly be better that way), but at the level of act consequentialism it is not clear why I am obligated to do it. To ground this moral obligation at the level of act consequentialism, we must define the results of my action differently. We must ask not how many people I saved by each course of action, but what the overall results will be of my individual acting versus not acting. Notice that we are speaking about the overall results of my individual act. Therefore this is not an example of a collective action like those mentioned above.

Now think about a situation in which there is no fifth person. I and three others can still save the hundred trapped miners, and thus each of us will save 25 (a quarter) of them. But now the situation is slightly different from what I described before: instead of joining my three friends, I can go and save fifty other people by myself. On the one hand, here too it is clear that this would not be right, because the overall result would be worse. In such a case, at least de facto, each of us saves all one hundred (and not only 25). And again, in order to formulate this in the terms of act consequentialism rather than rule consequentialism, we must define the consequences of my action in the way we saw before: not how many I save under each course of action, but what the difference is between the overall results of my acting and my not acting.

The conclusion is that at least with respect to these examples, one can formulate our ethical theory in the terms of act consequentialism, and there is no need to speak about rule consequentialism. This is so if we assume that the consequentialist consideration is not a comparison between the direct results of my individual act, but a comparison between the overall results of my individual act. We are still measuring the consequences of my personal action and not those of the collective.

The second type (section 26 there)

Think of the following case: Reuven and Shimon both shoot Levi and kill him. Each shot alone would have killed him. If so, each of the two shooters can say to himself that even had he not fired, Levi would have died anyway, and therefore his act did not bring about a bad result. Hence, on consequentialist grounds, the act is not forbidden. Therefore neither Reuven nor Shimon acted in a morally defective way, despite the fact that their acts somehow collectively killed Levi.

Here too the clear feeling is that such an act cannot be permitted. The bad result was caused by both of them together, although it cannot be attributed to either one of them separately. Notice that here we are no longer dealing with the collective result of an individual act, as we saw in the first type (after all, from Reuven's standpoint, and equally from Shimon's, had he not fired nothing would have changed). If so, here we are indeed not dealing with the result of the collective action, but also not with the result of an individual action. Can this moral principle be formulated in terms of act consequentialism rather than rule consequentialism? Seemingly not. Only if the pair is regarded as one unit (a collective) can one say that a consequentialist prohibition applies to the pair against shooting. But that is really rule consequentialism (a collective consideration) and not act consequentialism. Here we are already advancing toward the direction of a result of collective action, where each of the individuals performs an action that has no problematic outcome at all. This is precisely the subject of our discussion here. And yet this is not really an example of collective action, because the murder is not the sum of their actions, but the result of each of them separately. Each of them performed a complete act of murder, not a partial one.

Now think of a second case: Reuven gives Levi poison that will kill him painfully within a few minutes. A minute later, Shimon comes and kills Levi on the spot without pain. On an act-consequentialist calculation, Shimon did nothing bad, for Levi too would certainly prefer to shorten his life by a few minutes in order to avoid the suffering. But Reuven too did nothing bad, since in the end his act did not cause Levi's death and certainly not his suffering.

Not for nothing does this recall the story about Hershele and the rolls. Hershele, long may he live, walks into a bakery and orders doughnuts. After receiving them he changes his mind, returns them, and asks for rolls instead. After finishing the rolls he leaves the shop and goes on his way. The baker chases after him and demands payment. Hershele looks at him in astonishment: "What should I pay you for?" he asks. "For the rolls you ate," comes the answer. Hershele argues: "But I gave you the doughnuts in exchange for them." The baker does not give in and says, "But you did not pay for the doughnuts," but Hershele is not flustered, and immediately says: "But I did not eat those, so why should I pay for them?!"[10]

At this stage, Parfit concludes that there is no escape from defining consequentialism in terms of a collective and not of an individual action. He proposes a definition according to which an action is problematic even if it causes harm only when it comes together with additional actions. One may still see this proposal as a formulation of act consequentialism rather than rule consequentialism, since here we are determining that an individual action causes harm (except that it does so with the aid of other actions). This is not like rule consequentialism, because there what prevents the harm is the establishment of a rule for a collective and not a prohibition on an action (even a chain action, as here). I will only remark that such a definition can explain the prohibition on Reuven's act. But I doubt whether one can derive from here a prohibition on Shimon's act (Parfit claims that one can, and I do not understand why. This is basically euthanasia).[11] In the case of the two shooters, the situation is even further removed. As I already explained, there it is not a collective harm at all (each one separately would have caused the full harm), and therefore in that case it seems to me that there is no escape from rule consequentialism.

In parentheses I will add that there may perhaps be a way to prohibit Reuven's act even within the framework of act consequentialism of an individual action. For this purpose one may use the terminology of the Talmud in Bava Kamma 17b. The Talmud there deals with someone who threw a vessel from the roof, and another came along and, before it reached the ground, broke it with a stick. Seemingly here too nobody did anything. The Talmud discusses whether perhaps one can obligate the first (or exempt the second) on the grounds that the vessel is considered broken from the moment it left the roof. That is, throwing it from the roof immediately transformed it into a broken vessel. In the discussion on 26b there, they speak about throwing a baby from the roof and someone else coming and killing him with a sword. There too the possibility arises of viewing the baby as dead already from the moment he set out on his way from the roof. That is already very similar to our case. Therefore one can say in our case as well that the person who gave Levi the poison in fact killed him immediately. He is already considered dead from the moment he drank it. I will only note that in the Talmud this reasoning is not really accepted. At least according to the conventional explanation, it serves at most to exempt the second, but it is not enough to obligate the first. But this is a possible line of reasoning for prohibiting Reuven's act on the basis of act consequentialism regarding a single action. As for Shimon's act, however, even here it seems to me there is no room to prohibit it on such grounds.

To support the thesis that sometimes consequentialism must relate to a chain of acts (by different people) and not to the act of a single person, he brings dilemmas of the Prisoner's Dilemma type, in which the optimal benefit is achieved by coordination between the players and not by the calculation of a single player. That is, even if each of the two players makes a calculation of the individually optimal outcome from his own standpoint, the result does not give us the optimal benefit that can be obtained from the situation. To attain that, a coalition is required (without direct coordination). In column 122 I dealt precisely with such situations, and there I pointed out that such a situation ties consequentialism to Kantian deontology. True, there the solution is deontological and not consequentialist, and we shall return to this point later.
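The coordination point can be made concrete with a toy Prisoner's Dilemma. This is a minimal Python sketch; the payoff numbers are my own illustrative choice, not taken from Parfit or from the column.

```python
# Classic Prisoner's Dilemma payoff table (illustrative numbers of my own).
# Each entry is (payoff to A, payoff to B); higher is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_reply(opponent_move):
    """Player A's individually optimal move against a fixed move by B."""
    return max(["cooperate", "defect"],
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Defecting is the better reply whatever the other player does...
print(best_reply("cooperate"), best_reply("defect"))  # defect defect

# ...so individually rational play yields (defect, defect), whose total
# payoff is worse than what coordinated cooperation would have produced:
print(sum(PAYOFFS[("defect", "defect")]),            # 2
      sum(PAYOFFS[("cooperate", "cooperate")]))      # 6
```

Defection strictly dominates for each player taken alone, yet the dominant-strategy outcome is collectively worse than cooperation; this is exactly the gap between the individual calculation and the coordinated one.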

The third type (section 27 there)

Think of situations in which each player separately causes negligible harm, but the combination of them all causes significant harm. This is the issue of collective actions, like the cases of air pollution, elections, or tax evasion, which are the subject of our discussion.[12] Parfit argues that here too three further mathematical mistakes are common. Here I will describe only the most basic of them (the other two are very similar).

The first case is situations in which the single action has a very significant result with a very low probability. This is precisely the example of elections.[13] The vote of a single citizen has no effect at all unless his vote completes exactly the number of votes received by the party he prefers to an integer number of seats (that is, that the total number of its voters is divisible by the number of votes per seat). The chance of this is of course negligible (in Israel it is something like 1 in forty thousand). Think of U.S. presidential elections, where there are two candidates. The vote of a single citizen has essentially no chance of affecting the result. The only situation in which it has an effect is if, without him, an exact tie would result. The chance of that is tiny, of course. Parfit argues that the accepted estimate is on the order of about 1 in one hundred million (roughly half of America's voters). In my opinion he is mistaken, and the chance is much greater,[14] so for present purposes let us assume it is 1 in ten million. In any event, this is of course much lower than the chance I described above of affecting a party's strength in elections in Israel. Seemingly, the expected utility from voting in elections, certainly in the U.S., is very low.

But actually, it is not. After all, the utility of his vote is the expected value of the utility (the expected, or average, utility). And this is nothing but the probability of influencing the result multiplied by the magnitude of the utility if that probability is realized. Suppose our voter supports candidate A (he is preferable in his eyes to candidate B). The reason A is preferable in his eyes is that if he is elected, the fate of many Americans will be significantly better; and even if there are some who lose out, that is negligible compared to the benefit that will accrue to the vast majority of the others. For the sake of simplicity, let us ignore the effect on non-Americans (which of course also exists) and formulate this utility in terms of GDP (assuming for present purposes that ideological gains, the national mood, morality, corruption, and the like are all included in it. This is an economic expression of the value of the total gains from all the relevant considerations): the election of A will yield a significant increase in GDP, let us say, by a cautious estimate, 100 dollars per American per year, that is, 500 dollars over five years (let us assume this is one presidential term). The total utility that results from that voter casting a ballot is therefore given by the following formula: P × N × m, where N is the number of Americans (not only the voters, but the entire population who will benefit), m is the gain per American, and P is the probability that this will happen. As noted, P is about twenty divided by the number of Americans (1 in ten million). The result is on the order of 20m. Assuming that m is about 500 dollars per person, the expected utility of his vote is around ten thousand dollars.
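The calculation can be checked in a few lines. This sketch uses the column's own illustrative numbers, which are assumptions for the sake of the argument rather than real data.

```python
# Expected utility of a single vote, using the illustrative numbers above
# (assumptions from the discussion, not real data).
N = 200_000_000      # Americans who benefit (the whole population, not only voters)
m = 500              # gain per American over one presidential term, in dollars
P = 1 / 10_000_000   # assumed chance that one vote decides the election (about 20 / N)

expected_utility = P * N * m
print(expected_utility)  # ≈ 10,000 dollars
```

Since P scales roughly like 1/N while the total benefit scales like N × m, the two factors cancel and the expectation stays substantial even though P itself is minuscule.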

Does that seem trivial to you? If we assume that the cost of going to vote is negligible, then the expected value of the utility (the expected gain) for each voter is very large (certainly not negligible), and for our purposes it is enough that it be positive in order to establish a moral obligation to do so. In such a situation there is no moral dilemma about whether or not to go vote. The same is true, of course, with respect to problems of air pollution (the small harm that each isolated action of one of us causes to every other citizen must be multiplied by the number of citizens harmed).

In other words, Parfit argues that our mistake regarding these cases is that we think we may ignore events of low probability, but we forget that those probabilities have to be multiplied by the number of citizens who gain that small benefit. We ignore the significance of that utility for all citizens and look only at the low probability that we will obtain it. Parfit claims that this is analogous, in his view, to nuclear engineers who would ignore a small probability of damage in a reactor. Since the expected damage is enormous, they may not ignore even small probabilities. His conclusion is that even on an act-consequentialist calculation one ought to go vote and it is forbidden to pollute the air. The problem does not exist, and the utilitarian view is not challenged by this type of problem.

Well, actually Parfit is the one who is wrong here, and badly. First, even according to his own approach, the expected gain must be such that it exceeds the cost of the action itself. There will be quite a few cases in which this will not yield a positive result (the price of vegetarian or vegan food, and add to that the suffering involved in eating it, are not necessarily lower than the not-so-great utility that my individual action will bring to all animals). In the case of elections, and perhaps in several other cases, this may indeed be true, but there are cases in which his solution will not work (Parfit somewhat understates the aspect of cost and focuses on magnifying the utility). But as we shall now see, contrary to Parfit's claim, even in cases where his solution seems at first glance applicable (such as elections), it is not really correct. Even in such cases one may ignore very small probabilities, and the expected-value calculation is not the relevant consideration.

Before I explain this, I need to present a charming paradox that is usually taught in courses given to stock-market investors on risk management, the St. Petersburg paradox.[15] This paradox provides a simple example of a problem in economic valuation, in which the criterion of expected value leads to blatantly incorrect results. But before that I will preface it with another amusing anecdote, this time from the school of Blaise Pascal.

Pascal's wager

The French philosopher and mathematician Blaise Pascal offered a probabilistic consideration that was supposed to lead every person, regardless of his worldview, to believe in God and observe the commandments. Think of an atheist who claims that, in his opinion, the probability that God exists is negligible, and therefore he does not believe in Him and also sees no point in serving Him and observing His commandments. When he is threatened with the punishments he will receive because of the transgressions he has committed, he says that he is not moved by this, because the probability that it will happen is very small.

Against this atheist, Pascal made the following argument: if God exists, then the expected gain from observing the commandments is immense (endless pleasures in the World to Come over an infinite duration, eternity), and the harm (that is, the torments of hell) in not observing them is likewise immense and extends along the entire time axis. By contrast, if God does not exist, then observing the commandments brings fairly little harm (it is simply unnecessary, but not truly terrible), and committing transgressions does not bring a very significant benefit. The harm or benefit is fairly small. Now let us assume, for the sake of the discussion, that same atheist's assumption, according to which the probability that God exists is negligible.[16] When you calculate the expected gain of faith and observing the commandments in this picture, you have to multiply the probability (which, according to the atheist, is very small) that God exists by the enormous gain that observing the commandments will bring him, and subtract from that the product of the probability that God does not exist and the (minor) harm that such observance will bring him. When you calculate the expected gain of not observing the commandments, you have to multiply the probability that God does not exist by the pleasure involved in the transgressions, and subtract from that the probability that God exists multiplied by the dreadful suffering that will be imposed on him because of the transgressions. Pascal claims that even on the atheist's assumption (that the probability of God's existence is very small), it turns out that the expected gain of observing the commandments is incomparably greater than the expected gain of not observing them. So even under these assumptions, it is far preferable for him to observe the commandments.
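The wager's arithmetic can be shown in a toy computation. All the numbers here are placeholders of my own, and the "infinite" afterlife payoff is replaced by a very large finite stand-in; only the orders of magnitude matter to the argument.

```python
# Toy arithmetic for Pascal's wager (all numbers are illustrative placeholders).
p = 1e-9                        # the atheist's tiny probability that God exists
AFTERLIFE = 1e15                # huge finite stand-in for the infinite reward/punishment
cost_of_observance = 10.0       # modest loss if God does not exist
gain_from_transgression = 10.0  # modest gain if God does not exist

ev_observe = p * AFTERLIFE - (1 - p) * cost_of_observance
ev_transgress = (1 - p) * gain_from_transgression - p * AFTERLIFE

# Even at p = one in a billion, observance dominates by expected value:
print(ev_observe > ev_transgress)  # True
```

As long as the afterlife term is large enough relative to 1/p, the tiny probability is swamped, which is precisely Pascal's point, and precisely where the St. Petersburg paradox below will locate the trouble.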

There are quite a few objections to this wager, but in my opinion the main one is missed by almost everyone who has dealt with it (at least those known to me). The surprising thing is that it rests on a probabilistic error, and Pascal, who besides being a very devout Christian was also one of the fathers of probability theory, should not have stumbled over it. At the end of the second chapter of my book, Elohim Mesachek BeKubiyot, I explain this mistake by means of the St. Petersburg paradox.

The St. Petersburg paradox

Think of a lottery that is offered to you, which proceeds as follows: a coin is tossed. If the result is heads, you receive 2 NIS and the process stops. If tails comes up, you receive nothing and the coin is tossed again. If heads now comes up, you receive 4 NIS and the process stops. If tails comes up, you receive nothing and the coin is tossed again. Thus the process continues, as the sums rise each time according to powers of 2. How much money would you pay for a ticket to such a lottery? Seemingly, we ought to calculate the expected value (the expected gain), and that is what should determine the profitability of the transaction. If I am offered a ticket at a price lower than the expected gain, it is worth buying it, and if not, then not.

But a simple calculation shows that the expected value of this process is infinite. I have a 1/2 chance of earning 2 NIS, a 1/4 chance of earning 4 NIS, and a 1/8 chance of earning 8 NIS, and so on. The expectation is the weighted sum of all these possibilities, namely:

E = (1/2)·2 + (1/4)·4 + (1/8)·8 + … = 1 + 1 + 1 + … → ∞

We have obtained that the expected value here is infinite. Surprising as that may be, this is indeed the expected gain in such a lottery.

Now I ask a practical question: in practice, how much would you be willing to pay for a ticket to participate in this lottery? Anyone who does the math for himself and understands that the chance of getting more than a few dozen shekels is negligible (the chance of receiving more than 32 NIS is about 3%) will, I assume, agree that a rational person would not pay more than 100 NIS for such a ticket.
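A quick simulation makes the gap vivid (a sketch only; the game rules are as described above, and the 3% figure is the exact probability 1/32 of winning at least 64 NIS):

```python
import random

def st_petersburg_payout(rng: random.Random) -> int:
    """Toss a fair coin until heads; the payout doubles with each tails."""
    payout = 2
    while rng.random() < 0.5:  # tails: no payout yet, double and toss again
        payout *= 2
    return payout

rng = random.Random(0)
payouts = sorted(st_petersburg_payout(rng) for _ in range(100_000))

# The chance of winning MORE than 32 NIS (i.e. at least 64) is 1/32, about 3%:
frac_large = sum(p > 32 for p in payouts) / len(payouts)
print(f"fraction of games paying more than 32 NIS: {frac_large:.3f}")

# The median single-game payout stays tiny, even though the
# theoretical expectation of the game is infinite:
print("median payout:", payouts[len(payouts) // 2], "NIS")
```

The sample mean of such a run keeps creeping upward as you add games (that is the infinite expectation showing itself), but the typical single game pays a few shekels, which is why no rational buyer prices the ticket by its expectation.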

Back to Parfit

So where is the bug? Very simple. This calculation yields the long-run average gain per lottery, that is, the gain per lottery under the assumption that we run many lotteries (indeed infinitely many). But in a single lottery there is almost no chance of receiving a gain above 100 NIS. Someone who really loves risk might pay 1,000 NIS here, at the very most. No one will pay a billion NIS here, even though that too is far lower than the expected value of the gain (which is infinite).

Notice that I have now replaced the expression "expected gain" with the expression "expected value of the gain." The expectation is a mathematical concept that refers to a situation in which one conducts countless lotteries (that is, countless games, each of which contains a chain of tosses until it stops). But in situations in which the result of the expected-value calculation is a sum whose probability of actually being obtained is very, very small (as in our case), it is incorrect to see the expectation as a measure of the expected gain, that is, as the real value of the ticket. To see this more simply, think of another lottery: I offer you a ticket to a lottery in which a biased coin is tossed. The chance that it will land heads is 1 in a million, but the gain from heads is a million billion (that is, 10^15) dollars. How much would you pay for such a ticket? Seemingly, a billion dollars, since that is the expected value of that lottery.[17] But in such a situation, the chance that you will lose the billion dollars is, for all practical purposes, 1. That is, you have essentially thrown that billion dollars into the trash.
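To put numbers on this biased-coin lottery: take the chance of heads as one in a million, and read the prize as a million billion (10^15) dollars, so that the ticket's expected value comes out to the billion dollars mentioned. A few lines then show how certain the loss is:

```python
# Numbers follow the biased-coin lottery: heads with probability one in a
# million; the prize is read as 10**15 dollars so that the expected value
# of a ticket is a billion dollars (an assumed reading of the figures).
p_win = 1e-6
prize = 1e15
expected_value = p_win * prize

print(f"expected value of one ticket: {expected_value:,.0f} dollars")

# Yet the chance of winning nothing stays essentially 1, even over many plays:
for n_plays in (1, 100, 10_000):
    p_nothing = (1 - p_win) ** n_plays
    print(f"{n_plays:>6} plays: P(win nothing) = {p_nothing:.6f}")
```

Even ten thousand plays leave you with roughly a 99% chance of never seeing the prize; a single play is, for all practical purposes, a guaranteed loss of whatever you paid.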

This is, in my opinion, the main reason Pascal's argument is not correct. Even if the expected value of the gain from observing the commandments is enormous, if indeed the probability that we will receive it is very small (as the atheist assumes), then this probabilistic consideration should not convince him to do so. This is indeed the expected value of the lottery, but definitely not the expected gain from it. What determines the value of the ticket is the expected gain and not the expected value of the gain, and these two are not always identical (on the contrary, usually they are not).

Let us now return to Parfit's calculation. He assumes that the utility of voting in elections is the expected value of the gain as calculated above. But this is a mistake. In this case, the probability that my act will have an effect is so small that although the utility in the event that this occurs is enormous, it is still a probability that should be ignored, exactly as in the biased-coin lottery I described above. In our present formulation, I would say that the ticket value of participation in elections is not thousands of dollars, and in fact not even a single cent. The expected value of the gain from voting is large (about ten thousand dollars), but the expected gain is negligible because, in practice, the pivotal event will never occur. Even if there is such a utility, it has some negligible value, which is significantly lower than the cost of walking to the polling station and the time it takes to vote. The conclusion is that Parfit was mistaken: on an act-consequentialist calculation (an individual calculation), there is no obligation to vote in elections.[18]
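The voting case has exactly the same shape. The figures below are illustrative assumptions of mine, chosen only so that the expected value matches the roughly ten-thousand-dollar figure discussed above; nothing hangs on their precise values:

```python
# Illustrative numbers (assumptions), calibrated so the expected value
# of voting comes out near the ~$10,000 figure in the discussion.
p_pivotal = 1e-7       # assumed chance that one vote decides the election
total_benefit = 1e11   # assumed total social benefit of the better outcome
voting_cost = 20.0     # assumed cost of voting in time and travel, in dollars

expected_value = p_pivotal * total_benefit
print(f"expected value of the gain from voting: {expected_value:,.0f} dollars")

# The 'expected gain' in the author's sense is what a single voter will in
# practice see: with probability 1 - p_pivotal the vote changes nothing,
# so the realistic return is zero, which is below the cost of voting.
print(f"P(the vote changes nothing) = {1 - p_pivotal:.7f}")
print(f"cost of voting: {voting_cost} dollars")
```

The expected value dwarfs the cost, yet the event that would deliver it is one the voter will, for all practical purposes, never experience; pricing the "ticket" by its expectation is the same mistake as in the St. Petersburg lottery.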

To conclude this discussion, I will only note that there is an approach in ethics according to which the utility relevant to the moral calculation is only actual utility, and not a statistical expectation that expresses the probability of obtaining utility. According to this approach, of course, Parfit's entire calculation is incorrect, because the actual utility is entirely negligible. But what I have shown here is much stronger: my claim is that in cases of collective actions (which are what he is talking about), all utilitarians, including those who advocate utility determined statistically, should agree to the criterion of actual utility.

Interim summary

So where do we stand? It seems that there is no escape from referring to the utility of the whole and not of the act. True, the feeling (the emotion or moral intuition) is that there is an obligation to vote, but an act-consequentialist consideration (the theory) cannot explain this. Rule consequentialism is not really consequentialism, because as I explained, the individual person sees no consequentialist reason to act according to the rule. Later we will see that rule consequentialism is really covert deontology (or not all that covert).

If so, we now have before us two possibilities: either to reject the utilitarian approach (animal soul), or to reject the feeling in the face of the utilitarian theory (divine soul). Parfit's mistaken calculation is the result of his commitment to both horns of this dilemma. Because of his predicament, that is, the split he feels between the divine soul (to adopt the theory and not the specific feelings) and the animal soul (to adopt the feelings and give up, or refine, the theory), he tries to square the circle. But as we have seen here, not surprisingly, he does not really succeed. The conclusion, as I also showed in column 122, is that there is no way to ground moral obligations on consequentialist considerations. The categorical imperative is the only way to ground them, and surprisingly it is also the only one that yields results (despite not being consequentialist, and as I showed there, precisely because of that).

In the next column we will move on to Kagan's article.

[1] See in the first part of the fourth booklet my distinction between a "philosophical" argument (which assumes premises and derives conclusions from them) and a "theological" argument (which assumes conclusions and derives premises from them).

[2] This of course relates to the question whether morality resides in the head (the intellect) or in the heart (emotion), and my view, as is known, is that it is entirely in the head. See on this in column 86 and elsewhere.

[3] Clarendon Press, Oxford, 1984. See there from p. 65 of the file (p. 67 in the book) onward.

[4] Philosophy & Public Affairs, Vol. 39, No. 2 (Spring 2011), pp. 105-141.

[5] Again, thanks to Noam Oren, who sent me these two sources as well as a paper he wrote on the subject with additional sources.

[6] The relationship between these two is not simple. Utilitarianism is one kind of consequentialism (as are egoism, hedonism, and others).

[7] Though these are somewhat different cases, because even a single person achieves a result regarding individual chickens that will not suffer. Here the result is not only collective, although of course a macroscopic effect can be achieved only through the actions of a large collective.

[8] Again, thanks to Noam Oren for this conceptual clarification.

[9] At the beginning of the third chapter, Parfit argues that the Kantian doctrine too will not withstand this test. If every individual is supposed to reach the conclusion that his acts are of no benefit and he may do as he likes, then the conclusion is that everyone ought to decide that way. That is, this too is supposed to be the general law, and therefore even according to Kant there is no moral obligation. His claim is that Kant's deontological doctrine will not help us here either. In my opinion he is mistaken, for this is precisely the character of these cases: the overall utility is significant even though each act separately is devoid of consequences. I will return to this later.

[10] Think of a situation in which the two actions are performed by the same person: he both poisoned Levi and killed him immediately a few minutes later. This is very similar to the case of Hershele. Surprisingly, in the commentators (Bava Kamma 17b, in the discussion that will be cited momentarily) there is a discussion of exactly such a case, regarding someone who threw a vessel from the roof and he himself ran down and smashed it a moment before it hit the ground. A similar example arises in the commentators regarding someone who sets a dog upon another man's cow. If Reuven sets Shimon's dog upon Levi's cow, there is an opinion that both are exempt (Reuven, because the dog is not his, and Shimon, because he did not set it on. He is under compulsion). Some of the later authorities (Acharonim) wrote that this is the law even if Reuven set his own dog upon Levi's cow. In general, the law in such situations is not trivial, and the consideration is exactly the one we presented above. I explained the logic of this halakhic approach in two articles. See, for example, here.

[11] Parfit, consistently with his approach, now asks us to consider a third case: Reuven gives Levi poison that will cause him a painful death within a few minutes. Shimon realizes that he can save Judah's life if he kills Levi by an immediate death without suffering. His discussion is whether here Shimon's act may be permitted, on the assumption that in the previous cases his act was forbidden. Here there is an additional benefit (for Judah) that can justify the act. But according to my position above, this discussion is unnecessary. Even in the previous cases (when there was no benefit for Judah), there is no reason to prohibit Shimon's act on consequentialist grounds, and therefore there is no need to say that here his act is justified (at least within the consequentialist picture, and perhaps altogether).

[12] Voting in elections has a different character, because there an individual voter has no effect at all (except with negligible probability). In tax evasion and air pollution there is a tiny effect, and when one sums all the players together it accumulates into a significant result.

[13] See on this in column 122 and in column 210.

[14] As I understand it, the denominator should be much smaller, because it ought to be roughly half the number of actual voters among the adult citizens entitled to vote. And even that is only if the whole set of possible results is uniformly distributed (that is, if the probability that A receives one vote or ten votes is equal to the probability that he receives one hundred million. That is not plausible).

[15] It seems to me that one should say Saint Petersburg. This paradox is mentioned in column 20 and column 210.

[16] Personally, I do not understand how one can arrive at such a conclusion, but that is not our subject here.

[17] In a state lottery or Lotto one pays to participate in drawings of this sort, but the amount a ticket costs is a few shekels. Such a sum is reasonable to pay even for the St. Petersburg lottery. See in column 20 the discussion of utility functions.

[18] This is somewhat different from the question of air pollution, because there each of us pollutes a little and the accumulation of all of us is significant. There we are not dealing with a probability of influence but with an accumulation of influences. That is what Parfit's later discussion there deals with (imperceptible harms), and Kagan too, whose remarks will be discussed in the next column, mainly emphasizes that difference. See my remarks there.
