On Rationality (Column 20)
With God's help
Today (Tuesday) I received by email a link to a short lecture (about 12 minutes) by Prof. Dan Ariely, a very popular and well-known researcher and lecturer in behavioral economics (a field combining psychology and economics) at MIT and Duke University. He has written several bestsellers, translated into dozens of languages, that deal with thinking errors in general and in the economic sphere in particular. The subject of the lecture, as he defined it, was problems (=failures) in our long-term thinking. It struck me as puzzling that such a lecture itself contained several substantial thinking errors on the lecturer's part. In the email posted on the site I commented briefly on these points, and here I would like to elaborate and discuss their broader significance.
First Example: The Chocolate Bar
Ariely gives the example of a case in which someone is presented with the choice of receiving half a chocolate bar now or a whole bar a week from now. He says that most people would prefer to get half a bar now (a bird in the hand). By contrast, if they are presented with a similar choice a year from now—that is, to receive half a bar in a year or a whole bar in a year and a week—the overwhelming majority will prefer the option with the whole bar, even though, according to him, it is exactly the same question.
When I heard this, I really did not agree. First, in both cases this seemed to me a decision that could be entirely rational; and in addition, I saw no contradiction between the decisions in the two cases. In the first case, a person is willing to give up half a bar for the sake of immediacy. What is irrational about that? The two choices before him are as follows: 1. To receive half a bar for immediate eating. 2. To receive a whole bar (=an advantage) but wait a week for it (=a disadvantage: severe loss of fluids due to a full week of drooling). I see no principled problem with choosing option 1 as such. The desire to avoid the drooling is probably worth more to those people than the enjoyment of half a bar.
So in this case, taken by itself, there is no problem. But we still have to examine whether there is a contradiction with the decision reached in the second case. In my view, here too the answer is absolutely not. What happens in the second case? There the two possibilities are these: 1. To receive half a bar in a year. 2. To receive a whole bar in a year and a week. One can explain that the suffering caused by drooling is determined by the proportion between the durations (53/52 when measured in weeks, assuming the rate of drooling is constant throughout the whole period and does not depend on its length) and not by the difference between them (=one week). One can of course also say that when the delay is so great we do not drool at all, but simply forget about the matter until the actual time of eating arrives. On that assumption there is no difference at all between the options. These explanations depend, of course, on questions of measuring pleasures (a problem in psychophysics; see Daniel Algom's fascinating book on the subject), which is not really a matter for our intellectual and rational side. But rational deliberation in such cases is supposed to take our pleasures into account, and these—what can one do—are determined by drooling and its considerable effect on our lives.
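The flip between the two cases can in fact be produced by a single, consistent valuation rule. The sketch below uses hyperbolic discounting (value = amount / (1 + k·delay)), a standard model in behavioral economics; the discount rate k = 1.5 is a hypothetical number chosen purely for illustration, not anything Ariely or the text specifies.

```python
def discounted_value(amount, delay_weeks, k=1.5):
    """Hyperbolic discounting: immediate rewards keep their full value,
    delayed rewards shrink with the delay. k is a hypothetical rate."""
    return amount / (1 + k * delay_weeks)

# Case 1: half a bar now vs. a whole bar in a week.
half_now   = discounted_value(0.5, 0)   # 0.5
whole_week = discounted_value(1.0, 1)   # 1 / 2.5 = 0.4
# half_now > whole_week: taking the half bar now is the consistent choice.

# Case 2: half a bar in 52 weeks vs. a whole bar in 53 weeks.
half_year  = discounted_value(0.5, 52)
whole_year = discounted_value(1.0, 53)
# whole_year > half_year: waiting the extra week is now the consistent choice.
```

Both decisions maximize the very same function; the one-week gap looms large near the present and becomes negligible a year out, which is exactly the ratio argument (53/52 rather than a one-week difference) made above.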
If you accept these explanations, or many other possible ones, then the gap in enjoyment between the two possibilities in the second situation is no longer necessarily equal to the gap in enjoyment between the two possibilities in the first case. On the contrary, the assumption that identifies these two gaps is perhaps possible, but certainly not necessary, and in fact it seems to me improbable. When I think about myself, it is a truly absurd comparison. So I really do not understand on what basis Ariely assumes it to be self-evident, and therefore sees every other decision as a thinking error.
The Other Examples
Ariely gives several additional examples there of failures in long-term thinking. Among other things, he mentions sending a text message while driving, or not exercising enough, or eating too much. All of these, he claims, are irrational decisions, since we ignore the long-term consequences of our actions in favor of short-term benefit. Is that unimportant text message, or eating something tasty, or laziness about exercising, worth risking one's life? Ariely assumes that if we prefer them, then we are necessarily irrational.
Again I came to the conclusion that I disagree. Here too, what we have is a not very great risk to life (despite the propaganda of the Ministry of Transport and the Or Yarok association) versus immediate enjoyment. Apparently immediacy is worth to us a small risk to life. What is at stake here is not life versus laziness, or life versus a text message, but a small chance of great harm (loss of life) versus a small but immediate and certain pleasure. From where does Ariely get the confidence that preferring the risk with the small benefit over the small chance of great harm is irrational? To compare the two, one must multiply the harm of loss of life by the probability that it will happen, and only then compare. In any case, this is certainly not a simple or necessary calculation, and I see no irrationality in a person who reaches a different decision.
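The comparison just described—a small but certain pleasure against a small probability of a great harm—is an ordinary expected-utility calculation, and its outcome depends entirely on the subjective numbers plugged in. All the figures below are hypothetical, chosen only to show that two people can run the same calculation and rationally reach opposite decisions.

```python
def expected_utility(benefit, p_harm, harm):
    """Certain benefit minus probability-weighted harm."""
    return benefit - p_harm * harm

# Person A: values the immediate text message at 2 units, weighs the harm
# of a fatal accident at 1,000,000 units, and estimates its probability
# at one in ten million (all numbers hypothetical).
a = expected_utility(2.0, 1e-7, 1_000_000)   # 2 - 0.1 = 1.9 > 0: sends it

# Person B: same valuation of the harm, but estimates the probability
# at one in a hundred thousand.
b = expected_utility(2.0, 1e-5, 1_000_000)   # 2 - 10 = -8 < 0: does not
```

Neither person is making a calculation error; they differ only in the subjective weights and probability estimates, which is precisely why the charge of irrationality needs evidence rather than assumption.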
Needless to say, if that anticipated accident does in fact happen to the person, he will regret it very much, but that is true of every risk we take. A mountain climber too, at the moment he has an accident, will regret having done it. Does that mean that the activity is irrational? According to this line of thinking, even ordinary driving is an irrational step, since even driving without sending text messages is dangerous to life to some degree. Here too there is a choice between life and convenience (arriving quickly and without effort at our destination—especially since that very trip is usually not really necessary for us).
Utility Function
To define these considerations mathematically, mathematicians customarily speak in terms of what they call a utility function. Each person determines for himself how utility is calculated from his standpoint, and on that basis he makes his decisions. Essentially, he has to decide how to quantify different pleasures (how much they are worth to him), and whether pleasure is at all a relevant utility, and in light of that make his decisions. Consider, for example, a person who buys a lottery ticket for 20 NIS, when the drawing is for a million shekels and the chance of winning is 1/100,000. His expected return is negative, since the prize amount multiplied by the chance of winning it is 10 NIS, while the price of the ticket is 20 NIS. Is such a person irrational? Not necessarily. The very hope of becoming a millionaire, even if it lasts only two days, is worth slightly more than 10 NIS to him, and therefore the purchase is worthwhile for him. Moreover, if he is a wealthy person for whom spending 10 NIS is not problematic, whereas a gain of a million shekels is certainly significant for him, then again there is nothing defective in the rationality of this decision. That is how he defines his utility function, and he is entitled to define it as he wishes. If it really does give him pleasure at such-and-such a level, then he is entirely right in his decision.
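The lottery arithmetic in the paragraph above can be made explicit. The monetary expectation is fixed by the numbers in the text; the "hope value" of 15 NIS is a hypothetical subjective term, added only to show how a personal utility function can flip the decision.

```python
ticket_price = 20          # NIS, as in the text
prize        = 1_000_000   # NIS
p_win        = 1 / 100_000

monetary_ev  = prize * p_win                 # 10 NIS, as in the text
net_monetary = monetary_ev - ticket_price    # -10 NIS: a "bad" bet in money terms

# A subjective term: what two days of hoping to become a millionaire are
# worth to this particular buyer (a hypothetical value, not from the text).
hope_value     = 15  # NIS
net_subjective = monetary_ev + hope_value - ticket_price   # +5 NIS: worthwhile
```

On the purely monetary utility function the purchase loses 10 NIS in expectation; once the buyer's own valuation of the hope enters the function, the same purchase comes out ahead—and nothing in that entitles an outside observer to call either function wrong.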
What Is Rationality?
Implicit here is the assumption that determining the utility function is not subject at all to judgment in terms of rationality or irrationality. A person chooses his utilities as he wishes, and there is no right and wrong here. Rationality can judge at most his behavior in terms of the utilities he set for himself—that is, whether he is acting correctly so as to maximize the utility as he defines it.
The same can be argued in the moral context. Relativists claim that one cannot judge a person's values, but only whether he acts in a reasonable and coherent way in order to realize them. And likewise on the logical plane. When a person builds an argument and derives a conclusion from it, he always begins with certain premises, and the conclusion is derived from them by logical inference. Adopting the premises is something very personal, and therefore many would say that it is not subject to judgment (the premises of an argument have no proof; that is why they are premises). If so, what can be judged is only the logic of the inference. The conclusion is that rationality is determined by the quality of a person's inferences, not by his premises.
Something a bit similar can be found in the responsa Igrot Moshe (Even HaEzer, part I, no. 120), where there is a halakhic discussion of how to diagnose a legally incompetent person. Rabbi Moshe Feinstein writes there as follows:
Now, aside from this leniency, one may also argue that he should not be regarded as legally insane at all merely because he considers himself the messiah—just as one who worships idolatry of wood and stone is not considered insane, even though it is certainly a great folly to believe in wood and stone. Rather, we say that he is mentally competent but wicked, and he is liable to the death penalty. So too, one who considers himself the messiah, although this is a great folly, should not be considered insane; rather, his excessive arrogance has misled him into thinking that he is fit to be the messiah. Consequently, one can argue even further that all of his foolish actions, which stem from his mistaken belief that he is the messiah and that, in his corrupt view, they amount to repairing the world, do not render him legally insane. For anything a person does on the basis of some system of thought or path that he is convinced of—even if it is great stupidity—does not make him legally insane in this respect, as is evident from idol worshipers, who performed many foolish acts, and from all the Amorite practices mentioned in Shabbat 67, which are acts of folly, yet their practitioners have the legal status of mentally competent persons.
His claim is that if a person is consistent with his premises, he is not legally incompetent, no matter what those premises are and how detached from reality they may be. If he thinks himself Napoleon or the Messiah, as long as he acts consistently and reasonably on the basis of these premises, he is not considered in Jewish law to be legally incompetent.
Tautology
One might seemingly argue even more than this. The very fact that a person decided to pay 20 NIS for the ticket proves that, for him, the hope of becoming a millionaire is worth more than 10 NIS. If so, by definition he is a rational person, since if he spent money on it then clearly it was worth it to him. The same applies to the chocolate decisions and all the other examples. According to this, Ariely's claim is not only unnecessary, but more than that: it is untenable—meaning that it is necessarily incorrect.
The problem with this argument is that it actually empties judgments about rationality of all content. Even before we have heard what that person did, the conclusion is already clear: the fellow is rational par excellence, for if he was willing to pay the price then clearly that is his utility function. And if he decided according to his utility function, then by definition he is a rational person. But if this conclusion is determined a priori, that is, without any connection to the content and character of the decision, then there is no irrational person in the world.
Ariely implicitly assumes that irrational behavior exists in the world, and now proceeds to characterize and identify it. By contrast, the critical argument I presented here assumes that there is no irrationality in the world, and therefore of course rejects Ariely's claim—but that critique simply begs the question. It is important to understand that the critique I presented above is different. I agree with Ariely that irrationality exists in the world, meaning that human beings can in principle act contrary to their utility function (by mistake, because of thinking errors), and therefore I cannot determine a priori that every person is rational. Nor do I claim that all the decision-makers he described are rational; only that he has no evidence whatsoever that they are not rational. Ariely is criticizing people's decisions here, and therefore he is the one who must prove his claim. The burden of proof is on the claimant, not on the defendant, and I argue that Ariely has not met it.
The conclusion from what I have said is that in many cases these analyses are problematic because they assume incorrect assumptions about people's considerations. What this means is that, in principle, it is very difficult to show that people are not rational, because we can always argue that non-objective differences (psychological rather than material or monetary) also matter to them and enter into their utility function, and therefore their decision was rational after all (that is, it was made in accordance with their utility function).
So Who Is an Irrational Person, Really?
What really emerges from what I have said is that an irrational person is someone who acts contrary to his own utility function (and not someone who fails to act according to the critic's utility function, as Ariely for some reason assumes). Is such a thing possible at all? After all, if he paid 20 NIS for the ticket, apparently it was worth it to him. If he sent the text message, apparently that too was worth it to him. How is an irrational person even theoretically possible? Can a person fail to act according to his own considerations? And if so, why indeed does he do so?
Moreover, even if we explicitly ask the person what he prefers, and he says that he prefers the money, or life, or the whole chocolate bar over the other option, it is still possible to say that even if he chooses the opposite he is rational. Why? Because it may be that he prefers the option of the whole chocolate bar only when the matter is placed before his eyes as a choice between two possibilities. Otherwise he suppresses that preference, and consequently his pleasures are measured differently, and his gains and losses are reckoned on that basis. The claim presented here is that by definition, if a person did something, it apparently was worth it to him—but of course that is already a much more far-reaching claim.
The other possibility is that the person is mistaken in his judgment. He measures his pleasures incorrectly, and therefore his decision does not really match his true utility function. But it is important to understand that this is by no means a simple claim, for even if he is mistaken, this is still the pleasure that stands before him now for decision; and if so, he is entirely right to take that into account and not some other pleasure that will be caused to him after the decision. What determines his decisions is his assessment now of the future pleasure, not the future pleasure itself (here I am not speaking about the fact that one should take into account the chance that it will materialize. That is another point).[1]
In Jewish law there is a rule that if a person is forced by threats to sell an object (for its full value), the sale is valid.[2] The seller cannot later claim that the object was sold under threats and without genuine consent, and ask for the transaction to be rescinded. The usual explanation of this strange rule is that if the person chose to sell the object in order to escape the threat, then escaping the threat is itself the additional value he received (beyond the money), and it is that which convinced him to sell. The money alone was not worth the object to him, and therefore from the outset he did not want to sell, but the money plus escape from the threat were worth the sale to him. This is a very problematic argument and requires further discussion, but here I only wanted to illustrate one implication of the mode of thinking I described above.
Postmodernism and Relativism
What underlies these arguments, in fact, is a highly relative conception of truth, both economic and evaluative. In effect, a person can choose his values as he wishes, and we have no way to criticize him for that. The only thing that can be judged from the outside is whether he remains consistent with his choices—that is, whether he acts optimally in order to realize his values and achieve his goals.
Interim Summary
The conclusion is that, in principle, a situation is possible in which a person acts irrationally, but it seems that in no concrete case can we infer that this is indeed the case. The reason is that the utility function is not subject to judgment in terms of rationality, just as basic premises are not subject to judgment in such terms.
Although this is not my subject here, I must say that I do not agree with the relativistic conception according to which nothing can be judged beyond logical consistency. In my view, one can judge claims and basic premises, and not only arguments (that is, inferences). Moreover, to my own discredit, I also think that someone who believes he is Napoleon, even though his name is in fact Moshe Zuchmir and he is definitely not from Corsica, most certainly suffers from a mental disorder, and there is ample basis to regard him in Jewish law as legally incompetent and not responsible for his actions. But with regard to utility functions, the situation is, in my view, different. There I do tend to adopt the relative conception, for who am I to determine what a person enjoys, and how much?!
Examples from Another Lecture
There is another lecture by Dan Ariely, a link to which I received today, this time a TED talk. He gives there somewhat better examples of irrational decisions. For example, he compares countries that are ostensibly similar in their culture and mentality (such as the Netherlands and Belgium, or Sweden and Denmark) and shows that there are dramatic differences between them in the population's willingness to donate organs. In one of them, willingness to donate is nearly 100% of the public, and in another it is around 10%. This is a completely clear difference that cries out for explanation, especially given the cultural and mental similarity between them. It turns out that the explanation is utterly banal: in the altruistic countries, that is, those in which willingness to donate is about 100%, the form at the transport office on which you sign this willingness is worded in such a way that you must fill it out if you do not wish to donate. By contrast, in the egoistic countries the form is worded so that you must fill it out if you do wish to donate. This is a choice between the same two options, and merely the way the options are presented changes people's choices. This is seemingly irrationality in the full sense; here it is already hard to explain in terms of a utility function and people's personal choices. He gives other examples there of this effect, and Daniel Kahneman of course gives many, many more (that is what he built almost his entire career on, up to the Nobel Prize).
Ariely explains this difference in a very interesting way. He argues that because this is a difficult and significant dilemma, people have no sufficiently compelling way to make a decision about it. Therefore they choose not to decide, but rather to leave the default state in place. If they have no good reason to act—they do not act. This parallels the rule known in Jewish law as "passive nonaction is preferable" (shev ve'al ta'aseh adif).
But now the question arises: is this not a rational decision? If a person has no good way to decide, he is willing to let the state decide in his place and accept its decision. What is wrong with that? The principle that passive nonaction is preferable is very sensible. To see this, think about the following version of the famous trolley problem: a train is traveling on a track, and farther ahead on the continuation of the track lies a sleeping worker. If it continues straight it will run him over. The train is now at a switching point, and I can pull a lever that will divert it to another track, but farther along that track too another worker is sleeping, and there too, if the train continues, it will run him over. What should I do in such a situation? Presumably most of us would choose to do nothing. Why? Because the assumption is that a person needs a reason in order to act. If the two options are identical, then he has no such reason, and therefore he does not act. That seems to me entirely rational.[3]
On Irrationality and Calculation Errors
The examples presented later in the lecture are better. There irrational behavior is indeed demonstrated, very much in the vein of Daniel Kahneman: human beings make different decisions when the only thing that changes is the way the dilemma is presented (and not a case of passive nonaction being preferable, as above).
The conclusion is that, surprisingly enough, there really is such a thing as irrational people. To tell the truth, it seems to me that there are quite a few such people. But it seems to me that in most cases the irrationality under discussion there is simply a calculation error. In such cases people are irrational in the same sense that a person who makes a mistake in arithmetic at the grocery store is irrational. It is simply a mistake, and therefore I am not sure that this is what we would call irrationality in an essential sense. Is a person who fails to solve a complicated equation irrational? At most, he is not especially gifted mathematically. Perfectly legitimate, is it not?[4] Here I shall stop. I recommend that readers watch the rest of the lecture and think about the issues. Somehow it seems to me that irrationality is depicted here as rather elusive, not very rational itself…
[1] This reminds me of a letter by Rabbi Shach, from which it emerges that he opposed the Entebbe operation on the grounds that it was very dangerous, and that the chance that soldiers and hostages would be harmed was high relative to the chance of success. After the operation succeeded, people came to the rabbi and claimed that he had been mistaken. The fact is that it succeeded. The rabbi writes in the letter that such reasoning is mistaken, since the decision was made on the basis of the data that were known before the operation. Even then there was only a small chance that the operation would succeed. Therefore the fact that it actually succeeded does not prove that the decision that was made was correct at the time. Incidentally, I do not entirely agree with this claim, but I will not go into that here. I brought it only to illustrate the difference between the time of decision and the results that are actually obtained afterward.
[2] "If they forced him and he sold, the sale is valid." See Bava Batra 47b–48b, Maimonides, Hilkhot Mekhira 10:1, and Shulhan Arukh, Hoshen Mishpat 205:1.
[3] One can also argue in favor of conducting a lottery over whether to divert it or not. The comparison between the option of passive nonaction and the lottery option is very interesting, but I will not go into it here.
[4] There is a small difference, however, for in the cases Ariely brings one can rather easily make people see their mistake, whereas in the case of equations this usually requires long study, and sometimes, when there is insufficient talent, it is truly impossible. And what about an equation that nobody knows how to solve, or that there is no way to solve at all (such as finding a root of a fifth-degree polynomial by radicals)? Are we all irrational with respect to that? This identification does not seem reasonable.