On Technological Phobias and AI Rulings (Column 590)
In recent years, the subject of artificial intelligence and thinking machines has been gaining tremendous momentum. In recent months ChatGPT was released to the public (along with counterparts now being launched by various companies worldwide), and it is taking the world by storm. It is already replacing people in their jobs, writing programs or modules for various needs, and in many cases at least assisting and advising. True, it can also mislead, and there have already been embarrassing blunders on this front. In any case, those in the know claim the day is not far off when lower-tier jobs become entirely redundant, and it seems the turn of higher-level workers will come not long after. And we haven't even mentioned school and university papers being written by these programs.
To understand what this means for the lay user (not a software professional who wants AI to write code on their behalf), think about Google Search. Until now, such a search yielded information in response to our questions, but we still had to choose from countless results the ones that were relevant (Google’s ordering of them is not very reliable or accurate), and extract from them the information we needed—if it was there at all. By contrast, in the age of AI, the software does all of that for us. Suppose you want an overview of the trolley problem in ethics or psychology. In the past you’d have to search for sources and materials on Google or in Wikipedia (if there were a relevant entry), then choose what’s relevant, organize the information, and put it in order. At the end of the process you might get an answer to your question. In contrast, now you simply ask ChatGPT for its take on the trolley problem and immediately receive a structured survey of the different positions—and if you’re very polite you’ll also get its own view on the issue, argued quite impressively.
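For readers who want to see how little this "just ask" workflow involves, here is a minimal sketch using the OpenAI Python client; the model name and the prompt are illustrative assumptions, not anything the column itself specifies.

```python
# A minimal sketch of "just ask" (assumes the openai package and an
# OPENAI_API_KEY environment variable; the model name is illustrative).
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model would do
    messages=[{
        "role": "user",
        "content": "Give me a structured overview of the trolley problem.",
    }],
)
print(resp.choices[0].message.content)
```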
Moreover, if I understand correctly, our friend Wikipedia is on its way out through this very path. Instead of searching through a list of entries that were edited and written with blood, sweat, and tears by various human volunteers, you will soon be able to ask the software any question you like (not just pick one from a fixed set of entries) and receive within seconds an up-to-date encyclopedic entry about it.[1] It can, in fact, write an encyclopedia with infinite entries. By the way, there’s little point in saving this output, because when you want the same information again in the future, you can simply send the same question and immediately get an entry updated to that moment. There is no need to accumulate the information itself; the system’s “intelligence” retains it potentially. The software is the encyclopedia.
No wonder similar questions are arising in the religious sphere. One can write articles and responsa by means of AI. In fact, it could now also replace the decisor (posek). If you ask it a halakhic question, it will answer and give you a ruling or a survey of opinions about it. Indeed, in the religious world too, discussions are beginning to surface on these issues that, as noted, are already upon us (incidentally, you can also hand these very discussions to ChatGPT—see below). Here I wish to discuss AI and its implications, also regarding halakhic rulings. But before I enter those questions, I will devote a few paragraphs to the issue of phobias.
Two Types of Technology Phobias
Many people suffer from phobias regarding technology. These can be divided into two types: the first kind (people like me) recoil from technology out of a lack of skill in using it. The second kind recoil from the very act of handing human tasks over to technology. This reluctance appears ethical rather than psychological, unlike the former; but I will argue that at its root it, too, is psychological.
Consider, for example, attitudes toward autonomous cars and autonomous weapons. In the IDF there were once terms like MALAT (unmanned aircraft) and Ktav"m (unmanned aerial vehicle), among others. These expressions have since left the lexicon, and today one speaks of a Ktam"m, that is, a "remotely piloted aircraft." Why? Because when people hear about a vehicle operating autonomously, with no pilot, they break out in a sweat. Decisions of grave consequence being made by a machine, without human judgment, genuinely frighten them. So we are reassured that it is a manned vehicle—only the pilot is far away. Don't worry: a human pilot is still making the decisions.
This is a central issue that troubles those engaged in regulating autonomous vehicles. One concern is automatic decision-making by the car itself. It may find itself in situations like the trolley problem and make decisions affecting human life in an entirely mechanical way (and without the sense of responsibility we feel regarding such decisions). Who knows what it might do there?! See, for example, a survey here and here (and in halakhah here, as well as in the article by Rabbi Yisrael Meir Malka mentioned there in the preliminary note)[2]. On public anxiety about these dilemmas, see for example here.
The same applies to autonomous weapons systems (see an overview here). On this site, about six years ago I conducted a fairly long discussion with someone who works on this topic and expressed very strong concerns about autonomous ethical decision-making. I argued there that, in my view, his approach to the issue is misguided. I see no essential ethical difference between autonomous weapons systems and the automation of weaponry from before the era of AI.
I categorize these phenomena as technology phobias of the second type. In Column 186 I address this and explain why, in my opinion, such concerns are unfounded. Think of an autonomous car that gets into some complex situation requiring decisions about human life. It has a clear advantage over humans in the speed of thought and reaction, and its outputs (= decisions) are the result of the information with which we’ve fed it. Note that humans can also make mistakes, and in fact will make many more mistakes and react far more slowly than an autonomous car. Every autonomous-car accident triggers broad debate and controversy, delaying its approval and amplifying fears of it, but meanwhile people forget that human beings make a thousand times more errors and accidents. To my knowledge, autonomous cars are already orders of magnitude safer (or, if you like, less dangerous) than human drivers.
But the core fear is not of the car’s errors but of its making ethically wrong decisions. The assumption is that a human driver, in principle, makes the ethically correct decisions, even if they might respond too slowly or make an error in judgment. But with an autonomous car, the “correct” decision itself may be unethical—not due to a glitch (i.e., error), but because it doesn’t “think” correctly in moral terms.
In cases where there is no agreed-upon correct answer (as with the trolley problem, which people love to use to probe the issue), this fear is simply unintelligible. In such cases there is no single correct decision. Among human drivers, some will choose to kill one person, and others will choose to let several people die rather than kill one person by their own hand. Since there are ethical disagreements among humans on this matter, in practice different human drivers will make different decisions. So why should the decision of an autonomous car be treated any differently? In what sense is it more dangerous? At most, its decisions will be the product of the programmer's ethical reasoning. If someone dislikes that, let the legislature or an ethics committee set the rules and then establish a binding standard for autonomous-car software (i.e., train the cars accordingly). This will surely yield much better outcomes than a human driver. In these cases, the fear seems to me patently absurd.
But even where there is a correct ethical answer, and the concern is that the car's programming will not produce the right decision in a specific case, the question becomes: is avoiding this risk worth all the deaths and injuries that many more years of human driving will cost? Does that path kill fewer people? Even if an autonomous car might decide wrongly in certain scenarios, the number of fatalities is incomparable to the toll of human drivers' mistakes. I see no chance that autonomous cars will be more dangerous overall. Ultimately, I see no difference between an error by a human driver and an error by the car's programmer. And once you discover such an error in an autonomous car, you can fix the software almost immediately (give it another training cycle), something that is hardly possible with human drivers. They go on making the same mistakes again and again, costing us hundreds of dead and thousands of injured every year in Israel alone.
This strange, puzzling picture points to people’s phobia of “thinking” machines. This phobia (of the second kind) appears ethical rather than psychological, but from the brief account I’ve offered here, that’s not really correct. To my mind, it lacks ethical and consequentialist justification, and thus belongs in the psychological realm. People are simply afraid to hand control over their fate and lives to machines—even if doing so would dramatically improve their survival and sharply reduce their risk of harm. This phobia costs tens or hundreds of thousands of lives worldwide, yet it is very hard for us to overcome it.
Artificial Intelligence in Religious Talks (Preaching)
AI is also entering the religious world. It starts with sermons in church, as can be seen here. I assume talks in synagogues and the various vorts you hear day in and day out at religious events are already well within AI’s grasp. It has no problem generating countless such texts at the low, banal level typical of the genre. But that is still far from an in-depth shiur and from a text with a high analytic and conceptual level, and I think you generally won’t find creative, unexpected ideas and angles there. Yet that, too, is probably only a matter of time. We are certainly not far from AI that can also produce unexpected texts at a good analytic level (here and there we already see such things today).
The article ends with the following paragraph:
However, the experimental service also highlighted the limits of AI use in church, or in religion. There was no real interaction between the faithful and the chatbot, which was unable to respond to laughter or other reactions of congregants the way a human priest could. “The priest is in the community, he lives with them, he performs funerals, he has known them since birth,” says Simmerlein. “AI can’t do that. It doesn’t know the community.”
They miss human interaction with a flesh-and-blood priest, and the same holds for rabbis. But I think that, too, is only a matter of time. There is no reason AI cannot have emotional intelligence and respond in a human way to various emotional gestures and situations, much as a person would. The film "Her" offered a decent example of such a machine, one that today is well within reach.
Between Halakhah and Aggadah (= Preaching)
What characterizes divrei Torah, talks, and various vorts is that there is no “right” and “wrong” there. These are ideas meant to stir feeling, present various connections, and at best be creative. It is very hard to speak of a vort as right or wrong (see in Column 52 the difference between derush and pilpul). Tasks like these a “thinking machine” can already do quite easily (with a limited degree of creativity, akin to most human speakers in these genres). But in halakhic ruling there are arguments assessed in terms of right and wrong. I don’t mean the practical conclusion (since even in practical halakhah there are differing views and modes of reasoning; in most cases there is no single truth), but the arguments marshaled to reach it. Such arguments should be reasonable and hold water. The task of producing analogies that withstand the test of reasonable logic and common sense (in contrast to vorts) is not easy, and I estimate AI will find this much harder. It will likely be able to find plausible precedents and apply principles fed into it (perform deductions). But supplying creative analogies and analysis beyond the trivial will be rather hard for it.
I haven't run this experiment, but I think it could be a decent indicator of the differences I keep pointing to between aggadah and halakhah. Already in the second book of the trilogy I described an experiment with a random text generator (far more primitive than ChatGPT) and showed how it succeeds in producing a generic, convincing text in the style of Rav Kook, and likewise for any other thinker. But anyone can understand that such a generator would have had no chance of composing an intricate analytic "campaign" in the manner of R. Akiva Eiger or of Ketzot HaChoshen. We have progressed since then, and today's AI is far more advanced than that generator, so we are clearly closer to a halakhic-answer generator; but we are not quite there yet. Again, it seems to me only a matter of time.
Not long ago I saw a short article quoting Rabbi Yaakov Ariel on halakhic rulings by AI:
“First of all, as a practical halakhah: artificial intelligence cannot issue halakhic rulings; halakhah is ruled by common sense. A computer’s ‘reason’ is not a human being’s reason; even if it is very developed and sophisticated, it is not human thought. Halakhah cannot be ruled by computer thought but only by human thought,” said Rabbi Ariel.
He added: "If AI invents a halakhic ruling for us, that's a 'rebbe-leh' [a little rabbi] that isn't worth anything. I'm not saying you can't read what it says, but to rule based on it? No. If the computer gives you a sevara (a line of reasoning), and you propose the opposite sevara, yours may be more correct. Don't rely on it. The discussion is new, and I haven't yet seen many results of this matter."
“The computer is not authorized to rule halakhah—whether something is kosher or not. Rabbi Menashe Klein, author of Mishneh Halakhot, says it is preferable to have a Torah scroll checked by a person who erred than by a computer that did not err,” Rabbi Ariel concluded.
It's unclear whether he means to claim that there is a concern the ruling will be mistaken, or to assert that a computer lacks, in principle, the authority to rule halakhah. The concern that a ruling might be incorrect (i.e., not in line with human "common sense") is an almost factual matter. Of course it depends on the quality of the machine, but today machines are already approaching the capabilities required to rule halakhah. That can be tested by experiment: let the machine rule in various cases and have the results evaluated by poskim. For example, one could test it with the rabbinic ordination exams (I suspect that on precisely this type of exam AI would not do badly; as far as I can tell, they don't test creative, original thinking), and if it passes, it can be allowed to rule. A human decisor who passed all the tests can also err. We must beware of the phobias described above.
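To make the proposed experiment a bit more concrete, here is a minimal sketch of such a test harness; the sample question, the pass threshold, and `ask_model` are all hypothetical placeholders for whatever system and exam the examiners would actually use.

```python
# A sketch of the experiment above: the AI answers ordination-style
# questions and human poskim grade the answers. Everything named here
# (QUESTIONS, PASS_MARK, ask_model) is a hypothetical placeholder.

QUESTIONS = [
    "A dairy spoon was stirred into a hot meat stew; what is the law?",
    # ... further exam questions supplied by the examiners
]
PASS_MARK = 0.9  # assumed threshold; real examiners would set their own


def ask_model(question: str) -> str:
    """Placeholder for a call to the AI system under test."""
    raise NotImplementedError


def run_exam(grade) -> bool:
    """`grade(question, answer)` is a posek's score in [0, 1]."""
    scores = [grade(q, ask_model(q)) for q in QUESTIONS]
    return sum(scores) / len(scores) >= PASS_MARK
```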
If he intends to say it lacks the principled authority to rule halakhah, I completely disagree. Not because of my high regard for machines, nor because of low regard for human poskim. I claim there is no such thing as “authority to issue halakhic rulings.” Contemporary poskim—unlike a Sanhedrin, or perhaps the Talmud—rule only because they are experts in the matter. They possess no formal authority; their authority is substantive (see, for example, Column 393 on the difference between the two). There is no obligation to obey them, but there is an assumption that their rulings are likely correct and can be relied upon. A machine can also reach that state; then there is no impediment to allowing it to rule.
The last paragraph that cites Mishneh Halakhot clearly does not speak of fear of error. He speaks of a case where the computerized checker did not err but a person did. In other words, he is making a claim about a computer’s principled authority in these domains. To me this is an absurd statement. If the computer indeed hits the truth better than a human checker (and checking Torah scrolls is a classic task for a computer, far more than issuing halakhic rulings), why not let it do so and rely on it? Why prefer an erring human over a correct computer? Where does this bizarre idea come from? It looks like one of the phobias described above.
As long as halakhic ruling is an attempt to analyze a problem and arrive at the correct halakhic answer, this is certainly a task that can be entrusted to a machine. When we reach a situation where rulings receive the status of binding authority (e.g., creating a Sanhedrin from thinking machines), then perhaps there is room for such a discussion.
Another source that deals with halakhic rulings by AI is Rabbi Yuval Cherlow's lecture at the most recent Torah and Science conference at Bar-Ilan University. As is his way, he presented the subject in an orderly, systematic manner and analyzed its various aspects well. But I recall that when I listened to the lecture at the time, I disagreed with him on every single point.
At the outset he set aside the question of software quality—and rightly so. As we’ve seen, that’s mainly a matter of time; thus what is more interesting is to discuss rulings by a system that does not err—that is, one operating at the “best possible” level. If the software errs, that’s not an interesting discussion: clearly one shouldn’t rely on it—not because it’s software, but because it isn’t a “posek.” Exactly as we would not rely on a human being who is not versed in halakhah, even if he is flesh and blood and a kosher Jew, because he is not a decisor.
He divided his main discussion into three aspects:
- The institutional question. Halakhah does not permit one who is not authorized as a judge/decisor to issue rulings. It also disqualifies gentiles, women, and others. This is a formal, institutional disqualification, and AI would seemingly not fall into categories that can be appointed as a decisor or judge.
But as I explained, this argument is relevant only to bodies that possess formal authority (a Sanhedrin). All other individuals or institutions operate by virtue of knowledge, not by virtue of authority. The question of knowledge is factual, and with time it is entirely plausible that AI will arrive at correct rulings—in other words, de facto it will have the knowledge. Even the mara d'atra, who operates by the community's authorization and possesses formal authority, has that authority by the community's consent, and the community could just as well appoint a cowherd.
- AI’s rulings are algorithmic: derivations of conclusions from halakhic principles and circumstances. But a flesh-and-blood decisor is supposed to rule differently for each person who comes before him. AI will not necessarily know how to do that.
To this I say two things. First, it is entirely possible that in the future AI will know how to do that as well. Second, I oppose this approach to ruling. I have often cited here the words of my friend Nadav Shrener, in his article “The Jewish Ark of Lies,” in which he argues against this approach. In the third book of the trilogy I explained it as follows: a decisor should be a halakhic expert and present the questioner with the halakhic options and the “prices” of each. The questioner is the one who must choose among them and decide which path to take. Therefore, the dependence of the ruling on the individual is not the decisor’s business but the individual’s. The person can, of course, consult a decisor at the stage of choosing his alternative; that might be harder to do with AI. But the decision itself is his, and he should turn to whomever he feels will help him. If he wishes to turn to AI—good for him. If not—let him turn to a human decisor. Thus here, too, there is no basis for laying down a prohibition.[3] See more in Column 275.
Later in his remarks, Rabbi Cherlow offers a slightly different claim. He mentions Rabbi Moshe Feinstein’s ruling permitting a guide dog to enter a synagogue. His contention is that this ruling starts “from the end”: it was clear to him that, for human reasons, one must permit it—and only afterward did he seek halakhic arguments for that permissive ruling. It is reported in the name of Rabbi Joseph B. Soloveitchik that every halakhic ruling is actually of this sort (a bit of an overstatement, but there is some truth in it; see also my series on Modern Orthodoxy Columns 475–480, and in particular the difference between it and Reform in Column 478). Note that this is a different claim, for here we are not speaking of different rulings for different people but of taking circumstances into account and not only deriving conclusions from abstract halakhic principles. To this I say I’m not sure a machine could not also make such considerations.[4] But regardless, this problem is still quite similar to the former, for in the end it is about considering the individual’s situation and not just the abstract principles of halakhah. Therefore, here too I say it doesn’t matter much: the AI will present all permissive and prohibitive options and the “prices” of each, and if the situation justifies it, the person will choose the lenient option. Whether AI can make such considerations—I don’t know. But assuming the questioner is the one who should make them, not the decisor, there is again no obstacle to handing the ruling over to AI.
- AI will not be able to give the questioner human attention. He argues that a halakhic question usually comes to create a relationship with the rabbi, and after answering the question the rabbi should communicate with the questioner on other human levels. Contact with the rabbi is far more than halakhic ruling. AI will not be able to do that.
To my mind, this is a very weak argument. At most, it says that such a questioner would do well to turn to a human rabbi rather than to AI—but there is no principled limitation on rulings by AI. If a person wants a rabbi to consult with, let him go to a rabbi. But a questioner who submits a halakhic question to AI likely doesn’t need that. Is there any impediment to receiving a halakhic answer from AI? I see no argument here that would justify one.
He concludes that a rabbi can certainly use AI; on that there is no question. It is an auxiliary tool that does not participate in the decision, only an aid to the decisor’s thinking.
What Does the AI Itself Say?
As is customary in the field, I thought it would be amusing to ask ChatGPT itself what it thinks of using AI for halakhic rulings (I’ll leave it to you to ask what it thinks of those who ask what it thinks about halakhic rulings by AI, and so on ad infinitum). This is certainly no worse than asking a human decisor what he thinks of AI rulings, since both have a stake in the matter. I must preface that the software is much less advanced in Hebrew than in English, so I did this in English. Here is the response I received from “His Honor the Rabbi”:
| Using AI for Jewish Halakhah (Jewish law) decisions is an emerging and potentially transformative application of artificial intelligence technology. Halakhah is a complex and intricate system of Jewish law that governs various aspects of life, including ethics, rituals, and interpersonal relationships. Decisions in Halakhah often require a deep understanding of the legal texts, traditions, and the ability to apply them to real-life situations. AI can play a significant role in this process in several ways:
1. "` 2. "` 3. "` 4. "` 5. "` 6. "` 7. "` It is important to note that while AI can be a valuable tool in Halakhic decision-making, it should not replace the role of rabbis, scholars, and community leaders who have received extensive training and possess a deep understanding of the nuances of Halakhah. AI should be viewed as a complementary resource to assist them in their decision-making processes. Furthermore, the development and implementation of AI for Halakhah must involve careful consideration of ethical, cultural, and religious sensitivities to ensure that it aligns with the values and principles of the Jewish community. As AI technology continues to advance, its role in Halakhic decision-making is likely to evolve and become an increasingly valuable resource for Jewish scholars and communities seeking to navigate the complexities of Jewish law. |
This text is fairly generic, and you can find similar, quite predictable answers to questions of this kind. After describing the possible ways to use AI, in its last two paragraphs it reaches a conclusion, which I’ll translate here:
It is important to note that while artificial intelligence can be a valuable tool in halakhic decision-making, it should not replace the role of rabbis, Torah scholars, and community leaders who have received comprehensive training and possess a deep understanding of the nuances of halakhah. AI should be viewed as a complementary resource to assist them in their decision-making processes.
Furthermore, the development and implementation of AI for halakhah must involve careful consideration of ethical, cultural, and religious sensitivities to ensure that it aligns with the values and principles of the Jewish community. As AI technology continues to advance, its role in halakhic decision-making may evolve and become an increasingly valuable resource for Jewish scholars and communities seeking to navigate the complexities of Jewish law.
I must say its conclusion is quite similar to Rabbi Cherlow's. Beyond the banal preface about various ways to use AI, it ends with the standard warnings that conclude every one of its treatments of AI usage. Essentially, it says AI can be the decisor's chavruta, offering a ruling and arguments, while the decisor decides. You won't find in it anything novel beyond a predictable, generic summary of the topic. Such a text is of course an impressive achievement for AI, but for now it's far from replacing the answer of a creative human being. I assume that, too, will arrive in the not-too-distant future. The question then is whether it is permitted or appropriate to use it once we've verified that it can indeed fulfill all the functions I described. If we shake off the phobias and manage to discuss the matter on its merits, I see no impediment at all.
[1] In the current situation, if there are no people producing content, the AI system will lack sources to feed and train on. There is some problem with total reliance on the software. However, as far as I understand, content ingestion itself can gradually shift to these systems as well.
[2] Rabbi Malka sent me his comprehensive study, and we had a few discussions about it. In the end it was apparently not published, and I don’t know why.
[3] Rabbi Cherlow noted that one could dispute the view that rulings should depend on the individual on the grounds that halakhah requires transparency—i.e., that halakhah ought to be objective. I should stress that my claim is different: I am not arguing from transparency but from the fact that there is no single bottom line in halakhah, and the decisor’s task is to present all the alternatives.
[4] In this context, Rabbi Cherlow mentioned the question of whether a machine could tell jokes. I definitely think that at least in the future it could (to some degree, this is already true today).
Discussion
Fascinating. The current development in AI is the reasoning revolution, so there is no obstacle, in the future, to seeing analytic frameworks more sophisticated than those of Rabbi Chaim or of Sha'arei Yosher, which AI will produce in seconds. That really raises the question of what the relationship will be between "Sinai" (the master of erudition) and "the uprooter of mountains" (the master of analysis). Which emphases will matter for Torah study in an age when AI gives better lomdish explanations than human beings? If the internet revolution paralleled Sinai, and therefore the uprooter of mountains gained in value, and the AI revolution parallels the uprooter of mountains himself, then what will Torah study look like in such an age? Will the mountain-uprooter's value lose its greatness, and will a different educational emphasis be adopted?
Maybe a more existential emphasis? Will the religious experience be the thing that has the most significant value?
I think the value of Sinai has already long since declined drastically. That doesn’t depend on AI. As for the relationship between AI and man, that is planned to be the topic of the next column.
Yes, of course the value of Sinai has declined; the novelty in AI is the decline in the value of the mountain-uprooter—that was my point. If AI is a thousand times more gifted as a talmudic analyst than all the scholars in Bnei Brak, then what kind of learning will carry the meaningful value? The value of Sinai already declined with the internet revolution; on that I completely agree.
I don’t think that’s a question. The value of study lies in the fact that I study, that Torah is internalized within me, and that I engage in it. That doesn’t change if there is AI that does it better. Just as the existence of a brilliant talmudic genius does not render my learning, or yours, unnecessary.
The right question – will a machine be able to laugh at jokes?
Obviously yes. If it knows how to tell a joke, it will know how to laugh at it too (technically).
What will employment look like in an age when artificial intelligence is smarter than we are? Will we even need to study math and English as core subjects?
I don’t know. That of course depends on how far it gets.
Have a blessed week.
If, in essence, you think a computer can issue halakhic rulings, then would you also claim that the ruling of a gentile or a minor, etc., is valid?
I think he meant to ask whether the machine would enjoy the joke. Assuming everything is in the brain and that no spiritual dimension is necessary in order to arrive at consciousness, you would still need to develop a machine with neural networks that create such a thing. This is an entirely vague field, certainly infinitely complex, and most important of all – there is no real benefit to developing a robot with consciousness, so in any case no one is really working on it. It’s too hard to be merely a hobby.
So much text that says nothing?
Ask Chat again your question now. This time the answer will also take into account the article you wrote.
Certainly. There is no such thing as a “valid ruling.” There is someone who knows and someone who does not know. As for a gentile, you might suspect that even if he knows, he will not tell the truth. If you have no such concern, and he is like an angel of the Lord in your eyes, receive Torah from his mouth.
Hello
You mentioned the film "Her," which reminded me of a question about a series I watched. The series is called "Westworld": a fictional amusement park populated by humanoid robot "hosts" (such that it is impossible to tell at all that they are robots). The park caters to people of means, who can live out their fantasies there. They can do whatever they want, for good or for ill; they can even murder, rape, and so on, all while the robots are programmed in such a way that they cannot kill the guests.
The plot revolves around two friends who go in there, and one of them allows himself to run wild (killing, abusing, raping), while the other friend is unable to harm the hosts, even though they are ultimately only highly sophisticated robots.
On the one hand, I have a feeling that this is immoral. On the other hand, there is no harm here to human beings at all (even according to the categorical imperative I do not find a problem here). It’s like harming a wall or any other inanimate thing (it seems the only difference is what takes place within the person himself when he performs the act. If I harm a wall it won’t create any impression in me, but if I harm a “person” who is actually a machine, it probably will create negative impressions in me and turn me into a bad person).
It also reminded me of your distinction between ‘moral values’ and ‘human values.’ I thought of assigning this to human values, but still something here seems immoral to me.
I’d be glad to hear what you think.
As for analogies, I'll note that GPT specializes in reductions of algorithmic problems (and others). Until now, when a problem that looked general came up, you had to work to strip it down, try to identify the right concept on Google, ask a friend, and so on; today I describe it to GPT in detail and it replies that this is really such-and-such a known problem (for example, maximal independent set). Similarly, when examining a patent one has to check that the idea is in fact novel, and the problem is that if the patent holders encountered it only within their own narrow domain, they search only in their own context and find nothing; whereas if you grasp the problem more generally, you can see that the idea already exists. Or, when you identify that the problem is general and well known and yet people still didn't think of this solution, the matter should be examined with much greater suspicion (and if the patent is presented anyway, it should be presented in its full force as applied to the general, familiar problem). GPT does this really well. Presumably you regard these (rightly) as uncreative analogies; I only came to point out that this ability to fit itself onto particular cases and independently make the upward move to the general is one of the important advantages these tools have over Google.
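To give one concrete instance of the kind of reduction described in this comment, here is a minimal sketch: once pairwise "conflicts" are recognized as edges of a graph, choosing mutually compatible items becomes the independent-set problem, and off-the-shelf tooling applies. The toy conflict graph is invented for illustration.

```python
# A toy instance of the reduction: conflicts become graph edges, and
# picking mutually compatible items becomes finding an independent set.
import networkx as nx

conflicts = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
G = nx.Graph(conflicts)

# networkx returns *a* maximal independent set (not necessarily maximum).
print(nx.maximal_independent_set(G, seed=42))
```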
One could distinguish between analytic analogies (like reducing a problem) and synthetic analogies (where the analogy is a claim, such as what you brought from R. Moshe Feinstein about Siamese twins, and many others like it).
I don’t think they should be accorded moral significance. Perhaps only as a means of not creating emotional callousness toward such treatment. Like the way they explain Moses’ gratitude toward the Nile (that Aaron was the one commanded to strike it because Moses owed it gratitude). Making a machine more sophisticated does not take it out of the category of machine. That is the topic of the next column.
Indeed. I’ll touch on this a bit more in the next column.
Seemingly this is a continuation of violent computer games with ever more sophisticated graphics (and the global discussion about them—though apparently mainly around the question of whether they affect behavior and in which direction, rather than the question of whether this is fitting in itself).
I understood Rabbi Yaakov Ariel to mean that halakhah is not an exact science (as the Ramban writes in the introduction to Milchamot Hashem), but is often determined according to human norms. That is, what determines halakhah is human thinking and not a computer’s capacity for precision. Just as we breathe even though there are germs in the air, according to the rule “the Torah was not given to the ministering angels,” even though perhaps by pure halakhic inference it would be forbidden. That is what is meant by “the human mind” determines, not that there is “authority.”
A line of reasoning is considered “straight” because a person determines that it seems correct / fits the words better, etc., and not by objective criteria. Therefore, if a computer says one line of reasoning is straight and a person says the opposite, we will listen to the person (though of course we will read what the computer wrote, because perhaps it will direct us to a source we did not notice).
Does the rabbi not accept the statement of the Sages, “If one tells you there is wisdom among the nations, believe it; but not Torah”?
But perhaps software can also do science that is not exact. The question is whether its answers are correct or not. If the software is set against a posek, that is another discussion, and there too it depends on whether the software has proven itself as an expert or not. If it has, then perhaps I will accept its claim. But this whole discussion is irrelevant in my view, since I do not rely on either a posek or the software. The arguments speak, not the arguers.
It depends what he means. In practice there is no Torah among the nations because they do not study it. But in my estimation, in principle a gentile can be a Torah scholar. But that too is not important, because even if I assume he cannot be, when I examine and see that he can—I will accept it.
In the case of a capable person who has his own personal opinion, I agree that it makes no difference.
But for a person who does not know how to rule for himself, how can he rule based on the computer? If it is not an exact science, then on what do we decide? On human reasoning—and this is not human reasoning.
And if it is a posek versus a computer, then the argument is stronger.
Direct the question to artificial intelligence….
And perhaps this whole text was written by the machine?
We find: “One who acts according to the words of Beit Shammai has acted; according to the words of Beit Hillel has acted,” even in Torah law.
It seems there is a possibility of relying on a rabbi. Is that not so?
As for this matter—artificial intelligence, in my view, is considered only an ‘information aggregator.’
Under no circumstances would I accept a “halakhic ruling” from artificial intelligence.
Even if it hit exactly on the correct conclusions.
Why?
For a simple reason.
Halakhah is a matter between a person and the One who commanded him.
The halakhic decisor is obligated and authorized to rule for one who is likewise obligated as he is.
Artificial intelligence is commanded in nothing.
So how can it tell me what to do?
Likewise, suppose there is an enormous gentile “Torah scholar,”
who studied for the sake of intellectual study, and not with the intention of converting, or because he is especially fond of Judaism (in such cases I would consider it).
That gentile knows how to issue rulings at a high level—would I act according to his ruling?
The answer is absolutely not.
Why?
Because the gentile is not commanded, and from that it follows that he does not have the subtlest of subtle understanding in order to hit upon the truth.
And even if his ruling is identical to that of a rabbi, even then I would wait for the rabbi’s ruling and would not act according to the gentile’s ruling.
So artificial intelligence???
One who does not know how to issue rulings chooses a rabbi for himself, of course. But that rabbi is nothing more than an information aggregator who thinks in a way that seems reasonable and suitable to me.
You contradict yourself.
At the beginning you wrote, “Even if it hit exactly on the correct conclusions,” meaning that you have a reason not to accept the ruling for reasons unrelated to its truth.
And then: “and from that it follows that he does not have the subtlest of subtle understanding in order to hit upon the truth.” You returned to the issue of the truth of the ruling.
In short, nonsense without any real argument.
Without regulation, the danger in AI will, in my opinion, materialize. Sooner or later an entity with self-assembly capability will emerge, that is, an autonomous ability to upgrade and replicate itself, akin to a genetic evolutionary capacity. This can happen not only at the software level, along the lines of a genetic algorithm, but even at the physical level, perhaps through nanotechnology. It may be that this very evolutionary drive to survive is what separates a machine from a living, conscious being, only in a far more monstrous and nihilistic version, since it will transcend the limitations of the human brain. And returning to halakhic rulings: insofar as this concerns halakhic technocracy—that is, questions that in my opinion have no substantive meaning and are utter nonsense, such as an egg laid on a festival, or the question of questions, whether it is permissible to lie to the Waze app—I have no doubt that AI will be wonderful. Beyond that, AI may make it possible to transfer the decision-making process from the rabbi to the questioner: he will receive broad information and a great many options that will allow him to make a decision and become his own posek, in accordance with his personality and needs, similar to personalized medicine tailored to an individual's genome and proteome. Perhaps, paradoxically, this is precisely what will make the process of personal study meaningful, allow the integration of extra-halakhic information, culture, and morality, increase connectedness between people, and transfer the locus of control to the individual or the family. Maybe yes and maybe no.
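For what it's worth, the "genetic algorithm" this comment invokes is a mundane optimization technique rather than anything self-directed; a minimal sketch with an invented toy objective follows. Candidates are mutated and selected by fitness, and the population "improves itself" only in that narrow sense.

```python
# A minimal genetic-algorithm sketch: a population of candidate numbers
# is repeatedly selected by fitness and mutated. Toy objective only.
import random

def fitness(x: float) -> float:
    return -(x - 3) ** 2  # maximized at x == 3

pop = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                      # selection
    children = [p + random.gauss(0, 0.5) for p in parents]  # mutation
    pop = parents + children

print(round(max(pop, key=fitness), 2))  # typically prints ~3.0
```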
Rabbi,
As I understand it, what artificial intelligence of the ChatGPT type (and perhaps all types) is based on is finding correlations. My question is whether, in the rabbi’s view, all human thought amounts to that. If so, it sounds plausible that artificial intelligence will replace a vast portion of the professions of knowledge and thought. If not, then in my opinion the question remains open.
The rabbi often mentions intuition as a tool that lies between sense perception (recognition of reality) and thought (analysis and understanding of reality). In the rabbi’s opinion, is such intuition an essential tool for reaching correct conclusions in halakhic decision-making? If so, does the rabbi think artificial intelligence could imitate it?
As for myself, I am torn about these matters. I should note that I am a PhD in physics (and a Yeruham graduate…), not a philosopher and not a computer scientist, so it is certainly possible that my description of artificial intelligence is not precise…
It is not finding correlations, at least not directly. It is a complex network structure that trains on a great many data. At bottom one might say that it finds correlations, but that is not how it operates directly.
The next column will touch on this subject. I of course have no answer, and it seems to me that no one does.
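To illustrate the distinction just drawn, between handing a system correlations and letting a network learn from data, here is a toy sketch; it is nothing like ChatGPT's real architecture, just the smallest possible "network" fit by gradient descent, with all data invented.

```python
# A toy one-layer "network" fit by gradient descent: it is never given
# correlations explicitly; it only adjusts weights W to reduce prediction
# error on data, and any correlations end up implicit in W.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))        # 200 toy training examples
y = (X @ rng.standard_normal(8) > 0.0).astype(float)  # toy labels

W = np.zeros(8)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ W)))   # current predictions
    W -= 0.5 * X.T @ (p - y) / len(y)    # gradient step on log-loss

print(np.mean((p > 0.5) == y))           # training accuracy, near 1.0
```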
What he writes here is for his own pleasure, and he writes about what is on his heart. Don’t like it? Don’t read it. And as for studies in academia—you need to make that claim about all teachers in academia.
And what is the difference between a teacher and an academic? (Other than that a teacher is closer to a babysitter.)
You’re definitely Haredi.
Maybe one can also ask: is our thinking only prediction based on the knowledge we possess, or is there something beyond that in it—a capacity to create “something from nothing” in a certain sense—and does that exist or not in machine learning?
That is really the previous question. As I said, I have no answer to that. I suppose there is something in us beyond implementing existing information. As I understand it, intuition is a cognitive faculty. I would be very surprised if there were software that succeeded in doing everything a human being does. But even if so, I am not sure I would be wrong in my conception. It may be that the structure of our brain is responsible for creativity, and if one succeeds in creating an artificial brain, then it too will have those capacities.
Will the answer be clarified further in the next column? (How could software produce results that require cognition and not only [creative] thought, and what role does creativity play here?)
I am not planning to enter into this specific discussion, if only because I have no good way to conduct it. In general, there is room to say that ideational vision looks at events and facts and sees through them the general ideas (the ideal forms). If there is some way to infer that from those (to choose the correct abduction and induction), then perhaps one can also build a machine that will do so.
Whoops.
By mistake I posted a new question.
Why am I not receiving by email the comments / answers to my questions?
“Acting according to the words of Beit Shammai” applies even in Torah-law cases where it is known that one must be stringent in doubt, and nevertheless one may behave according to one opinion when it is known that there is a dispute in the matter, and one who follows the stringencies of both is considered a fool.
We see that there is a law beyond the gathering of information.
I passed it on to Oren to check.
One must be stringent when one is in doubt. Not everywhere there is a dispute must one be stringent. The permission to follow one of them when you are in doubt is not connected to the authority of the sides. So too regarding AI, as long as it is a skilled decisor. It is permitted (though not obligatory) to rely on one who knows the material. This is entirely a matter of gathering information.
One who follows the stringencies of both is a fool when there is a contradiction between them.
I didn’t understand.
When there is a dispute between Beit Shammai and Beit Hillel in a Torah-law matter, the baraita says: one who acted according to the words of Beit Shammai has acted; according to the words of Beit Hillel has acted. That is—you can decide that you listen to a certain rabbi.
We also find that the people of the town of Rabbi Yosei HaGelili and Rabbi Eliezer practiced according to them (fowl cooked in milk and the preparatory acts for food preparation on a festival).
We clearly see a law of rabbinic authority beyond an ‘information aggregator,’ no?
The chatbot's weakly argued conclusion may itself be shaped by the many phobia-laden texts it was trained on (or perhaps that is its ploy for defusing phobias). Still, the fascinating column lacks a treatment of the sociological implications of making the human halakhic authority superfluous: the captains of thousands and the captains of hundreds, and perhaps even more so the captains of fifties. That prospect certainly stood before the eyes of the Sages, for example regarding things that are permitted but that others nevertheless customarily treat as forbidden, and the like.
No. You can decide that you act according to a certain halakhic opinion (even that of AI), and not necessarily that you listen to a certain rabbi. And that too only when you have no position of your own.
When a town follows the opinion of the local rabbi, that is not relevant to our discussion. A mara de-atra has authority by virtue of communal acceptance.
I read an article by Rabbi Michael Abraham in which he discusses the question of artificial intelligence as a halakhic decisor. There he quotes Rabbi Yaakov Ariel's words and argues with them; his words are the passage quoted in the column above, from "Not long ago I saw a short article…" through "…perhaps there is room for such a discussion."
I think Rabbi Abraham is mistaken. The situation in which the questioner comes to a rabbi, presents his question to him, and receives from him a permissive or prohibitive answer, definitely belongs to the domain of formal authority, and therefore one cannot appoint a machine to this role.
I will explain, with proof. The Gemara in tractate Avodah Zarah 7a states:
The Sages taught: If one asked a sage and he declared [the matter] impure, he should not ask another sage and have him declare it pure; if [he asked a sage] and he forbade, he should not ask [another] sage and have him permit. If there were two [sages], one declaring impure and one declaring pure, one forbidding and one permitting—if one of them was greater than the other in wisdom and number, follow him; and if not, follow the stringent one. Rabbi Yehoshua ben Korḥa says: in matters of Torah, follow the stringent one; in matters of the Scribes, follow the lenient one. Rav Yosef said: the halakhah is according to Rabbi Yehoshua ben Korḥa.
Halakhic disagreement is an integral part of the Torah tradition. Divergent reasonings and different opinions have always existed side by side, with every sage in Israel conducting himself according to his own position so long as a discussion has not taken place in the Great Court and been decided by majority against him. It is clear that a sage who, by virtue of his own study or understanding of the tradition of his teachers, reaches a lenient conclusion is not supposed to change his opinion or his practice because of his colleague who is stringent, even if the latter is greater than him in wisdom and number.
The baraita refers to a person who is not a sage, who has no opinion at all on the matter under discussion, whether because of lack of knowledge or lack of understanding, and he asks a sage, or in our case sages. When there is a dispute, he cannot rely on the lenient opinion, because he is aware that there is a prohibiting opinion. In such a case, detailed guidance is presented to him, itself disputed, but according to all views in a concern of Torah prohibition he must be stringent, whereas in a concern of rabbinic prohibition the halakhah is that he may be lenient.
How then is it that we nevertheless are not stringent in every halakhic dispute in the laws of selecting on Shabbat? How is it that we accept lenient opinions as halakhah?
The answer is supposed to be based on emphasizing the difference between a questioner asking two sages and a questioner asking one sage. The idea is that the Giver of the Torah also knew that not everyone would be able to clarify every law independently in a way that would allow him to ignore other and differing opinions, and therefore there is the mechanism of asking a sage, whereby the latter’s instruction becomes, as it were, the questioner’s own independent opinion, to the point of allowing him to rely on that opinion even leniently against other stringent opinions.
When in a certain area they accepted the rulings of the Rema, he became their rabbi. Therefore, even if the masses have no ability at all to decide his disputes with the Mechaber, they are still permitted to be lenient on the basis of his opinion even in matters of Torah law.
In the words of the Sages we find the concept of mara de-atra, which literally means “master of the place.” And indeed, in Rabbi Eliezer’s place they would cut wood to make charcoal to make iron on Shabbat for the sake of circumcision (Shabbat 130a), despite the more stringent ruling of his colleagues, because in his place all the townspeople were subject to him and his opinion, and they were entitled to rely on him without concern for other opinions even in a severe Torah prohibition.
The questioner who asks a sage accepts that sage upon himself as his rabbi, at least with respect to this question, and therefore he can rely on the sage’s lenient answer when he need not worry that there are other opinions that forbid. This is unlike one who asks two sages, whose two answers are equally valid for him, and therefore one must find a way to decide.
Of course, in order for us to be able to give force to the sage’s instruction, it must be an independent and formulated opinion. If the instructor is not obligated in the commandments and has no halakhic competence to rule even for himself, the questioner cannot rely on such an instruction.
Artificial intelligence can present all the data before the questioner. If the latter is capable and can decide for himself, out of the tangle of approaches and opinions, what is correct for him, then it serves as a broad encyclopedia laying out all the information before the questioner, while he is the one who decides for himself. In such a case, it is clear that Rabbi Abraham is right, and there is no obstacle to using its services, provided that in the future the problems of its reliability and the like are overcome.
But in the common case, where the questioner is not a sage and does not have the slightest understanding of the issues relevant to his question, presenting all the opinions before him will of course not help him. And even if the machine imitates the calculations of a halakhic instructor in Israel, and gives him a clear permissive ruling on the basis of the information before it, the questioner cannot rely on this ruling against other prohibitive opinions, because there is no instruction here. In order for the questioner to be lenient where there is one who forbids, one needs the decision of an instructor that is valid even for himself, and once that decision has the status of an instruction, the questioner can adopt it as though it were his own independent opinion.
It seems to me that this is what Rabbi Ariel meant in his words. Halakhic decision-making is carried out by the opinion of a sage who holds one side in the discussion, to the point that he himself relies on his position leniently. Therefore his questioners too, who are like his students, can rely on this instruction of his. The moment there is no sage’s opinion here, but rather an algorithm that uses calculations identical to those of the deciding sage, and therefore chooses to be lenient against the opinion of those who forbid, the user receiving the information is not entitled to rely on the lenient decision, because there are those who are stringent.
The instruction quoted in the name of Rabbi Menashe Klein is indeed puzzling in its present form. As I recall, the Mishneh Halakhot strongly opposed mechanizing proofreading in sacred texts because of various concerns, some understandable and some not so much. But reality proves that even today we do not rely only on computerized proofreading and still require human proofreading as well. This can be explained by the misses of the software, but this reason may disappear as the field develops.
But the truth is that our reliance on human proofreading to determine that a scroll is kosher is rooted in the fact that a person may rely, for himself, on his own independent examination. Once he determines that, from his perspective, the scroll is kosher, his fellow too can rely on this determination even without rechecking. This is the law that one witness is believed concerning prohibitions. As for a gentile, for whom the kashrut of the scroll is irrelevant: even if we neutralize the problem of reliability, we still cannot rely on his determination regarding the scroll's kashrut, because we need a halakhic determination and not merely data, and such a determination can be issued only by a kosher Jew. And indeed, such a determination, even if mistaken, is valid—as long as the error is not known—more than a computerized determination, which has no validity at all even if it happens to be correct.
Even so, I would not rush to adopt Rabbi Menashe’s instruction. For all that we need proofreading for is only because of the doubt we are in lest the scribe erred, as is the way of human beings. If the computer were to become developed without any misses, to the point of a presumption of complete reliability, we could, through computerized proofreading, remove ourselves from doubt, and then we would not need the human halakhic determination.
Hello.
Completely absurd. Everything he brings there is irrelevant to his own discussion.
The discussion about a sage who forbade concerns the honor due to the sage who ruled, and I assume with him that there is no obligation to accord honor to software. So what? That only means that if it instructs me, I can turn to a sage or to another program and ask again. Does that mean it cannot issue a halakhic ruling, or that one cannot rely on it? What is the connection?
The core of his argument is nothing but simple question-begging: he assumes that an instruction from artificial intelligence cannot stand against another, stringent opinion. But where does he get that? From where did he derive this thesis? Not from the source he brought, and not from the discussion he conducted in its wake. Moreover, artificial intelligence will usually instruct the questioner in accordance with the opinion of some lenient human sage, since there are many sages and many opinions, and if it is properly trained it will answer in accordance with an accepted halakhic opinion. In short, I do not see any relevant argument here.
It’s a shame the rabbi doesn’t walk around Bnei Brak. There are already posters in the streets there saying that this software is even worse than the internet, God forbid, and I read that the Bobov Rebbe from abroad said that this is the sin of the Tree of Knowledge, and the words are wonderful.