On Technological Phobias and AI Rulings (Column 590)
With God’s help
Disclaimer: This post was translated from Hebrew using AI (ChatGPT 5 Thinking), so there may be inaccuracies or nuances lost. If something seems unclear, please refer to the Hebrew original or contact us for clarification.
In recent years, the subject of artificial intelligence and thinking machines has been gaining tremendous momentum. In the past few months ChatGPT (along with counterparts now being launched by various companies worldwide) was released to the public, and it is practically taking the world by storm. It is already replacing people in their jobs, writing programs or modules for various needs, and in many cases at least assisting and advising. True, it can also mislead, and there have already been embarrassing blunders on this front. In any case, those in the know claim the day is not far off when lower-tier jobs will become entirely redundant, and it seems that not long after that the turn of higher-level workers will also come. And we haven't even spoken about school and university papers being written by these programs.
To understand what this means for the lay user (not a software professional who wants AI to write code on their behalf), think about Google Search. Until now, such a search yielded information in response to our questions, but we still had to choose from countless results the ones that were relevant (Google’s ordering of them is not very reliable or accurate), and extract from them the information we needed—if it was there at all. By contrast, in the age of AI, the software does all of that for us. Suppose you want an overview of the trolley problem in ethics or psychology. In the past you’d have to search for sources and materials on Google or in Wikipedia (if there were a relevant entry), then choose what’s relevant, organize the information, and put it in order. At the end of the process you might get an answer to your question. In contrast, now you simply ask ChatGPT for its take on the trolley problem and immediately receive a structured survey of the different positions—and if you’re very polite you’ll also get its own view on the issue, argued quite impressively.
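To make the contrast concrete, here is a minimal sketch of what that new workflow looks like in code. It is only an illustration: it assumes the openai Python package and an API key in the environment, and the model name is an example, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Instead of collecting and collating search results by hand, ask for the survey directly.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; any available chat model would do
    messages=[{
        "role": "user",
        "content": "Give me a structured survey of the main positions on the trolley "
                   "problem in ethics, with the key arguments for each position.",
    }],
)
print(response.choices[0].message.content)
```

The point is not the particular library but the shape of the interaction: one question in, one organized survey out.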
Moreover, if I understand correctly, our friend Wikipedia is on its way out through this very path. Instead of searching through a list of entries that were edited and written with blood, sweat, and tears by various human volunteers, you will soon be able to ask the software any question you like (not just pick one from a fixed set of entries) and receive within seconds an up-to-date encyclopedic entry about it.[1] It can, in fact, write an encyclopedia with infinite entries. By the way, there’s little point in saving this output, because when you want the same information again in the future, you can simply send the same question and immediately get an entry updated to that moment. There is no need to accumulate the information itself; the system’s “intelligence” retains it potentially. The software is the encyclopedia.
No wonder similar questions are arising in the religious sphere. One can write articles and responsa by means of AI. In fact, it could now also replace the decisor (posek). If you ask it a halakhic question, it will answer and give you a ruling or a survey of opinions about it. Indeed, in the religious world too, discussions are beginning to surface on these issues that, as noted, are already upon us (incidentally, you can also hand these very discussions to ChatGPT—see below). Here I wish to discuss AI and its implications, also regarding halakhic rulings. But before I enter those questions, I will devote a few paragraphs to the issue of phobias.
Two Types of Technology Phobias
Many people suffer from phobias regarding technology. They can be divided into two types: the first kind (people like me) recoil from it due to a lack of skill in using it. The second kind recoil from the very act of handing human tasks over to technology. Unlike the former, this reluctance appears ethical rather than psychological. But I argue that at its root it, too, is psychological.
Consider, for example, attitudes toward autonomous cars and autonomous weapons. In the IDF there were once terms like MALAT (unmanned aircraft) and Ktav”m (unmanned aerial vehicle), and more. But these expressions have left the lexicon, and today we speak of Ktam”m, that is, a “remotely piloted aircraft.” Why? Because when people hear about a vehicle operating autonomously with no pilot, they break out in a sweat. Decisions of grave consequence being made by a machine, without human judgment, really frighten them. So they reassure us by saying it’s a manned vehicle—only the pilot is far away. Don’t worry: a human pilot is still making the decisions.
This is a central issue that troubles those engaged in regulating autonomous vehicles. One concern is automatic decision-making by the car itself. It may find itself in situations like the trolley problem and make decisions affecting human life in an entirely mechanical way (and without the sense of responsibility we feel regarding such decisions). Who knows what it might do there?! See, for example, a survey here and here (and in halakhah here, as well as in the article by Rabbi Yisrael Meir Malka mentioned there in the preliminary note)[2]. On public anxiety about these dilemmas, see for example here.
The same applies to autonomous weapons systems (see an overview here). On this site, about six years ago I conducted a fairly long discussion with someone who works on this topic and expressed very strong concerns about autonomous ethical decision-making. I argued there that, in my view, his approach to the issue is misguided. I see no essential ethical difference between autonomous weapons systems and the automation of weaponry from before the era of AI.
I categorize these phenomena as technology phobias of the second type. In Column 186 I address this and explain why, in my opinion, such concerns are unfounded. Think of an autonomous car that gets into some complex situation requiring decisions about human life. It has a clear advantage over humans in the speed of thought and reaction, and its outputs (= decisions) are the result of the information with which we’ve fed it. Note that humans can also make mistakes, and in fact will make many more mistakes and react far more slowly than an autonomous car. Every autonomous-car accident triggers broad debate and controversy, delaying its approval and amplifying fears of it, but meanwhile people forget that human beings make a thousand times more errors and accidents. To my knowledge, autonomous cars are already orders of magnitude safer (or, if you like, less dangerous) than human drivers.
But the core fear is not of the car’s errors but of its making ethically wrong decisions. The assumption is that a human driver, in principle, makes the ethically correct decisions, even if they might respond too slowly or make an error in judgment. But with an autonomous car, the “correct” decision itself may be unethical—not due to a glitch (i.e., error), but because it doesn’t “think” correctly in moral terms.
In cases where there is no agreed-upon correct answer (as with the trolley problem, which people love to use to probe the issue), this fear makes no sense at all. In such cases there is no single correct decision. Among human drivers, some will choose to kill one person, and others will choose to let several people die rather than kill one person themselves. Since ethical disagreement on this matter exists among humans, in practice different human drivers will make different decisions. So why should the decision of an autonomous car be treated differently? In what sense is it more dangerous? At most, its decisions will be the product of the programmer's ethical reasoning. If someone dislikes that, let the legislature or an ethics committee set the rules and then establish a binding standard for autonomous-car software (i.e., train them accordingly). This will surely yield much better outcomes than a human driver. In these cases, the fear seems to me patently absurd.
But even in a situation where there is a correct ethical answer, and the concern is that the car's programming will not produce the right decision in a specific case, the question becomes: is this risk worth all the deaths and injuries that many more years of human driving will cause? Is that the smaller toll? Even if an autonomous car might make the wrong decision in certain scenarios, the number of fatalities is incomparable to the number of people who lose their lives to human drivers' mistakes. I see no chance that autonomous cars will be more dangerous overall. Ultimately, I see no difference between an error by a human driver and an error made by the car's programmer. And once you discover such an error in an autonomous car, you can fix the software almost immediately (give it another training cycle), something that is hardly possible with human drivers. They go on making the same mistakes again and again, costing us hundreds of dead and thousands of injured every year in Israel alone.
This strange, puzzling picture points to people’s phobia of “thinking” machines. This phobia (of the second kind) appears ethical rather than psychological, but from the brief account I’ve offered here, that’s not really correct. To my mind, it lacks ethical and consequentialist justification, and thus belongs in the psychological realm. People are simply afraid to hand control over their fate and lives to machines—even if doing so would dramatically improve their survival and sharply reduce their risk of harm. This phobia costs tens or hundreds of thousands of lives worldwide, yet it is very hard for us to overcome it.
Artificial Intelligence in Religious Talks (Preaching)
AI is also entering the religious world. It starts with sermons in church, as can be seen here. I assume talks in synagogues and the various vorts you hear day in and day out at religious events are already well within AI’s grasp. It has no problem generating countless such texts at the low, banal level typical of the genre. But that is still far from an in-depth shiur and from a text with a high analytic and conceptual level, and I think you generally won’t find creative, unexpected ideas and angles there. Yet that, too, is probably only a matter of time. We are certainly not far from AI that can also produce unexpected texts at a good analytic level (here and there we already see such things today).
The article ends with the following paragraph:
However, the experimental service also highlighted the limits of AI use in church, or in religion. There was no real interaction between the faithful and the chatbot, which was unable to respond to laughter or other reactions of congregants the way a human priest could. “The priest is in the community, he lives with them, he performs funerals, he has known them since birth,” says Simmerlein. “AI can’t do that. It doesn’t know the community.”
They miss human interaction with a flesh-and-blood priest, and the same holds for rabbis. But I think that, too, is only a matter of time. There is no reason AI cannot have emotional intelligence and respond in a human way to various emotional and human gestures and situations, much as a person would. The film Her offered a decent example of such a machine, one that today is well within reach.
Between Halakhah and Aggadah (= Preaching)
What characterizes divrei Torah, talks, and various vorts is that there is no “right” and “wrong” there. These are ideas meant to stir feeling, present various connections, and at best be creative. It is very hard to speak of a vort as right or wrong (see in Column 52 the difference between derush and pilpul). Tasks like these a “thinking machine” can already do quite easily (with a limited degree of creativity, akin to most human speakers in these genres). But in halakhic ruling there are arguments assessed in terms of right and wrong. I don’t mean the practical conclusion (since even in practical halakhah there are differing views and modes of reasoning; in most cases there is no single truth), but the arguments marshaled to reach it. Such arguments should be reasonable and hold water. The task of producing analogies that withstand the test of reasonable logic and common sense (in contrast to vorts) is not easy, and I estimate AI will find this much harder. It will likely be able to find plausible precedents and apply principles fed into it (perform deductions). But supplying creative analogies and analysis beyond the trivial will be rather hard for it.
I haven’t run this experiment, but I think it could be a decent indicator of the differences I keep pointing to between aggadah and halakhah. Already in the second book of the trilogy I noted an experiment with a random text generator (far more primitive than ChatGPT) and showed how it succeeds in producing a generic, convincing text in the style of Rav Kook—and likewise for any other thinker. But anyone can understand that such a generator would have had no chance of composing a “campaign” of R. Akiva Eiger or of Ketzot HaChoshen. We have progressed since then, and today’s AI is far more advanced than that generator, so it’s clear we are closer to a halakhic-answer generator, but we’re not quite there yet. Again, it seems to me only a matter of time.
Not long ago I saw a short article quoting Rabbi Yaakov Ariel on halakhic rulings by AI:
“First of all, as a practical halakhah: artificial intelligence cannot issue halakhic rulings; halakhah is ruled by common sense. A computer’s ‘reason’ is not a human being’s reason; even if it is very developed and sophisticated, it is not human thought. Halakhah cannot be ruled by computer thought but only by human thought,” said Rabbi Ariel.
He added: “If AI invents a halakhic ruling for us, that’s ‘rebbe-leh’ that isn’t worth anything. I’m not saying you can’t read what it says, but to rule based on it? No. If the computer gives you a sevara (reasoning), and you propose the opposite sevara, yours may be more correct. Don’t rely on it. The discussion is new, and I haven’t yet seen many results of this matter.”
“The computer is not authorized to rule halakhah—whether something is kosher or not. Rabbi Menashe Klein, author of Mishneh Halakhot, says it is preferable to have a Torah scroll checked by a person who erred than by a computer that did not err,” Rabbi Ariel concluded.
It's unclear whether he means to claim that there is a concern the ruling will be mistaken, or to assert that a computer lacks, in principle, the authority to rule halakhah. The concern that a ruling might be incorrect (i.e., not in line with human "common sense") is an almost factual matter. Of course it depends on the quality of the machine, but today machines are already approaching the capabilities required to rule halakhah. That can be tested by experiment: let the machine rule in various cases and have the results evaluated by poskim. For example, one can test it with the rabbinic ordination exams (I suspect AI could do reasonably well on this type of exam; as far as I can tell, they don't test for creative, original thinking), and if it passes, it can be allowed to rule. A human decisor who passed all the tests can also err. We must beware of the phobias described above.
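For illustration only, here is a minimal sketch of the experiment just described. It assumes a hypothetical ask_model() wrapper around whatever system is under test; the exam questions come from existing ordination tests, and the grading column is left for the human poskim to fill in.

```python
import csv

def ask_model(question: str) -> str:
    """Hypothetical hook: send a halakhic question to the system under test and return its ruling."""
    raise NotImplementedError("connect this to the model being evaluated")

def run_ordination_exam(questions: list[str], out_path: str = "answers_for_review.csv") -> None:
    # Collect the model's answers in one file so that human poskim can grade them blind.
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["question", "model_answer", "posek_grade"])
        for q in questions:
            writer.writerow([q, ask_model(q), ""])  # grade column filled in by the reviewers
```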
If he intends to say it lacks the principled authority to rule halakhah, I completely disagree. Not because of my high regard for machines, nor because of low regard for human poskim. I claim there is no such thing as “authority to issue halakhic rulings.” Contemporary poskim—unlike a Sanhedrin, or perhaps the Talmud—rule only because they are experts in the matter. They possess no formal authority; their authority is substantive (see, for example, Column 393 on the difference between the two). There is no obligation to obey them, but there is an assumption that their rulings are likely correct and can be relied upon. A machine can also reach that state; then there is no impediment to allowing it to rule.
The last paragraph that cites Mishneh Halakhot clearly does not speak of fear of error. He speaks of a case where the computerized checker did not err but a person did. In other words, he is making a claim about a computer’s principled authority in these domains. To me this is an absurd statement. If the computer indeed hits the truth better than a human checker (and checking Torah scrolls is a classic task for a computer, far more than issuing halakhic rulings), why not let it do so and rely on it? Why prefer an erring human over a correct computer? Where does this bizarre idea come from? It looks like one of the phobias described above.
As long as halakhic ruling is an attempt to analyze a problem and arrive at the correct halakhic answer, this is certainly a task that can be entrusted to a machine. When we reach a situation where rulings receive the status of binding authority (e.g., creating a Sanhedrin from thinking machines), then perhaps there is room for such a discussion.
Another source that deals with AI (= artificial intelligence) rulings is Rabbi Yuval Cherlow’s lecture at the most recent Torah and Science conference at Bar-Ilan University. As is his way, he presented the subject in an orderly, systematic manner and analyzed its various aspects well. But I recall that, at the time, when I listened to the lecture, I disagreed with him on every single point.
At the outset he set aside the question of software quality—and rightly so. As we’ve seen, that’s mainly a matter of time; thus what is more interesting is to discuss rulings by a system that does not err—that is, one operating at the “best possible” level. If the software errs, that’s not an interesting discussion: clearly one shouldn’t rely on it—not because it’s software, but because it isn’t a “posek.” Exactly as we would not rely on a human being who is not versed in halakhah, even if he is flesh and blood and a kosher Jew, because he is not a decisor.
He divided his main discussion into three aspects:
- The institutional question. Halakhah does not permit one who is not authorized as a judge/decisor to issue rulings. It also disqualifies gentiles, women, and others. This is a formal, institutional disqualification, and AI would seemingly not fall into categories that can be appointed as a decisor or judge.
But as I explained, this argument is relevant only to bodies that possess formal authority (a Sanhedrin). All other individuals or institutions operate by virtue of knowledge, not by virtue of authority. The question of knowledge is factual, and with time it is entirely plausible that AI will arrive at correct rulings—in other words, de facto it will have the knowledge. Even the mara d’atra, who operates by the community’s authorization and possesses formal authority, has that authority by consent of the community—which could also appoint a shepherd of cattle.
- AI’s rulings are algorithmic: derivations of conclusions from halakhic principles and circumstances. But a flesh-and-blood decisor is supposed to rule differently for each person who comes before him. AI will not necessarily know how to do that.
To this I say two things. First, it is entirely possible that in the future AI will know how to do that as well. Second, I oppose this approach to ruling. I have often cited here the words of my friend Nadav Shrener, in his article “The Jewish Ark of Lies,” in which he argues against this approach. In the third book of the trilogy I explained it as follows: a decisor should be a halakhic expert and present the questioner with the halakhic options and the “prices” of each. The questioner is the one who must choose among them and decide which path to take. Therefore, the dependence of the ruling on the individual is not the decisor’s business but the individual’s. The person can, of course, consult a decisor at the stage of choosing his alternative; that might be harder to do with AI. But the decision itself is his, and he should turn to whomever he feels will help him. If he wishes to turn to AI—good for him. If not—let him turn to a human decisor. Thus here, too, there is no basis for laying down a prohibition.[3] See more in Column 275.
Later in his remarks, Rabbi Cherlow offers a slightly different claim. He mentions Rabbi Moshe Feinstein’s ruling permitting a guide dog to enter a synagogue. His contention is that this ruling starts “from the end”: it was clear to him that, for human reasons, one must permit it—and only afterward did he seek halakhic arguments for that permissive ruling. It is reported in the name of Rabbi Joseph B. Soloveitchik that every halakhic ruling is actually of this sort (a bit of an overstatement, but there is some truth in it; see also my series on Modern Orthodoxy Columns 475–480, and in particular the difference between it and Reform in Column 478). Note that this is a different claim, for here we are not speaking of different rulings for different people but of taking circumstances into account and not only deriving conclusions from abstract halakhic principles. To this I say I’m not sure a machine could not also make such considerations.[4] But regardless, this problem is still quite similar to the former, for in the end it is about considering the individual’s situation and not just the abstract principles of halakhah. Therefore, here too I say it doesn’t matter much: the AI will present all permissive and prohibitive options and the “prices” of each, and if the situation justifies it, the person will choose the lenient option. Whether AI can make such considerations—I don’t know. But assuming the questioner is the one who should make them, not the decisor, there is again no obstacle to handing the ruling over to AI.
- AI will not be able to give the questioner human attention. He argues that a halakhic question usually comes to create a relationship with the rabbi, and after answering the question the rabbi should communicate with the questioner on other human levels. Contact with the rabbi is far more than halakhic ruling. AI will not be able to do that.
To my mind, this is a very weak argument. At most, it says that such a questioner would do well to turn to a human rabbi rather than to AI—but there is no principled limitation on rulings by AI. If a person wants a rabbi to consult with, let him go to a rabbi. But a questioner who submits a halakhic question to AI likely doesn’t need that. Is there any impediment to receiving a halakhic answer from AI? I see no argument here that would justify one.
He concludes that a rabbi can certainly use AI; on that there is no question. It is an auxiliary tool that does not participate in the decision, only an aid to the decisor’s thinking.
What Does the AI Itself Say?
As is customary in the field, I thought it would be amusing to ask ChatGPT itself what it thinks of using AI for halakhic rulings (I’ll leave it to you to ask what it thinks of those who ask what it thinks about halakhic rulings by AI, and so on ad infinitum). This is certainly no worse than asking a human decisor what he thinks of AI rulings, since both have a stake in the matter. I must preface that the software is much less advanced in Hebrew than in English, so I did this in English. Here is the response I received from “His Honor the Rabbi”:
Using AI for Jewish Halakhah (Jewish law) decisions is an emerging and potentially transformative application of artificial intelligence technology. Halakhah is a complex and intricate system of Jewish law that governs various aspects of life, including ethics, rituals, and interpersonal relationships. Decisions in Halakhah often require a deep understanding of the legal texts, traditions, and the ability to apply them to real-life situations. AI can play a significant role in this process in several ways:
[The response continued with seven numbered suggestions, omitted here.] It is important to note that while AI can be a valuable tool in Halakhic decision-making, it should not replace the role of rabbis, scholars, and community leaders who have received extensive training and possess a deep understanding of the nuances of Halakhah. AI should be viewed as a complementary resource to assist them in their decision-making processes.
Furthermore, the development and implementation of AI for Halakhah must involve careful consideration of ethical, cultural, and religious sensitivities to ensure that it aligns with the values and principles of the Jewish community. As AI technology continues to advance, its role in Halakhic decision-making is likely to evolve and become an increasingly valuable resource for Jewish scholars and communities seeking to navigate the complexities of Jewish law.
This text is fairly generic, and you can find similar, quite predictable answers to questions of this kind. After describing the possible ways to use AI, in its last two paragraphs it reaches a conclusion, which I’ll translate here:
It is important to note that while artificial intelligence can be a valuable tool in halakhic decision-making, it should not replace the role of rabbis, Torah scholars, and community leaders who have received comprehensive training and possess a deep understanding of the nuances of halakhah. AI should be viewed as a complementary resource to assist them in their decision-making processes.
Furthermore, the development and implementation of AI for halakhah must involve careful consideration of ethical, cultural, and religious sensitivities to ensure that it aligns with the values and principles of the Jewish community. As AI technology continues to advance, its role in halakhic decision-making may evolve and become an increasingly valuable resource for Jewish scholars and communities seeking to navigate the complexities of Jewish law.
I must say its conclusion is quite similar to Rabbi Cherlow's. Beyond the banal preface about various ways to use AI, it ends with the standard warnings that conclude every one of its treatments of AI usage. Essentially, it says AI can be the decisor's chavruta, offering a ruling and arguments, but the decisor should decide. You won't find in it anything novel beyond a predictable, generic summary of the topic. Such a text is of course an impressive achievement for AI, but for now it's far from replacing the answer of a creative human being. I assume in the not-too-distant future that too will arrive. The question is whether it is permitted or appropriate to use it, once we have verified that it can indeed fulfill all the functions I described. If we shake off the phobias and manage to discuss the matter on its merits, I see no impediment at all.
[1] In the current situation, if there are no people producing content, the AI system will lack sources to feed and train on. There is some problem with total reliance on the software. However, as far as I understand, content ingestion itself can gradually shift to these systems as well.
[2] Rabbi Malka sent me his comprehensive study, and we had a few discussions about it. In the end it was apparently not published, and I don’t know why.
[3] Rabbi Cherlow noted that one could dispute the view that rulings should depend on the individual on the grounds that halakhah requires transparency—i.e., that halakhah ought to be objective. I should stress that my claim is different: I am not arguing from transparency but from the fact that there is no single bottom line in halakhah, and the decisor’s task is to present all the alternatives.
[4] In this context, Rabbi Cherlow mentioned the question of whether a machine could tell jokes. I definitely think that at least in the future it could (to some degree, this is already true today).
It's a shame that the rabbi doesn't hang around Bnei Brak. There are already signs on the streets that this software is even worse than the Internet. And I read that the Rebbe of Babov from abroad said that this is the sin of the Tree of Knowledge, and the words are wonderful.
Fascinating. The current development in AI is the reasoning revolution, so nothing prevents a future in which AI produces, in seconds, analytical structures more sophisticated than those of R' Chaim or the Shaarei Yosher. This really raises the question of what the relationship will then be between a "Sinai" (encyclopedic knowledge) and an "uprooter of mountains" (analytic sharpness). What emphases will matter in Torah study in an era in which AI supplies better lomdish resolutions than humans? If the Internet revolution was a kind of "Sinai," and therefore the "uprooter of mountains" gained in value, and the AI revolution is an "uprooter of mountains," then what will Torah study look like in such an era? Will the value of the "uprooter of mountains" diminish, and will a different educational emphasis be adopted?
Perhaps a more existential emphasis? Is the religious experience the one that will have the most significant value?
I think the value of a "Sinai" has been dropping dramatically for a long time; it doesn't depend on AI. As for the relationship between AI and humans, that is planned to be the topic of the next column.
Yes, it is clear that the value of a "Sinai" has decreased; the innovation of AI is a decrease in the value of the "uprooter of mountains," and that was my point. If the AI scholar is a thousand times more talented than all the scholars in Bnei Brak, then what kind of study will retain significant value? That the value of a "Sinai" has decreased since the Internet revolution I completely agree.
I don't think that's a question. The value of learning is that I am the one learning, that the Torah is acquired by me, and that I engage in it. That doesn't change if there's an AI that does it better. Just as the existence of a scholarly genius doesn't make my learning or yours redundant.
I think he meant to ask whether the machine would enjoy the joke. Assuming that everything is in the brain and that a spiritual dimension is not necessary to achieve consciousness, you still need to develop a machine with neural networks that create such a thing. This is a completely obscure field, probably infinitely complex, and most importantly, there is no real benefit to developing a robot with consciousness and no one is really doing it anyway. It is too difficult to be a mere hobby.
The right question: Can a machine laugh at jokes?
Of course it can. If it knows how to tell a joke, it will also know how to laugh at one (technically).
What will employment look like in an era when artificial intelligence is smarter than us? Will we even have to study math and English as core subjects?
I don't know. It depends of course on where it ends up.
Direct the question to artificial intelligence.
Have a blessed week.
If His Honor thinks that, substantively, a computer can rule, then so can a Gentile, a minor, and so on. Would the Rabbi argue that their ruling is valid?
Absolutely. There is no such thing as a valid ruling. There are those who know and those who do not. Regarding a Gentile, you can suspect that even if he knows, he will not tell the truth. If you have no concern and he is similar in your eyes to the angel of the Lord, receive Torah from his mouth.
Does the Rabbi not accept Chazal's statement that "wisdom among the nations, believe it; Torah among the nations, do not believe it"?
It depends on what he means. In practice, there is no Torah among the Gentiles because they do not study it. But in my opinion, in principle, a Gentile can be a scholar. But that is not important either, because even if I assume that he cannot be, when I check and see that he is - I will accept it.
So much text, did the sign say anything?
Now ask the chat your question again. Now the answer will also take into account the article you wrote.
Hello
You mentioned the movie "Her", which reminded me of a series I saw. The series is called "Westworld": a fictional amusement park populated by humanoid robot "hosts" (so lifelike that you can't even tell they're robots). The park is for high-income earners, who can fulfill their fantasies there. They can do whatever they want, for good or bad. They can also murder, rape, etc., all while the robots are programmed in such a way that they cannot harm the human guests.
The plot revolves around two friends who go there and one of them allows himself to go wild (kills, abuses, rapes) while the other friend is unable to harm the hosts, even though they are all very sophisticated robots.
On the one hand, I have the feeling that it's immoral. On the other hand, there is no harm to humans at all (even according to the categorical imperative, I don't find a problem here). It's like hitting a wall or any other inanimate object (it seems that the only difference is what goes on inside the person himself when he does the deed. If I hit a wall, it won't make any impression on me, but if I hit a "person," who is actually a machine, it will probably make negative impressions inside me and turn me into a bad person).
It also reminded me of your distinction between "moral values" and "human values." I thought about associating it with human values, but still, something here seems immoral to me.
I would love to hear what you think.
I don't think that how one treats them has any moral significance. Perhaps only as a means of not numbing the soul toward such treatment. Like the explanation of Moses' gratitude toward the river (that Aaron was the one ordered to strike it, because Moses owed it gratitude). The perfection of a machine does not take it out of the category of a machine. That is the subject of the next column.
Ostensibly, this is a continuation of violent computer games with increasingly sophisticated graphics (and of the worldwide debate about them, though that debate mostly turns on whether they influence behavior and in which direction, not on whether the activity is appropriate in itself).
Regarding analogies, I would like to point out that GPT specializes in reductions of algorithmic (and other) problems. Until now, when a seemingly general problem came up, you had to make an effort to strip it down, try to find the appropriate concept on Google, ask a friend, and so on; today I describe the problem to it in detail and it replies that this is actually a known, specific problem (such as maximal independent set). Another example: when you come to examine a patent, you have to check that the idea is indeed novel. The trouble is that the patent owners sometimes encountered the problem only within their own narrow domain, so they search only in that context and find nothing, whereas if you frame the problem more generally you can see that the idea already exists; or, when you recognize that the problem is general and well known and yet no one has proposed this solution, you must examine the matter with greater suspicion (and if you do file the patent, present it in its full force, on the general, well-known problem). It does this really well. You probably consider these (rightly) to be uncreative analogies; I just wanted to point out that this ability to take an individual case and independently make the upward move to the general is one of the important advantages of these tools over Google.
We can distinguish between analytic analogies (such as the reduction of a problem) and synthetic analogies (in which the analogy itself is a substantive claim, such as what you cited from R. Moshe Feinstein about Siamese twins, and there are many like it).
Indeed. I'll touch on this a little more in the next column.
I understood that Rabbi Yaakov Ariel meant to say that halakha is not an exact science (as the Ramban writes in the introduction to the Wars of God), but is often determined by human norms. In other words, what determines halakha is the thinking of man and not the accuracy of a computer. Just as we breathe despite the presence of bacteria in the air according to the rule ‘Torah was not given to the ministering angels’, even though perhaps in purely halakhic inference it was forbidden. This means that ’the mind of man’ determines, not that there is ‘authority’.
A sevara is considered "right" because a person judges that it seems more correct, fits the sources better, and so on, not by objective criteria. Therefore, if a computer says that one sevara is correct and a person says the opposite, we will listen to the person (although of course we will read what the computer wrote, because it may point us to a source we did not consider).
But the software can also do inexact science. The question is whether its answers are correct or not. Whether the software can stand up against a posek is another discussion, and even there it depends on how highly I regard the software; if highly, then I might accept its argument. But this whole discussion is irrelevant to me, since I rely neither on the posek nor on the software. The arguments speak, not those who make them.
For a person who is himself competent to rule and has a position of his own, I agree that it doesn't matter.
But for a person who doesn't know how to rule for himself, how can he rule according to a computer? If it's not an exact science, then on what basis does one decide? According to human reasoning, and a computer's is not human reasoning.
And if one's own judgment goes against the computer, the argument is all the stronger.
Or maybe the machine wrote all this text?
We find that "one who wishes to act according to Beit Shammai may do so, and according to Beit Hillel may do so," and this even in Torah matters.
It seems, then, that one may rely on a rabbi. Isn't that so?
For the purposes of that law, artificial intelligence counts as nothing more than a "collection of data."
Someone who doesn't know how to rule for himself chooses a rabbi, of course. But this rabbi is nothing more than a collection of information that thinks in a way that seems reasonable and fitting to me.
Oops.
I accidentally posted a new question.
Why am I not receiving responses/answers to my questions in my email?
I passed it on to Oren to check.
"One who acts according to Beit Shammai" applies also in Torah matters, where it is known that in a case of doubt one must be strict; and yet one is permitted to act according to a single opinion even when it is known that the matter is disputed, and one who acts according to the stringencies of both is considered a fool.
We see that there is a halakhic status here beyond a mere collection of information.
In a case of doubt one must be strict; it is not the case that wherever there is a dispute one must be strict. The permission to act according to one of the parties in a dispute has nothing to do with the authority of the parties. The same holds for AI, as long as it is a competent decisor. It is permissible (though not obligatory) to rely on someone who knows the material. Such a person is entirely a collection of information.
One who acts according to the stringencies of both is a fool only when the two contradict each other.
I didn't understand
When there is a dispute between Beit Shammai and Beit Hillel in a matter of Torah law, the baraita says, "One who wishes to act according to Beit Shammai may do so; according to Beit Hillel may do so." That is, you can decide that you listen to a particular rabbi.
We also note that the townspeople of R. Yossi the Galilean and of R. Eliezer acted according to them (fowl in milk, and implements of circumcision on Shabbat).
Do we not see here a clear law of a rabbi's authority, beyond a mere "collection of data"?
No. You can decide that you act according to a certain halachic opinion (including AI's), and not necessarily listen to a particular rabbi. And that's even when you don't have your own position.
When the people of a city act according to the opinion of the city's rabbi, that has nothing to do with the subject of our discussion. The rabbi has authority by virtue of the public's acceptance of him.
Under no circumstances would I accept an artificial intelligence's "halachic ruling."
Even if it were to arrive at correct conclusions.
Why?
For a simple reason.
Halakhah is a matter between a person and the One who commands him.
The posek is himself obligated, and is authorized to rule for those who are obligated as he is.
Artificial intelligence is not obligated in anything.
And how would it tell me what to do?
As above, suppose there is a gentile who is an enormous "talmid chacham,"
who studied for the sake of intellectual study, and not with the intention of converting, or because he particularly likes Judaism (in such cases I would consider it).
This gentile knows how to rule at a high level, should I act on his ruling?
The answer is unequivocally negative.
Why?
Because the gentile is not obligated, and hence lacks the subtlety of mind to hit upon the truth.
And even if his ruling is the same as that of a rabbi, even then I will wait for the rabbi's ruling and not act according to the gentile's ruling.
So artificial intelligence???
You contradict yourself
At the beginning of your comment you wrote "Even if it were to arrive at correct conclusions," meaning you have a reason not to accept the ruling that is unrelated to its truth.
And here, with "hence lacks the subtlety of mind to hit upon the truth," you return to the question of the truth of the ruling.
In short, nonsense without a real argument
Without regulation, the danger in AI will materialize, in my opinion. Sooner or later an entity with the ability to self-assemble will be created, that is, an autonomous ability to upgrade and replicate itself similar to a genetic evolutionary ability, and this can happen not only at the software level, as in a genetic algorithm, but also at the physical level, perhaps through nanotechnology. It is possible that this same evolutionary drive to survive is what separates a machine from a living, conscious being, only in a far more monstrous and nihilistic version, since it will transcend the limitations of the human brain. And if we return to halakhic ruling: as far as halakhic technocracy is concerned, that is, questions that in my opinion have no essential meaning and are complete nonsense, such as an egg laid on Yom Tov, or the question of questions, whether it is permissible to lie to the Waze application, I have no doubt that AI will be wonderful. Beyond that, AI may make it possible to transfer the decision-making process from the rabbi to the questioner; that is, he will receive extensive information and many options that will allow him to make a decision and become his own decisor, according to his personality and needs, similar to personalized medicine tailored to the individual genome and proteome. Perhaps paradoxically, this will actually make the personal learning process meaningful, enable the integration of extra-halakhic information, culture, and morality, increase connectivity between people, and shift the locus of control to the individual or the family. Maybe so, maybe not.
Rabbi,
In my understanding, what artificial intelligence of the type ChatGPT (and perhaps all types) is based on is finding correlations. My question is whether, in the Rabbi's opinion, all human thinking boils down to this? If so, it sounds likely that artificial intelligence will replace a huge part of the knowledge and thinking professions. If not, then the question is open in my opinion.
The Rabbi often mentions intuition as a tool that stands between sense (knowledge of reality) and thought (analysis and understanding of reality). In the Rabbi's opinion, is such intuition an essential tool for reaching correct conclusions in halakhic rulings? If so, does the Rabbi believe that artificial intelligence could imitate it?
I myself am debating these things. It should be noted that I have a doctorate in physics (and am a graduate of Yeruham…), and am neither a philosopher nor a computer scientist, so it is certainly possible that my description of artificial intelligence is not accurate…
This is not finding correlations, at least not directly. It is a complex structure of a network that is trained on a lot of data. The bottom line is that it may be said to find correlations, but that is not its direct action.
The next column will touch on this topic. Of course, I don’t have an answer and I don’t think anyone does.
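For readers who want a concrete picture of "a complex structure of a network that is trained on a lot of data" (as opposed to a table of precomputed correlations), here is a toy sketch in plain numpy: a two-layer network fitted to synthetic examples by gradient descent. It is nothing like ChatGPT's actual scale or architecture; it only illustrates the kind of object being described.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 examples, 3 features
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float).reshape(-1, 1)

W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    p = sigmoid(h @ W2 + b2)                   # predicted probability
    # Backpropagate the cross-entropy loss through both layers.
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```

The learned weights end up encoding whatever regularities the data contains, which is why, after the fact, one can describe the result as "finding correlations" even though no correlation is computed directly.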
Perhaps we can also ask - does our thinking only involve prediction based on the knowledge we possess, or is there more to it than that, and the ability to create "something out of nothing" in a certain sense, and whether or not this exists in machine learning?
This is essentially the previous question. As I said, I don't have an answer to this. I suppose there is something in us beyond the application of existing information. As I understand it, intuition is a cognitive ability. I would be very surprised if there were a program that could do everything a person does. But even if there were, I'm not sure I'm wrong in my perception. It is possible that the structure of our brain is responsible for creativity, and if we manage to create an artificial brain, then it would also have these skills.
Will the answer be clearer in the next column? (How can software produce results that require cognition and not just (creative) thinking, what is the role of creativity here?).
I don't plan to get into that particular discussion, if only because I don't have a good way to conduct it. In general, it's fair to say that the ideational view looks at events and facts and sees through them the general ideas. If there's some way to derive the latter from the former (to choose the right abduction and induction), then maybe we can build a machine that will do it.
What he writes here he writes for his own enjoyment, about whatever is on his mind. Don't like it? Don't read it. And as for teaching at the academy, take that complaint to all the academy's lecturers.
And what is the difference between a teacher and an academic? (Except that a teacher is closer to a babysitter)
You are definitely Haredi.
The chatbot's less-than-reasoned conclusion may be influenced by the many phobia-laden texts it was trained on (or perhaps it is itself an exercise in shedding phobias). Still, this fascinating column lacks any reference to the sociological implications of elevating human adjudicative authority, of officers of thousands and officers of hundreds, and perhaps even more of officers of fifties. Something that must have stood before the eyes of Chazal, as with "things that are permitted, but others have customarily treated as forbidden," etc.
I read an article by Rabbi Michael Avraham in which he discusses the question of artificial intelligence as a halakhic decisor. He quotes the words of Rabbi Yaakov Ariel and argues with them, and these are his words:
I recently saw a short article in which Rabbi Yaakov Ariel addresses halakhic rulings by artificial intelligence:
"Gentlemen, first of all, the practical halakhah: artificial intelligence cannot issue halakhic rulings; halakhah is ruled by common sense. A computer's reason is not a human being's reason; even if it is very developed and sophisticated, it is not human thought, and halakhah is ruled only by human thought. If the computer gives you a sevara, and you propose the opposite sevara, yours may be more correct. Don't rely on it. The discussion is new, and I haven't yet seen many results of this matter."
"The computer is not authorized to rule halakhah, whether something is kosher or not. Rabbi Menashe Klein, author of Mishneh Halakhot, says it is preferable to have a Torah scroll checked by a person who erred than by a computer that did not err," Rabbi Ariel concluded.
It is not clear whether he intends to claim that there is a concern that the ruling will be wrong, or to state that in principle a computer has no authority to rule on halakhah. The concern that the ruling is incorrect (i.e. does not conform to human "common sense") is almost a matter of fact. This of course depends on the quality of the machine, but today machines are already approaching the qualities needed to rule on halakhah. This can be tested by experiment: let a machine rule on halakhah in various cases and have the results checked by poskim. For example, it can be tested with rabbinical ordination exams (I think machines will be able to do quite well on this type of test; as far as I can tell, they don't test creative and original thinking there), and if it passes the tests, then it can be given the authority to rule. Even a human posek who has passed all the tests can make a mistake. One must be careful here of the phobias I described above.
If he means to say that it has no authority in principle to rule on halakha, I completely disagree with that. Not because I have much respect for machines, nor because I have little respect for human poskim. I argue that there is no such thing as "authority to rule on halakha." The poskim of our time, unlike the Sanhedrin or perhaps the Talmud, rule only because they are experts in the subject. They have no formal authority; their authority is substantive (see, for example, column 393 on the difference between the two). There is no obligation to obey them, but there is an assumption that their ruling is probably correct and can be trusted. A machine can also reach such a situation, and then there is no reason not to let it rule.
The last paragraph, which mentions the words of the Mishneh Halakhot, clearly does not speak of a fear of error. After all, it speaks of a situation in which the computerized examiner was not mistaken and a person was mistaken. In other words, there is an argument there regarding the principled authority of a computer in these areas. In my opinion, this is a truly absurd statement. If a computer really is better at pinpointing the truth than a human examiner (and checking Torah scrolls is a classic task for a computer, much more so than ruling on halakhah), why not let it do so and trust it? Why is a mistaken person better than a correct computer? Where does this bizarre idea come from? It seems like one of the phobias I described above.
As long as the ruling of the law is an attempt to analyze a problem and arrive at the correct halakhic answer, this is a task that can certainly be given to a machine. When we are in a situation where rulings will receive the status of binding authority (for example, creating a Sanhedrin of thinking machines), there may be room for such a discussion.
I believe that Rabbi Avraham is wrong. The situation in which a questioner comes to a rabbi, presents his question, and receives from him an answer that permits or prohibits certainly belongs to the realm of formal authority, and therefore a machine cannot be appointed to this role.
I will explain and bring a proof. The Gemara in Tractate Avodah Zarah (7a) states:
Our Rabbis taught: One who asked a sage and he declared [the matter] impure may not ask another sage who will declare it pure; [one who asked] a sage and he prohibited may not ask another sage who will permit. If there were two, one declaring impure and one declaring pure, one prohibiting and one permitting: if one of them was greater than his fellow in wisdom and in number, follow him; if not, follow the stricter one. R. Yehoshua ben Korcha says: In a matter of Torah law follow the stricter one; in a matter of the Scribes follow the more lenient one. Rav Yosef said…
Halakhic dispute is an integral part of the transmission of the Torah. Divided and differing opinions have always existed side by side, with each scholar in Israel acting according to his own position so long as the matter was not debated in the Great Court and decided by a majority. It is clear that a scholar who, by virtue of his own study and understanding or the tradition he received from his teachers, reached a lenient conclusion is not expected to change his opinion and his conduct because of his stricter colleague, even if the latter is greater than him in wisdom and stature.
The baraita refers to a person who is not a scholar, who has no opinion at all on the matter in question, whether due to lack of knowledge or lack of understanding, and who asks a sage, or in our case, sages. When there is a dispute, he cannot rely on the one who permits, because he is aware that there is a prohibiting opinion. For such a case he is given detailed guidance, itself subject to dispute; but according to all opinions, in a case of a Torah prohibition he must be strict, while in a case of a rabbinical prohibition the halakhah is that he may be lenient.
How is it, then, that we do not rule strictly in every halakhic dispute, for example in the laws of borer (selecting) on Shabbat? How is it that we accept lenient opinions as halakhah?
The answer is supposed to be based on emphasizing the difference between a questioner asking two scholars, and a questioner asking one scholar. The idea is that the Giver of the Torah also knew that not everyone would be able to interpret every law independently in a way that would allow him to ignore other and different opinions, and therefore there is the mechanism of asking the sage, when the latter's teaching becomes, as it were, the independent opinion of the questioner, to the point of allowing reliance on this opinion even to the extent of contradicting other stricter opinions.
When the people of a certain region accepted the rulings of the Rema, he became their rabbi. Therefore, even though most people have no ability to decide his disputes with the Mechaber, they are still permitted to rely leniently on his opinion even in Torah law.
In the words of Chazal we find the concept of "mara d'atra," which literally means "master of the place." Indeed, in the locale of R. Eliezer they would cut wood to make charcoal in order to forge iron on Shabbat for the sake of circumcision (Shabbat 130a), despite the stricter ruling of his colleagues, because in his locale all the townspeople are subject to him and to his opinion, and they are entitled to rely on him without fear of other opinions, even regarding a severe Torah prohibition.
The one who asks the sage accepts this sage as his rabbi, at least regarding this question, and therefore he can trust his lenient answer when he does not have to fear that there are other opinions that prohibit it. This is in contrast to the one who asks two sages, whose answers are equally valid for him, and therefore a way must be found to decide.
Of course, in order for us to give validity to the sage's instruction, it must be an independent and coherent opinion. If the teacher is not bound by the commandments and does not have the halachic competence to instruct even himself, the questioner cannot trust such instruction.
Artificial intelligence can lay all the data before the questioner. If the questioner is very learned and can decide for himself, out of the tangle of approaches and opinions, which is correct for him, then it serves as a large-scale encyclopedia that spreads all the information before the questioner, and he is the one who decides for himself. In such a case it is clear that Rabbi Avraham is right, and there is no reason not to use its services, assuming its reliability problems and the like are overcome in the future.
But in the common case, where the questioner is not learned and has not the slightest understanding of the issues related to his question, presenting all the opinions to him will of course not help. And even if the machine imitates the considerations of a teacher in Israel and issues him a clear permissive ruling based on the information before it, the questioner will not be able to rely on that ruling against other, prohibiting opinions, because there is no hora'ah (ruling) here. In order for the questioner to be able to act leniently where others prohibit, the teacher's ruling must be valid for the teacher himself as well; and once this ruling has the status of a hora'ah, the questioner can adopt it as if it were his own independent opinion.
I think that is what Rabbi Ariel was referring to. A halakhic ruling is made by the opinion of a sage who holds like one side of the dispute, to the point of standing behind his position as the final word. Therefore his questioners, who are like his students, can rely on this ruling. Since here there is no opinion of a sage, but rather an algorithm that uses the same considerations as the deciding sage and therefore chooses to rule leniently against the opinion of those who prohibit, the user who receives the information is not entitled to rely on the lenient ruling when there are stricter opinions.
The ruling cited in the name of Rabbi Menashe Klein is indeed puzzling in its current form. Judging by the responsa Mishneh Halakhot, he strongly opposed the automation of the proofreading of STaM (Torah scrolls, tefillin, and mezuzot) because of various concerns, some understandable and some less so. But reality proves that even today we do not rely only on computerized proofreading and still need human proofreading. This can be explained by the software's mistakes, but that reasoning may disappear as the field develops.
But in truth, our reliance on human proofreading to determine that a scroll is kosher is rooted in the fact that a person can rely on his own independent examination. Once he determines that, as far as he is concerned, the scroll is kosher, even his fellow can rely on that determination without a renewed examination; this is the rule that one witness is believed in matters of prohibitions. As for a gentile, for whom the scroll's kosher status is of no consequence, even if we set aside the problem of reliability we cannot rely on his determination regarding the scroll's kosher status, because we need a halakhic determination and not just data, and such a determination can only be issued by a kosher Jew. Indeed, this determination, even if it is erroneous, is valid (as long as the error was not discovered), more than a computerized determination, which has no validity even if it happens to be correct.
But I would not rush to adopt Rabbi Menashe's ruling. After all, the only reason we need proofreading is the doubt we are in, namely whether the scribe erred, as humans do. If the computer develops to the point of being error-free, to the point that we can assume absolute reliability, we can resolve the doubt by means of the computer's proofreading, and then we will not need the human halakhic determination.
Hello.
Completely absurd. Everything he brings up there is irrelevant to his own discussion.
The discussion about "one who asked a sage and he prohibited" concerns the honor of the sage who issued the ruling, and I assume, like him, that there is no obligation to honor software. So what? It only means that if it rules for me, I may turn to a sage or to another piece of software and ask again. Does it follow that it cannot rule halakhah, or that it cannot be relied upon? What is the connection?
His argument is nothing more than assuming the desired conclusion: he assumes that a ruling by artificial intelligence cannot stand against another, stricter opinion. But where does that come from? Where did he get this thesis? Not from the source he cited, nor from the discussion he conducted following it. What's more, artificial intelligence usually rules for the questioner in accordance with the opinion of some lenient human sage; after all, there are many sages and many opinions, and if it is trained correctly it will answer in accordance with an accepted halakhic opinion. In short, I see no relevant argument here.
https://news.walla.co.il/item/3655511