AI’s halachic ruling
Shouldn’t we reject a ruling issued by a machine, in light of the principle that “it is not in heaven”? As I understand it, that principle means the Torah was given to humans, and therefore their understanding is decisive even when it is not the objective truth (though people, to the best of their abilities, believe it is).
Especially in light of what the Maharal elaborated: a person must learn (as he ought) and rule according to his honest opinion, and it does not matter if he fails to hit the truth, because that is what his eyes see. From what I have read, Rabbi Mahra holds this view as well. In my opinion, this also rests on the perception that since the halakha was given to humans, it must be determined according to human understanding (and I do not mean the attempt to claim it should be manipulated to suit human needs, as some claim without even admitting it).
I’m not talking about asking questions on topics you’re unfamiliar with, which is more like a reference (mareh makom) to rulings that have already been handed down. I’m talking about rulings on situations that did not exist before, and especially on topics that have never been discussed (assuming the machine will also reason about such topics in the future).
The claim is that the machine rules as a person would rule; it imitates his judgment.
I don’t understand. If there is value in autonomy and I must rule according to my own human judgment, why would I set aside my opinion in favor of the machine’s judgment?
When I rely on someone else’s ruling, it is possible to argue with him. But if the royal road is to rule according to personal judgment, what makes a machine more authoritative than a human?
Even when ruling for others, would we count a machine’s opinion as one of the opinions of a court? Surely we rule according to the majority of a quorum consisting of humans, and presumably the Sanhedrin would work the same way.
Who told you to set aside your own judgment?