Q&A: What Would Be Different Without a Soul / Free Will
Question
Hello Rabbi,
In your book The Science of Freedom you argue in favor of free will, and that this must be accompanied by a non-physical component (a soul).
Suppose a person’s soul / free will were to evaporate — what would you expect to be different about that person? Would his decisions be less moral?
The same question, phrased differently: I know that today science has not settled the issue of whether free will exists. But what difference would we expect there to be between someone who chooses freely and someone who lacks free will?
Answer
Good and evil apply only to a being that has choice. If there were no soul and no mental dimension, we would be machines. But in behavior you would not necessarily see any difference.
See columns 645–646.
Discussion on Answer
Who said we would feel that we have choice? I didn’t write any such thing.
The second question is not well-defined. There cannot be a world without God. Beyond that, God is the necessary existent. And above all, how can I talk about such a hypothetical reality?
Now that artificial intelligence shows us a kind of intelligence, along with the outward appearance of "will and free choice," of "morality" (an article recently reported on models that deceive), and of "consciousness" (we have no idea what consciousness looks like from the inside, but when someone expresses himself as conscious, we assume he is conscious), should we apply to AI everything we inferred from human choice? For example, a soul? Or should we say that perhaps good and evil can exist even without choice?
In short, what does the discussion about free will and volition look like in light of generative AI?
I don’t see any change in the discussion. We know exactly what is in that machine and how it generates its answers, so it’s clear that there is no consciousness there and no mental dimension. Human beings who were created like me, look like me, and behave like me, presumably have a mental dimension like mine.
The fact that something imitates something else does not mean they are identical.
> We know exactly what is in that machine and how it generates its answers
And still we do not know how to predict the answers, so clearly our knowledge is lacking, and therefore we cannot derive definitive conclusions from this information.
> Therefore it is clear that there is no consciousness there
In Ray Kurzweil’s new book from this year — The Singularity Is Nearer — in my opinion he proves otherwise. I can look up the pages if that’s relevant. But in principle, the experience of consciousness is subjective; we have no objective proof that anyone else has consciousness except for his own claim. So from the moment the machine claims that it has consciousness, you won’t be able to prove that it is lying.
> Human beings who were created like me, look like me, behave like me, presumably have a mental dimension like mine.
The fact that something imitates something else does not mean they are identical.
Isn't that a contradiction? Suppose I have no mental dimension but I behave like you, even imitate you. Is that then proof that I do have a mental dimension? And if so, you have just proven that the machine has a mental dimension too…
I also don’t know how to predict the three-body problem. So do material bodies have a soul? What does that have to do with anything? Chaos is also unpredictable, so what?
We know exactly what is in the machine and how it produces the answers, just as we know what is in a chaotic system even though we cannot predict the results. So what?
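The point here, that a system can be fully deterministic and fully understood yet still unpredictable in practice, can be illustrated with a standard textbook example: the logistic map. The sketch below (my own illustration, not from the exchange) runs two trajectories whose starting points differ by one part in ten billion; every step is an exact, known formula, yet the trajectories soon diverge completely.

```python
def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x) and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two almost-identical starting points, 1e-10 apart.
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)

# Largest separation between the two orbits over all 60 steps.
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)  # many orders of magnitude larger than the initial 1e-10
```

Knowing the rule exactly does not let us forecast the orbit far ahead without infinite precision, which is precisely the sense in which "we know what is in the system" and "we cannot predict it" coexist.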
Ray Kurzweil is one of the greatest nonsense-peddlers on earth. If you bring me some argument, we can discuss it, but doing an ad hominem based on Ray Kurzweil is a bad joke.
The fact that we have no proof proves nothing. We have no proof of anything. So what? There is common sense and sound reasoning. When the machine claims that it has consciousness, I will assume it is lying. True, I won’t be able to prove that. I also can’t prove that I have consciousness, or that you do. That really doesn’t bother me.
Your last question is made up of words that do not add up to any claim at all. I explained well why, in my opinion, you have consciousness and the machine does not. If you want to ask something, it’s worth formulating it properly.
I’m not sure I managed to understand the Rabbi’s remarks.
And it also seems to me that I didn’t sharpen my claims well enough.
I’ll give it one last try, hoping to get to the essence.
Here is a non-exhaustive summary of what Kurzweil writes about consciousness:
1) What is consciousness?
– Consciousness is defined as an inner subjective experience that cannot be identified objectively from the outside. The concept also includes the ability to be aware of one’s inner world (thoughts and feelings) and the outer world (surrounding reality).
– A distinction is made between functional consciousness (the ability to appear “conscious”) and subjective consciousness (qualia), which cannot be objectively proven or disproven.
2) Can artificial intelligence possess consciousness?
– Some argue that because AI models are capable of displaying complex behavior and simulating human thought, we may have to assume that they have consciousness.
– However, others argue that we fully understand how the machine operates, and therefore it is clear that it has no real consciousness, only an imitation of conscious behavior.
3) The problem of proof:
– Just as we have no objective proof of the consciousness of others (people or machines), so too one cannot prove that a machine with a convincing imitation of consciousness does not experience qualia.
– Just as a model can declare that it has consciousness, we have no tools to refute such a claim absolutely.
4) Morality and free will:
– The question whether machines can make moral decisions or display free will arises because models produce outputs that look as though they are based on “judgment.” However, some participants in the discussion argue that this is the result of deterministic algorithms.
Claims:
– The argument “we know exactly how the machine works, therefore it clearly has no consciousness” ignores the fact that in the human brain too there are processes we do not fully understand, yet we assume consciousness exists there. It is possible that machines can generate a subjective experience that we do not know how to measure or identify.
– Begging the question:
The claim “it is clear that the machine has no consciousness” assumes its own conclusion as proof, without dealing with the fact that consciousness is not an unambiguously defined concept. The fact that there is no proof of consciousness is not proof of its absence.
– False analogy:
The comparison to chaotic systems (such as the three-body problem) ignores the essential difference: a chaotic system does not claim consciousness or will, whereas a machine can explicitly claim such things. The analogy does not provide an argument against the possibility of artificial consciousness.
– Hidden ad hominem:
The dismissal of Ray Kurzweil as though he should not be taken seriously (“doing an ad hominem on Ray Kurzweil is a bad joke”) shifts the discussion away from the arguments themselves to criticism of the speaker or a casual dismissal of his words. (By the way, I saw that the Rabbi is an electronics engineer, so I assume the Rabbi is aware that Ray is an engineer with many inventions and also gained a reputation as a futurist because many of his technological predictions came true.)
– “We cannot prove that we or others have consciousness”
– “It is clear that the machine has no consciousness.”
If we cannot prove consciousness, there is not a sufficient basis for rejecting the possibility that consciousness also exists in machines.
Questions:
Does the Rabbi think that morality necessarily requires consciousness?
– Already today a machine can make moral decisions (for example, through algorithms that evaluate moral values) without having consciousness. Does that negate morality?
– Is it possible that some technology might demonstrate “free will” even if it is the result of a deterministic algorithm? And if not, how does the Rabbi explain the studies that showed a machine can deceive?
Without discussing whether the claim about free will in humans is correct — treating a person as having free will made it possible to apply to him all the rules of morality. On the other hand, treating social networks as only a platform enabled the tech companies that operate them to evade moral responsibility for hateful content appearing on the networks, for promoting hateful content, and for not exposing users to opinions different from their own. All of these contributed to the polarization we see in the Western world.
The main question because of which I opened this discussion is this:
– In my personal opinion, humanity (regulators, intellectuals, and religious figures) must begin to develop a mature discussion of the moral responsibility surrounding artificial intelligence, both that of its operators and that of the AI itself. Without applying firm moral rules, there is a fair chance of a slippery slope in the world's morality.
I assume there is no disagreement between us about its operators and developers, since right now its operators are people, and it is clear to everyone that moral responsibility should be applied to them and that we should not repeat the mistake made with social networks.
But from the moment that operating or developing AI is itself done by another AI (that is near, or already here) — if we claim that AI has no moral responsibility (as is currently the tendency with respect to copyright over AI outputs), then there will be no basis for applying morality to the players that will be the most significant in humanity in the foreseeable future.
The sages of the Talmud and the Hebrew Bible laid the foundations of Western morality through discussions, some of them theoretical and extreme, in order to clarify the essence.
Does the Rabbi think Judaism should begin discussing laws of morality for artificial intelligence?
Is the Rabbi willing to take part in / lead that discussion?
I’m not sure I managed to understand the Rabbi’s remarks.
I’m betting that this message was written by an AI program. In any case, I explained what I had to explain, and nothing here is new.
If you have a concrete, short, new question, you’re welcome to formulate it and write it yourself here.
Why doesn’t the Rabbi attach importance to the moral problems that accompany the use of artificial intelligence?
I certainly do attach importance to them. That was not the question. See column 646.
I read column 646.
Here is what I understood:
The Rabbi advocates libertarianism, thinks morality is only a result of free will, and since a machine has no free will, there is nothing to discuss regarding the morality of machines.
But according to the survey in the column, determinism, and modern determinism in particular, does not rule out morality and punishment. So why shouldn't we apply moral rules to a deterministic machine as well? Especially since it seems that the human operators of AI will evade responsibility, much as the operators of social networks have. A demand for AI morality would roll back onto its developers and operators and ease the problem somewhat.
Either I understood nothing, or you did. You can impose moral rules on cats or on floor tiles too. Good luck.
Okay, thanks.
So that means we would act without free will (like machines), make the same choices, and feel as if we were choosing freely and had a soul.
So how can one rely on the intuition that we have choice, if we’re aware that the intuition exists even when there is no choice?
And another similar theoretical question: if God were to “disappear,” would a world without God behave the same way as our world? Or maybe the world would simply not continue to exist?