
What would be different without a soul/free choice?

Q&A Category: Torah and Science
asked 1 year ago

Hello Rabbi
In your book The Science of Freedom, you argue for free choice and that it must be accompanied by a non-physical component (soul).
Suppose the soul/free will were to evaporate from a person, what would you expect would be different about that person? Would their decisions be less moral?
Same question, in a different way. I know that science today does not decide whether there is free will. But what difference would we expect there to be between a free chooser and one without free will?




מיכי Staff answered 1 year ago
Good and evil apply only to those who have choice. If there were no soul and no mental dimension, we would be machines. But you would not necessarily see a difference in behavior. See columns 645–646.


ש replied 1 year ago

Okay, thanks.
That means we would act without free will (like machines), make the same choices, and feel as if we choose freely and have a soul.
So how can we rely on the intuition that we have choice, if we are aware that this intuition would exist even if there were no choice?

And another similar theoretical question: if God were to “disappear”, would the world without God behave the same as our world? Or would the world perhaps cease to exist?

מיכי Staff replied 1 year ago

Who said we would feel we had a choice? I didn't write anything like that.
The second question is ill-defined. There can be no world without God. Beyond that, God is necessary existence. And above all, how can I talk about such a hypothetical reality?

עידו replied 10 months ago

Now that we see in artificial intelligence a kind of intelligence, along with the outward appearance of “will and free choice”, “morality” (an article recently came out about deceptive models), and “consciousness” (we have no idea what consciousness looks like from the inside, but if someone presents themselves as having consciousness, we assume they do) – should we apply to artificial intelligence everything we have deduced from human choice (for example: soul)? Or should we say that good and evil are possible even without choice?

In short, what does the discussion about free will and choice look like in the era of generative AI?

מיכי Staff replied 10 months ago

I don't see any change in the discussion. We know exactly what is in that machine and how it produces its answers, so it is clear that there is no consciousness or mental dimension there. Humans, who were created like me, look like me, and behave like me, probably have a mental dimension like me.
The fact that something imitates something else doesn't mean they're the same.

עידו replied 10 months ago

> We know exactly what is in that machine and how it generates its answers
And yet we still cannot predict its answers, so clearly our knowledge is incomplete, and we therefore cannot draw decisive conclusions from that information.

> Therefore it is clear that there is no consciousness there
In Ray Kurzweil's new book from this year, The Singularity Is Nearer, I think he proves otherwise; I can look up the pages if it is relevant. But in principle, the experience of consciousness is subjective: we have no objective proof of anyone's consciousness other than their own claim to it. So once a machine claims to have consciousness, you will not be able to prove that it is lying.

> Humans created like me, look like me, behave like me, probably have a mental dimension like me.
The fact that something imitates something else does not mean that they are identical.
Isn't that a contradiction? Suppose I have no mental dimension but I behave like you and even imitate you – is that proof that I have a mental dimension? And if so, then you have just proved that the machine has a mental dimension…

מיכי Staff replied 10 months ago

I also can't predict the three-body problem. So do material bodies have souls? What's the connection? Chaos is also unpredictable, so what?
We know exactly what's in the machine and how it produces the answers, just like we know what's in a chaotic system even though we can't predict the results. So what?
Ray Kurzweil is one of the greatest blunderers on earth. If you bring me any argument, we can discuss it, but making an ad hominem about Ray Kurzweil is a bad joke.
The fact that we don't have proof means nothing. We don't have proof of anything. So what? There is common sense and healthy logic. When a machine claims to have consciousness, I will assume it's lying. True, I won't be able to prove it. I also can't prove that I have consciousness or that you have consciousness. It really doesn't bother me.

Your last question is made up of words that don't connect into a whole argument. I explained very well why I think you have consciousness and the machine doesn't. If you want to ask something, you should formulate it well.

עידו replied 10 months ago

I'm not sure I managed to understand the Rabbi's words.
And it also seems that I did not formulate my arguments well.

I'll give it one last try in the hope of getting to the point.

Below is a non-exhaustive summary of what Kurzweil writes about consciousness:

1) What is consciousness?

– Consciousness is defined as an internal subjective experience, which cannot be objectively identified from the outside. The concept also includes the ability to be aware of the internal world (thoughts and emotions) and the external world (environmental reality).
– There is a distinction between functional consciousness (the ability to appear “conscious”) and subjective consciousness (qualia), which cannot be objectively proven or disproved.

2) Can artificial intelligence have consciousness?

– Some argue that because AI models are capable of displaying complex behavior and simulating human thinking, we may have to assume that they have consciousness.
– However, others argue that we fully understand how the machine works, and therefore it is clear that it does not have true consciousness but only an imitation of behavior.

3) The problem of proof:

– Just as we have no objective proof of the consciousness of others (people or machines), so it is not possible to prove that a machine that has a convincing imitation of consciousness does not experience qualia.
– If a model claims to have consciousness, we have no tools to definitively refute such a claim.

4) Morality and free choice:

– The question of whether machines can make moral decisions or exhibit free choice arises because models produce results that appear to be based on “reasoning”. However, some participants in the discussion claim that this is the result of deterministic algorithms.

Arguments:
– The argument “We know exactly how the machine works and therefore it is clear that it does not have consciousness” ignores the fact that there are also processes in the human brain that we do not fully understand, but we assume that there is consciousness. It is possible that machines can produce a subjective experience that we do not know how to measure or identify.

– Begging the question:
The argument “It is clear that the machine does not have consciousness” assumes what it sets out to prove, without dealing with the fact that consciousness is a concept that is not unambiguously defined. The absence of proof of consciousness is not proof of its absence.

– Analogy Fallacy:
The comparison to chaotic systems (such as the three-body problem) ignores an essential difference: a chaotic system does not claim consciousness or will, while a machine can explicitly claim them. The analogy therefore provides no argument against the possibility of artificial consciousness.

– Implicit ad hominem:
Dismissing Ray Kurzweil as someone not worth addressing (“making an ad hominem about Ray Kurzweil is a bad joke”) diverts the discussion from the arguments themselves to criticism of the speaker, or to an offhand treatment of his words. (Incidentally, I saw that the Rabbi is an electronics engineer, so I assume the Rabbi is aware that Kurzweil is an engineer with many inventions, who also earned a reputation as a futurist because many of his technological predictions have come true.)

– “We cannot prove that we or others have consciousness.”
– “It is clear that a machine does not have consciousness.”
If we cannot prove consciousness, there is no sufficient basis for rejecting the possibility that consciousness also exists in machines.

Questions:
Does the Rabbi believe that morality necessarily requires consciousness?
– Already today, a machine can make moral decisions (for example, using algorithms that evaluate moral values) without having consciousness. Does this negate morality?

– Can a certain technology demonstrate “free will” even if it is the result of a deterministic algorithm? If not, how does the Rabbi explain the studies that have shown that a machine can deceive?

Without discussing the correctness of the claim about free will in people: treating a person as having free will made it possible to apply all the rules of morality to him. On the other hand, treating social networks as mere platforms has allowed the technology companies that operate them to evade moral responsibility for the hateful content that appears on them, for promoting such content, and for not exposing users to opinions different from their own. All of these have contributed to the polarization we see in the Western world.

The main question that led me to start the discussion is this:
– In my personal opinion, humanity (regulators, intellectuals, and religious figures) must begin to develop the discussion about the moral responsibility of artificial intelligence – both of its operators and of the intelligence itself. Without applying strict moral rules, there is a real risk of a moral slippery slope in the world.
I assume we are not discussing its operators and developers, since at the moment its operators are people, who clearly must be held morally responsible, so as not to repeat the mistake of social networks.
But once the operation or development of artificial intelligence is done by another artificial intelligence (which is close, or already here), then if we claim that the intelligence has no moral responsibility (as is currently the trend with respect to copyright on artificial intelligence products), there will be no basis for applying morality to the actors who will be the most significant in humanity in the foreseeable future.

The sages of the Talmud and the Bible laid the foundations of Western morality, through discussions, some of which are theoretical and radical, in order to clarify the essence.

Does the rabbi think that Judaism should begin to discuss the laws of morality of artificial intelligence?

Is the rabbi willing to take part / lead the discussion?

מיכי Staff replied 10 months ago

I'm not sure I was able to understand the rabbi's words.

I bet this message was written by an AI program. Anyway, I explained what I had to explain and nothing new was added here.
If you have a short, new, concrete question, you are welcome to formulate it and write it yourself here.

עידו replied 10 months ago

Why doesn't the rabbi attach importance to the moral problems that accompany the operation of artificial intelligence?

מיכי Staff replied 10 months ago

I certainly do attach importance to them. That was not the question. See column 646.

עידו replied 10 months ago

I read column 646.
Here is what I understood:
The Rabbi advocates libertarianism and thinks that morality is possible only given free will. And since a machine does not have free will, there is no point in talking about the morality of machines.

But according to the survey in the column, it seems that determinism, and especially modern determinism, does not negate morality and punishment. So why not apply moral rules to a deterministic machine as well? Especially since it seems that the operators of artificial intelligence will evade responsibility, much like the social network companies. A demand for moral responsibility from artificial intelligence would roll back onto its developers and the companies behind it, and alleviate the problem somewhat.

מיכי Staff replied 10 months ago

Either I didn't understand anything, or you didn't. You could just as well impose moral rules on cats or bats. Good luck.

