
Q&A: On Artificial Intelligence

This is an English translation (via GPT-5.4). Read the original Hebrew version.


Question

Hello Michael, I’m sending you the following claim —
In my view, all the agonizing over artificial intelligence stems first and foremost from the same arrogance and inflated self-regard that our species has toward itself.
It is so hard for us to accept the fact that we are ordinary, like everything else. We have no soul, no spirit, no divine element, and nothing metaphysical.
Rather, the wondrous combinatorial system that resides in our skull provides us with especially useful illusions of a “self” and of continuity, of causality and estimates of harm and benefit, and even of love, sacrifice, and friendship, and many other complex values — all of them geared toward our survival and continued existence.
That same “Chinese room” is exactly like our skull. There is no meaning to that airy “understanding” that philosophers drool over, because it is itself measured by other Chinese rooms.
And to continue John Searle’s analogy — at first, the man really would acquire only a technical skill in converting Chinese symbols without “understanding,” and he would not be able to improvise anything, and would be forced to go back again and again to his English instruction manuals.
But after some time, the English speaker would internalize the logic encoded in the English instructions, slowly crack the puzzle set before him, accumulate information about the relations among the various symbols, identify logical lines, rules, and connections there, and in the end — understand Chinese. Simply because he is a human being, being presented with a human language.
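The rulebook procedure described above can be sketched as a pure lookup: the operator matches symbol shapes against instructions and copies out a reply, with nothing anywhere representing meaning. This is a minimal illustrative sketch; the rulebook entries are invented placeholders standing in for Searle's English instruction manuals, not anything from the original exchange.

```python
# A "Chinese room" as pure symbol lookup: replies are produced by matching
# input shapes against a rulebook, never by modeling what the symbols mean.
# The entries below are invented placeholders for Searle's instruction manuals.
RULEBOOK = {
    "你好": "你好，你好吗？",    # a greeting mapped to a scripted greeting
    "你好吗": "我很好，谢谢。",  # "how are you" mapped to a scripted reply
}

def chinese_room(symbols: str) -> str:
    """Return the scripted reply for an input string of symbols.

    The function manipulates symbols purely by shape (dictionary lookup);
    nothing in it represents meaning, which is the point of the example.
    """
    return RULEBOOK.get(symbols, "对不起")  # unknown input: a fixed apology
```

From the outside this produces Chinese output for Chinese input; whether anything like understanding accompanies the lookup is exactly what the exchange below disputes.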
And a computer too, with sufficiently sophisticated software and an ever-growing database, would do the same thing (more slowly and in a more limited way as of today, admittedly).
Today, a great many computer systems would easily pass the Turing test within a limited space of data and domains of knowledge, as Turing himself conceived the test. But the Turing test keeps being made stricter and stricter for its subjects, simply because people hate to think that a computer can replace them in their mental activities.
To me it is completely clear that very soon there will be computers in our lives toward which we will be able to feel emotions, and they will communicate with us in ways that we will interpret as “understanding,” “consciousness,” and so on. It is only a matter of time — and not much time.

Answer

I don’t see anything new here. These are the same misunderstandings that my book points to. The man simply does not understand the meaning of Searle’s example.
What he says addresses the question of input-output, but Searle (and I as well) are referring to what we feel directly within ourselves (and not in others) when we carry out those input-output processes. Does he think that the feeling of understanding within us, which accompanies communication, also exists inside computers? Would a computer that passes the Turing test and handles input-output like a human also feel within itself those same feelings of understanding that we have? If he thinks so — then in my opinion he is simply delusional (or he himself is a Turing-style computer).
 
Beyond that, my book was not meant to prove that we have free will or a soul, but to explain why science has nothing to say on the matter. That is, anyone who thinks all these things exist in us can continue thinking so without fear of the new findings of neuroscience.
——————————————————————————————
Questioner:
If you were to replace so-and-so’s neurons in his brain with artificial neurons one by one — the artificial neurons would respond exactly like the real neurons, maintain the same voltage, and “fire” at the same time — until all the neurons in his brain were artificial neurons. At what stage in that process would that person’s “feeling of understanding that accompanies communication” disappear?
Jonathan Lazerson
——————————————————————————————
Rabbi:
I have no idea.
See the Ship of Theseus.
But if you are trying to argue from this that a computer also feels sensations and understanding, I disagree with you. Even a dead person has all those neurons, and still, as I understand it, there are no sensations or understanding in them. As long as there is a soul in the body, you can replace one neuron with another just as in surgery we replace any other organ in our body.

——————————————————————————————
Questioner:
I’m not interested in getting into the philosophical discussion about identity, because that’s not the point. The fact that a dead person has no sensations or understanding is also not relevant here — I wasn’t claiming otherwise.

But if you view a neuron as a biological input-output device (and I view it that way), then in principle one can create an artificial substitute for it. If we replace so-and-so’s neurons one by one with artificial neurons, then we have done nothing that would change his experience — that same person would feel and understand exactly as before, even when all his neurons are artificial (and when he dies, he will die with artificial neurons too, just as a person with an artificial heart can die).

Assuming the source of all our sensations and experiences is the activity of the brain, I argue on that basis that it is possible to create a man-made machine that feels exactly like human beings do, if only by simulating all the neurons of one human brain one-to-one. In other words, the very existence of a human brain that thinks and feels is proof of the possibility that such a machine can also be built.
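The "neuron as a biological input-output device" picture can be illustrated with a leaky integrate-and-fire unit, a standard textbook simplification rather than anything proposed in this exchange: voltage integrates the input current, leaks toward rest, and the unit "fires" when a threshold is crossed.

```python
# A leaky integrate-and-fire neuron: a textbook idealization illustrating
# the claim that a neuron can be treated as an input-output device with a
# well-defined artificial substitute. Parameter values here are arbitrary.
def lif_neuron(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Integrate input currents; emit 1 (a 'spike') when the membrane
    voltage crosses threshold, then reset to zero. Returns spike flags."""
    v, spikes = 0.0, []
    for current in inputs:
        v += dt * (-v / tau + current)  # leak toward rest plus input drive
        if v >= threshold:
            spikes.append(1)
            v = 0.0                     # reset after firing
        else:
            spikes.append(0)
    return spikes
```

Whether a network of such units, however faithful its input-output behavior, is accompanied by experience is exactly what the two sides dispute.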

My role as an engineer and scientist is to understand more deeply how the brain works and creates our experiences, characterize that in a model, and then implement it in reality in the most efficient way possible — which need not necessarily be identical to the way the brain works.

Personally, I believe that the connection between the process of conceptualization-abstraction (for which today we are beginning to create a computational analogy through neural networks) and consciousness (for which today we have no idea how to build a computational analogy) is what also creates experience. Call me delusional, but I am convinced that a computer that can conduct a meaningful heart-to-heart conversation with a human being in such a way that the person cannot guess he is talking to a computer, would be a computer that also experiences and feels.
——————————————————————————————
Rabbi:
I wasn’t talking about the question of identity either. The example of the Ship of Theseus also addresses your question. The fact is that such a neuronal system gives rise to a mental dimension that accompanies it. You asked when it arises (at the seventh neuron, the hundredth, or the millionth), and for that I brought the example of Theseus. I do not know.
The example of the dead person was also meant to answer your question. A dead person has the same neuronal system but lacks the “soul” that operates it (according to my view, of course).
Likewise, the comparison between replacing a neuron and replacing an organ in surgery. I do not see a substantial difference. The brain is the organ through which one thinks, just as the leg is the organ through which one walks.

The question whether assembling a person ex nihilo by copying neuron by neuron from a living person would produce a human duplicate is a question I do not know the answer to. Maybe not, and only neuron replacements in a living, functioning body can work. And maybe it would work, and then that would only mean that every time such a body is created, a soul enters it. Just as when a seed is planted in the ground, vital life enters it (I know that this is an anachronistic way of speaking, and I am using it deliberately).
The Ship of Theseus question regarding a human being is a fascinating one, and I would very much like to see the answer to it. But even if a human being who thinks and feels were created, that would not mean that you are right in the materialist picture you present.

The fact that you believe otherwise is perfectly fine. But belief is not an argument. “The righteous shall live by his faith.”

The process of consciousness has no computational analogy, not because of technological weakness. Consciousness is not a computational process. Only input-output processes are computational, and it is no wonder that those are what people succeed in modeling. And because of this, you think that a sufficiently sophisticated computer will be able to hold a heart-to-heart conversation with a human being. In my view that is delusional, but as I said, “the righteous shall live by his faith.”

Just one note on which you are certainly mistaken. The fact that we may not be able to distinguish between a computer and a human being (the Turing test) does not mean that it would be a computer that experiences and feels. This is a common mistake among artificial intelligence people, who identify a phenomenon with what it expresses (they beat the stick instead of the one holding it). Think carefully: even if you were right that every such system has experiences and sensations, the indistinguishability itself would still not be evidence for your claim.
And if you ask: then how do I know that other people feel and experience things (the other minds problem)? Maybe we are Turing machines? My answer is that I know that inside myself there are experiences and sensations, and therefore I conjecture that this is also true of other people.
——————————————————————————————
Questioner:
I’m trying to make sure I understand your position. The question under discussion is whether we could ever make a computer that experiences and feels analogously to a human being.

You say:
1. Regarding starting from a human body and replacing neurons one by one: “Maybe it would work.”
2. Regarding assembly from scratch that would create an artificial human duplicate: “I don’t know.”
3. Regarding the possibility that a sufficiently sophisticated computer could conduct a heart-to-heart conversation with a human being: “Delusional.”

To my mind, 2 follows directly from 1, and 3 follows directly from 2.

After all, if I could start with a biological body and reach a state in which everything is artificial, there is no obstacle to building the system artificially from the outset. And if it is possible to create a human duplicate made entirely of artificial components, then that duplicate would be able to conduct a heart-to-heart conversation exactly like his human counterpart.

The question asked in the apparent paradox of Theseus, “Is this still Yossi’s brain?” after all its neurons have been replaced, is not really interesting. On the contrary, the reason the philosophical discussion about that question exists at all is that from every practical standpoint one cannot distinguish between “the brain before” and “the brain after.” If Yossi’s biological brain experienced and felt, then the artificial brain that imitated it did too.

The discussion of materialism or the existence of a “soul” is also irrelevant here. If that metaphysical plane exists, there is no reason why an artificial brain could not also have access to it.
“And if you ask: then how do I know that other people feel and experience things (the other minds problem)? My answer is that I know that inside myself there are experiences and sensations, and therefore I conjecture that this is also true of other people.”

I agree with you, and I want to continue and ask: and how do you know that the “other people” are really people?
Maybe they are Turing machines that only look like people? Is looking like a person all that is required for you to “know” that a Turing machine feels and experiences things? Because that too can be arranged.

That is exactly the logic I used when I said that if I can conduct a heart-to-heart conversation with a computer, I am convinced that it can also experience and feel. True, I am not certain, but I would conjecture that it can, for exactly the same reason that you conjecture that “this is also true of other people.”
——————————————————————————————
Rabbi:
Hello Jonathan.
First of all, I would not define our discussion as you proposed: whether we could ever make a computer that experiences and feels analogously to a human being.
The reason is that you have to define what a computer is. If assembling a human being neuron by neuron causes a soul to enter it, then I too am willing to accept that we can do this. But that would not be a computer; it would be a human being. After all, when you take sperm and fertilize an egg, that is exactly what you are doing, and no one disputes that the result experiences and feels like a human being (because it is a human being).
At this point I am not sure, but as I understood your position, you are claiming that these functions require nothing but neurons and biology — that is, you side with materialism (the view that we have no soul, no additional substance distinct from matter) — and that is what I was disputing. Perhaps I misunderstood you.

Now let us return to the summary you proposed of my position: 1. Regarding starting from a human body and replacing neurons one by one: “Maybe it would work.” 2. Regarding assembly from scratch that would create an artificial human duplicate: “I don’t know.” 3. Regarding the possibility that a sufficiently sophisticated computer could conduct a heart-to-heart conversation with a human being: “Delusional.” Indeed correct. That is exactly my position.

But contrary to what you say, in my opinion 2 does not follow directly from 1 (and I wrote that too), for even an emergent property can depend on a certain size or quantity, and below that it will not emerge. You too agree that our brain has abilities and capacities that the brains of other, smaller creatures do not have. So you see that there are capacities that emerge only from some minimal size or power of the brain. Now if you start introducing artificial neurons into an existing brain, they will be absorbed into it and join it. But a brain made entirely of artificial neurons might not function mentally (or maybe it would; I don’t know).
Think about an attempt to assemble a person out of his organs, one by one. Is that not different from replacing organs in a living person one by one? In the second mechanism, certainly the person can remain alive and continue functioning, but in the first I doubt that we have the ability to do that (or perhaps it is only a lack of technological knowledge that will one day pass). In any event, it is clear from here that the conclusion that this “follows directly” from that is hasty.

And again contrary to what you say, 3 does not follow directly from 2. See my opening remark: a computer is a collection of neurons without a soul, whereas the combination of neurons you describe might perhaps also contain a soul.

The apparent paradox of Theseus is not about whether the brain would still be Yossi’s. I explained that I too am not dealing with the question of identity. I brought it as an example for the question of emergence: from when does a soul enter this collection of neurons? After one neuron? Three? A million?

The fact that from every practical standpoint one cannot distinguish this brain from Yossi’s brain is not really important. I explained in the previous message that practical distinguishability is not a criterion (which is why I disagreed with the Turing test). At most it is a technical problem that I may get confused, but it does not teach us about the true nature of the things themselves.
John Searle’s Chinese room example (and see also the example of Mary’s room on Wikipedia; in content it is really identical to Searle’s) is meant to demonstrate exactly this: an input-output machine that looks and acts and functions like a human being, and still is not a human being because it is not accompanied by “understanding” of the input-output processes it performs. You may of course disagree with Searle, but it seems to me that you cannot say that this “follows directly” from that. There is no necessary implication here, only begging the question.
Even a computer with today’s neural-network software configuration can conduct an intelligent conversation at some level with human beings who will not distinguish it from a human. Do you think that already today it has experiences and sensations? And if it can be distinguished from a human, then compare it to a monkey or a mouse. Do you think it has sensations like a mouse?
You may reject this and say that in your opinion it depends on its biological form and not on logical functioning (and a computer is iron and not biology). But if so, then once again you have arrived at the conclusion that input-output is not what necessarily determines it, because in terms of input-output it functions like a human/mouse.

And from this it follows that the question of how I know about other people is also not relevant (I wrote that too already). At most this is a diagnostic problem that does not say much about reality itself. I think (although I have an aversion to mysticism) that we have the ability to sense directly the consciousness of the person standing before us and communicating with us. This is not done only through sensory input-output processes (perhaps it is also through the senses, but it cannot be reduced to mere sight or hearing. Like “extra-sensory” feelings when a person senses that someone is standing behind him, or that someone is looking at him, or that someone close to him has died in a distant place. In all this I am only raising possibilities. I too am not sure that these sensations are not illusions).
After all, you too agree that we have the ability to infer that the person standing before us has a mind like ours, and I can ask you too: how do you know that? You also do not have direct access to the mental dimension of your interlocutor (even if it is only a product of biology and has nothing to do with another substance, it is clearly not accessible to another person), so how do you yourself infer the existence of another mind? You agree that the input-output you conduct with him only shows that he is a computer, and gives you no indication whatsoever of the existence of consciousness or any mental dimension at all. You infer it from yourself. So why can I not infer from myself that this is true of human beings but not of computers? It is only a question of the scope of the induction you make, and in fact I am more conservative than you (less speculative than you) in terms of the scope of the group to which I apply my inductions.

This of course does not mean that I cannot be mistaken in my diagnosis — both in diagnosing another mind where there is none, and in thinking there is none where there is one. Like any other inference I make, I am certainly skeptical and take into account the possibility that I was wrong. Still, to claim against me that I am necessarily wrong, or inconsistent, seems to me very hasty and unfounded.

If you will allow me to remark, it seems to me that this is a common failure among artificial intelligence people, since their business is creating models that imitate human behavior (intelligence), and then it is only natural to identify the behavior and the bearer of the behavior with the model. I come from physics, and from there I know how important it is to be careful not to forget that we are dealing with a model and not with reality itself.
——————————————————————————————
Questioner:
Hello Michi,

You asked how a computer is defined. I define a machine or computer (for the purposes of this discussion, it is the same concept) as an entity created “ready for operation” by human beings and composed of physical input-output components (sensors, processor) and virtual ones (software).
You say: “If assembling a human being neuron by neuron causes a soul to enter it, then I too am willing to accept that we can do this. But that would not be a computer; it would be a human being.” And also: “A computer is a collection of neurons without a soul, whereas the combination of neurons you describe might perhaps also contain a soul.” [It seems strange to me that your definition of a computer contains within it the claim that it has no soul; I have never heard such a definition.]

Given my success last time, let’s see whether I understood your position correctly. You are saying:
1. By definition, a machine has no soul.
2. A soul is required in order to experience and feel.
3. It follows that a machine cannot experience and feel.
4. It follows that if we succeed in building a machine into which a soul enters, at that very moment it stops being a machine and starts being a human being.
But the discussion began from your claim that it is delusional that a sufficiently sophisticated computer could experience and feel. Did you mean that it is delusional that a sufficiently sophisticated computer would have a soul enter it? Because that I cannot understand.
You tried to explain this by means of “the question of emergence” (“From when does a soul enter this collection of neurons? After one neuron? Three? A million?”), and I still do not understand how that is connected. Not because I do not believe in the existence of a soul, but because whatever degree of complexity “sprouts” a soul, there is no obstacle to our being able to reach that degree of complexity artificially as well; and whatever metaphysical dimension human experience draws from, there is no theoretical reason why a machine could not also draw from it.
After all, here you accept the possibility that we can create a human being ex nihilo. Of course, as an engineer/scientist, I do not care if you say that the machine I built is actually a human being, and I assume you would agree that this human/machine could conduct a real conversation, experience, and feel. So where exactly does the “delusional” enter? Assembling a human being from biological neurons one by one is a possibility you accept, but doing the same thing from artificial neurons becomes delusional? And if one can make a human being from artificial neurons, why can one not assemble a human being by simulating those same neurons in software? The complexity would be the same complexity. At what point does it start becoming “delusional”?
In addition, there is the philosophical discussion about “the true nature of things,” in cases where that true nature is a concept to which I have no access and which has no practical significance whatsoever. To sharpen the point: we are discussing a hypothetical state in which I have biological Yossi in one room, and Yossi built from scratch out of artificial components in the second room, and at every resolution at which you examine the computation they perform there is no difference between them — their neurons fire in the same way at the same time in response to the same input, and they give the same answers to the same questions. And you would still say that biological Yossi experiences and feels, while artificial Yossi does not experience and feel (because he has no soul): “The fact that from every practical standpoint one cannot distinguish this brain from Yossi’s brain is not really important… At most it is a technical problem that I may get confused, but it does not teach us about the true nature of the things themselves.”
“After all, you too agree that we have the ability to infer that the person standing before us has a mind like ours, and I can ask you too: how do you know that? You also do not have direct access to the mental dimension of your interlocutor (even if it is only a product of biology and has nothing to do with another substance, it is clearly not accessible to another person), so how do you yourself infer the existence of another mind? You agree that the input-output you conduct with him only shows that he is a computer, and gives you no indication whatsoever of the existence of consciousness or any mental dimension at all. You infer it from yourself. So why can I not infer from myself that this is true of human beings but not of computers? It is only a question of the scope of the induction you make, and in fact I am more conservative than you (less speculative than you) in terms of the scope of the group to which I apply my inductions.”
My worldview is pragmatist. I hold that if there are two concepts between which there is no practical way to distinguish, then it is a waste of time to debate the difference between them (which must necessarily exist only on the metaphysical plane), and therefore one should treat them as though they are the same thing. I follow this claim all the way through, whereas you stop arbitrarily at human beings. That is why I think you are inconsistent.
As for “extra-sensory” sensations, I do not want to address them because I do not believe they exist.
I have never understood Searle’s Chinese room parable. At the high level of abstraction, human beings think, feel, and understand, but when you zoom in to the level of neurons, we are all ant colonies running around back and forth in Searle’s room. There is no contradiction.
——————————————————————————————
Rabbi:
Hello Jonathan.

I defined a computer as a creature without a soul, because otherwise it is impossible to distinguish it from a human being, and then I cannot make my claim — and you too, who identify the two, turn your claim into a mere definition (instead of an argument).

You propose the following definition: an entity created “ready for operation” by human beings and composed of physical input-output components (sensors, processor) and virtual ones (software).
But although this may sound like excessive nitpicking, one should note that several terms here themselves require definition: what does “ready for operation” mean? What does “created by human beings” mean? After all, fertilizing an egg with sperm is an action done by human beings (sometimes in a womb and sometimes outside it), and therefore what is created is an entity created by human beings. That entity is also entirely ready for operation (apart from the need to feed it, just like cleaning and lubrication or programming of your machine, all of which are also done by a human being, of course). The “software” gets updated (as the baby develops and learns), and likewise the hardware changes its dimensions (the baby grows), just like any self-respecting computer into which you install software and add hardware. Therefore your definition too smuggles in through the back door an identity between man and machine by definition itself (and not as an argument), and therefore suffers from the same problems I noted above.

Your description of my position again seems correct to me. And so does your conclusion: I really am claiming that it is delusional that a sufficiently sophisticated computer (made of iron) would have a soul enter it, unlike the “machine” that is created from sperm and egg or from a precise imitation of them synthesized in a laboratory (there I said I am not sure what would happen — whether a soul would enter or not).
The “delusional” applies to a machine made of iron, not to something made by human beings, since, as noted, sperm and egg are also made by human beings.

It is not clear to me what you mean by “artificial neurons.” Biology created by our own hands, or pieces of metal that simulate the input-output of a neuron? Likewise regarding the expression “fire” — do you mean a biological action, except that what brings it about was created by us, or an electronic action that simulates the first? The first case I already said I am willing to accept as a possibility (I do not know), and only regarding the second did I claim that it is delusional, because that is not a human being but a model of a human being’s input-output (the stick and the dog). And as for “fire,” a biological neuron created by us is still ordinary biology, and there is not necessarily any difference between it and a regular neuron.

I repeat that a discussion about the true nature of things does have meaning in my opinion, even if not “practical” in your sense (that is, not implementable by a Turing machine that receives some input from some creature and determines whether that creature is a human being or not). That is exactly what I am claiming: one must distinguish between the inability to make a diagnosis (whether this is a human being or not) algorithmically, and the absence of any distinction between a human being and a machine.

As I keep saying, the whole disagreement between us lies in the last paragraph of your message. You are a positivist, and I oppose positivism. As I keep saying, for me diagnosis is not identical with the distinction itself. Two things that I cannot distinguish can still be different from one another. And certainly so if I can distinguish them, only not in input-output terms but by direct sensation (as I wrote, this is not necessarily “extra-sensory,” but that is not important to me). A dog senses higher frequencies than we do; does that mean that, from our standpoint, that sound source does not exist? Perhaps I cannot detect its existence, but it exists. And if I have some way of sensing that it exists other than hearing (because my ear is not sensitive to those frequencies), then I can also know of its existence. Another example: when your brain is stimulated and you see Grandma Tzila standing before you (or even when you merely remember her), Grandma Tzila is still not standing before you, even though you cannot distinguish between the two states. If you do distinguish between them, it is not necessarily because you see some difference in one detail of the input or another (that is, because the reconstruction is not perfect), but because you have a feeling (“extra-sensory”? not necessarily) that this is a memory and not the real thing. That is the feeling I have when I stand before iron with input-output, as opposed to standing before a human being with the same input-output (the Turing test). Even if there is no difference in input-output, it is still something else.

Your closing sentence sharpens the problem in your view. You understand the Chinese room example very well, just as you described it well in your remarks. You say that “at the high level of abstraction” there is a difference between a state in which the input-output actions of the man in the Chinese room are accompanied by mental processes (understanding and sensation) and a state in which they are performed without those accompanying sensations. But when one zooms in to the level of neurons, one sees no difference. So there you have it: you yourself see that identity at the neuronal level does not imply identity at the higher level of abstraction. In other words, at the higher level there is something more beyond the input-output of the neurons.
Or do you deny the very possibility of a state in which a person sits in the Chinese room and performs input-output actions like a Chinese speaker, but there is no “understanding” within him? That is, if he performs such actions, does it necessarily follow that he also feels understanding? If so, then to the best of my judgment we have once again returned to the territory of the “delusional.”
By the way, on your view it is not clear to me what it means for a person to “feel understanding.” After all, you only see neurons operating in him, and on your view what cannot be diagnosed empirically does not exist. So in your opinion does the mental — as distinct from input-output operations — exist or not? We are back to the question of other minds, which you did not answer for your own view.

To sharpen things, I will add an example regarding another aspect of humanity (= intelligence) that seems to me relevant to this discussion as well.
In my opinion, artificial intelligence people who speak about the intelligence of a bird (for example, its ability to navigate) or of a computer (its ability to carry out complicated calculations) are mistaken. This is because inanimate things and deterministic systems do not have intelligence (only computational ability). On their view (your view?) water too has intelligence, because it performs computations far more complex than those of the bird (the Navier-Stokes equations). And so too does the electron, which is constantly “solving” the Schrödinger equation.
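For reference, the two equations named above: water in motion obeys the incompressible Navier-Stokes equations, and the electron's wavefunction obeys the time-dependent Schrödinger equation.

```latex
% Incompressible Navier-Stokes: momentum balance and incompressibility
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0

% Time-dependent Schrödinger equation for a particle in a potential V
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi
```

Both are deterministic evolution laws; the point being made here is that "solving" them, in the sense that water or an electron does, involves no judgment.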
Intelligence is grounded in judgment and in exercising choice among several techniques or options in a non-deterministic way. When everything is deterministic, intelligence has no meaning at all. I assume that on your view water and the electron also have intelligence, like a computer and like a human being. There is no real difference between what they do and what the human-machine in your definition does. So there is no need to wait for the emergence of what you call artificial intelligence. In principle one could fall in love with water (though conversing with it is a bit difficult; with a suitable interface that too could be arranged).
