
About artificial intelligence

asked 9 years ago

Hello Michael, I am sending you a claim below –
In my opinion, the entire debate about artificial intelligence stems first and foremost from the arrogance and excessive self-esteem that our species harbors for itself.
It’s so hard for us to accept the fact that we are ordinary, like everyone else. We have no soul, no spirit, no God, and nothing metaphysical.
But no — the wondrous combinatorial system that resides in our skulls provides us with particularly useful illusions of "I" and continuity, of causality and the assessment of harm and benefit, and even of love, sacrifice, and friendship, and so many other complex values - all of which are aimed at our survival and existence.
That "Chinese room" is just like our skull. The abstract "understanding" that philosophers savor has no meaning, because it is itself only measured by other Chinese rooms.
And to continue John Searle's analogy — at first the person will indeed acquire a mere skill at converting Chinese characters without "understanding"; he will not be able to improvise anything and will have to return to his English manuals again and again.
However, after a while, the English speaker will assimilate the logic encoded in the English instructions, slowly crack the crossword puzzle presented to him, accumulate information about the connections between the various symbols, identify patterns, rules, and relationships there, and ultimately understand Chinese. Simply because he is a human being, presented with a human language.
And a computer, with sufficiently sophisticated software and a growing database, will do the same thing (more slowly and in a more limited way, as of today).
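To make the claim concrete, here is a minimal sketch of the Chinese room as pure rule-following, in Python. Everything in it is a made-up toy under this discussion's assumptions: the symbols and replies are placeholders, not a real conversation, and a real system would have vastly more rules. The point is only that the program maps input symbols to output symbols exactly as Searle's operator does, with no semantics anywhere in the loop.

```python
# A toy "Chinese room": a lookup table of symbol-to-symbol rules.
# Neither the table nor the function "understands" anything; it only
# matches shapes, like Searle's operator following his English manuals.
RULES = {
    "你好": "你好！",        # greeting squiggle -> greeting squiggle
    "你好吗？": "我很好。",   # "how are you?" squiggle -> "I'm fine" squiggle
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rulebook dictates for the given symbols."""
    return RULES.get(symbols, "请再说一遍。")  # fallback: "please say that again"

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```

On my view, scaling this table up, or replacing it with learned statistical rules, is all that our skulls do as well.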
Today, many computer systems would easily pass the Turing Test within a limited domain of data and knowledge, as Turing himself envisioned it. But the Turing Test keeps being made harder and harder for its takers, simply because people hate to think that a computer could replace them in their mental operations.
It is quite clear to me that soon we will have computers in our lives that we can feel emotions towards, and they will communicate with us in ways that we interpret as “understanding,” “consciousness,” etc. It is a matter of time — and not a long time.


מיכי Staff answered 9 years ago
I don't see anything new here. These are the same misunderstandings that my book points out. The writer simply does not understand the meaning of Searle's example. In his remarks he refers to the question of input-output, but Searle (and I too) refers to what we feel directly within ourselves (and not in others) when we conduct these input-output processes. Does the feeling of understanding within us that accompanies communication, in his opinion, also exist within computers? Would a computer that passes the Turing test and conducts input-output like a human feel within itself the same feelings of understanding that we do? If he thinks so, then in my opinion he is simply delusional (or is himself a Turing-style computer). Beyond that, my book does not seek to prove that we have free will or a soul, but rather to explain why science has nothing to say about the matter. That is, anyone who thinks that we have all of these can continue to think so without fear of the new findings of neuroscience.
—————————————————————————————— Asks: If you were to replace the neurons in someone's brain with artificial neurons one by one — each artificial neuron responding exactly like a real neuron, maintaining the same voltage and "firing" at the same time — until all the neurons in their brain were artificial neurons, at what point in this process would that person's "sense of understanding that accompanies communication" disappear? Jonathan Lazarson
—————————————————————————————— Rabbi: I have no idea. As with the ship of Theseus.
But if you try to argue from this that a computer also feels sensations and understandings, I disagree with you. Even a dead person has all these neurons, and still, as far as I understand, they have neither sensations nor understandings. As long as there is a soul in the body, you can replace a neuron with another neuron just like in surgery that replaces any other organ in our body.
—————————————————————————————— Asks: I’m not interested in getting into the philosophical debate about identity, because that’s not the point. The fact that a dead person has no feelings or understandings is also irrelevant here – I didn’t claim otherwise.

But if you see a neuron as a biological input-output device (and I do), then it is theoretically possible to create an artificial replacement for it. If we replace someone’s neurons one by one with artificial neurons, then we haven’t done anything that will change their experience – that person will feel and understand exactly as before, even when all their neurons are artificial (and when they die, they will also die with artificial neurons, just like a person with an artificial heart can die).

Assuming that the source of all our sensations and experiences is in the activity of the brain, I argue that it is possible to create a man-made machine that feels exactly as humans do, if only by simulating all the neurons in a human brain one by one. That is to say, the very existence of a human brain that thinks and feels is proof that such a machine can be built.
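To illustrate what I mean by treating a neuron as an input-output device, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook abstraction. The model choice and all parameter values are illustrative assumptions of mine; no claim is made that a real replacement neuron would be built this way, only that "same voltage, same firing times" is, in principle, a specifiable input-output behavior.

```python
# Leaky integrate-and-fire: accumulate input current with a leak each
# step, emit a spike (1) when the membrane potential crosses a threshold,
# then reset. Threshold and leak values are arbitrary illustration values.
def simulate_neuron(currents, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in currents:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire
            potential = 0.0    # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```

The replacement argument assumes only that this input-output behavior can be matched at whatever resolution real neurons require; whether anything beyond it is needed is exactly what we are debating.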

My role as an engineer and scientist is to understand more deeply how the brain works and creates our experiences, characterize this in a model, and then implement it in reality in the most effective way – which will not necessarily be the same as how the brain works.

I personally believe that experience arises in the connection between the conceptualization-abstraction process (for which we are now beginning to create a computational analogue using neural networks) and consciousness (for which we have no idea today how to build a computational analogue). Call me delusional, but I am convinced that a computer that could have a meaningful "heart-to-heart" conversation with a person, such that the person could not guess he was talking to a computer, would be a computer that also experiences and feels.
—————————————————————————————— Rabbi:
I also didn't talk about the question of identity. The example of the ship of Theseus speaks to your question: such a neuronal system is in fact accompanied by a mental dimension. You asked when that dimension is created (at the seventh, hundredth, or millionth neuron), and that is why I brought up the example of Theseus. I don't know.
The example of the dead person also addresses your question. The dead have the same neuronal system but lack the "soul" that operates it (in my opinion, of course).
The same goes for the comparison between replacing a neuron and surgically replacing an organ. I see no fundamental difference. The brain is the organ with which we think, just as the leg is the organ with which we walk.

The question of whether assembling a person from nothing, by copying a living person neuron by neuron, would produce a human clone is one to which I do not know the answer. Maybe not, and only replacing neurons within a living, functioning body can work. Maybe it will work, and then it just means that every time such a body is created, a soul enters it, just as when a seed is planted in the ground a life force enters it (I know this is anachronistic language, and I use it intentionally).
The question of the ship of Theseus regarding man is a fascinating one, one that I would very much like to see the answer to. But even if a man were created who thinks and feels, that doesn’t mean you are right in the materialistic picture you present.

It’s perfectly fine that you believe differently. But faith is not an argument. The righteous will live by his faith.

The process of consciousness has no computational analogue, and not because of technological weakness: consciousness is simply not a computational process. Only input-output processes are computational, and it is no wonder that models are made for those. That is why you believe that a sufficiently sophisticated computer will be able to have a heart-to-heart conversation with a person. To me this is delusional, but as I said, the righteous will live by their faith.

One more comment, on a point where I think you are mistaken. The fact that we cannot distinguish between a computer and a person (the Turing test) does not mean that the computer experiences and feels. This is a common mistake of artificial intelligence people, who identify the phenomenon with what expresses it (hitting the stick instead of the person holding it). And note carefully: even if you were right that every such system experiences and feels, this would still not be evidence for your claim.
Lest you ask, so how do I know that other people feel and experience (other minds problem), maybe we are Turing machines? My answer is that I know that inside me there are experiences and feelings, and therefore I assume that it is the same for other people.
—————————————————————————————— Asks: I’m trying to make sure I understand your point. The question up for debate is whether we can ever make a computer that experiences and senses in an analogous way to a human.

You say:
1. Regarding starting with a human body and replacing neurons one by one: “Maybe it will work.”
2. Regarding the possibility of creating an artificial human clone out of nothing: “I don’t know.”
3. Regarding the possibility that a sufficiently sophisticated computer could have a heart-to-heart conversation with a human: "Delusional."

To me, 2 follows directly from 1, while 3 follows directly from 2.

After all, if I could start with a biological body and get to a state where everything is artificial, there’s no reason why I shouldn’t build the system artificially in advance. And if it’s possible to create a human clone made entirely of artificial components, then that clone could have a heart-to-heart conversation just like its human counterpart.

The question posed by the apparent paradox of Theseus, "Is it still Yossi's brain?" after all the neurons have been replaced, is not really interesting. On the contrary, the reason the philosophical discussion of this question exists at all is that for all practical purposes it is impossible to distinguish between the "brain before" and the "brain after." If Yossi's biological brain experienced and felt, then so does the artificial brain that emulates it.

The discussion of materialism or the existence of a “soul” is also irrelevant here. If this metaphysical plane exists, there is no reason why an artificial brain would not have access to it.
“You may ask, so how do I know that other people feel and experience (other minds problem), maybe we are Turing machines? My answer is that I know that inside me there are experiences and feelings, and therefore I assume that it is the same for other people.”

I agree with you, and want to continue asking: how do you know that the "other people" are really people? Maybe they're Turing machines that just look like people? Is all it takes for you to "know" that Turing machines feel and experience that they also look like people? Because that, too, can be arranged.

This is exactly the logic I used when I said that if I could have a heart-to-heart conversation with a computer, I am convinced that it would be able to experience and feel. True, I am not sure, but I would assume so, for exactly the same reason that you maintain that “it is the same for other people.”
—————————————————————————————— Rabbi: Hello Jonathan.
First, I wouldn’t frame our discussion as you suggested: Can we ever make a computer that experiences and senses analogously to a human.
The reason for this is that you have to define what a computer is. If assembling a person neuron by neuron results in putting a soul into it, then I am also willing to accept that we can do that. But it would not be a computer but a person. After all, when you take a sperm and fertilize an egg, that is exactly what you do, and no one disputes that the product experiences and feels like a person (because it is a person).
Now I’m not sure, but as I understood your position, you claim that these functions are not distilled from anything other than neurons and biology, meaning you support materialism (the view that we do not have a soul, an additional substance, different from matter), and that’s what I argued. Maybe I misunderstood what you said.

Now let's return to the summary you suggested for my position:
1. Regarding starting with a human body and replacing neurons one by one: "Maybe it will work."
2. Regarding assembling an artificial human clone out of nothing: "I don't know."
3. Regarding the possibility that a sufficiently sophisticated computer could have a heart-to-heart conversation with a human: "Delusional."
Indeed true. That's exactly my position.

But contrary to what you say, in my opinion 2 does not follow directly from 1 (and I wrote this too), since even an emergent feature can depend on a certain size or quantity, below which it will not emerge. You also agree that our brain has abilities and skills that other, smaller brains lack: skills that emerge only above some minimal size or power of brain. Now, if you insert artificial neurons into an existing brain one by one, they will assimilate and join it. But a brain made up entirely of artificial neurons may not function mentally (and maybe it will; I don't know).
Think about trying to assemble a person from his organs, one by one. Isn't that different from replacing organs in a living person one by one? In the second scenario the person could of course remain alive and continue to function, but in the first I doubt we have the ability to do so (or maybe it is just a lack of technological knowledge that will pass). In any case, it is clear from this that declaring that one conclusion "directly follows" from the other is reckless.

Also contrary to what you say, 3 does not follow directly from 2. See my opening remark, that a computer is a collection of neurons without a soul, while the combination of neurons you describe could perhaps also contain a soul.

The apparent paradox of Theseus is not about the question of whether the brain will be Yossi’s. I explained that I am not concerned with the question of identity either. I brought this up as an example of the question of emergentism: When does a soul enter this collection of neurons? After one neuron? Three? A million?

The fact that this brain is indistinguishable from Yossi's brain for all practical purposes is not really important. I explained in the previous message that practical distinguishability is not a criterion (which is why I rejected the Turing test). At most it is a technical problem that may confuse me, but it does not teach us about the true nature of the things themselves.
John Searle's Chinese room example (see also Mary's room on Wikipedia; in substance it is exactly the same as Searle's) demonstrates exactly this: an input-output machine that looks, acts, and functions like a person, and yet is not a person, because its input-output processes are not accompanied by "understanding." You can of course disagree with Searle, but I don't think you can say that your conclusion "follows directly" from anything. There is no necessary consequence here, only begging the question.
Even a computer with neural network software in the configuration that exists today can have an intelligent conversation at some level with humans who will not distinguish between it and a human. Do you think it already has experiences and feelings today? And if it can be distinguished from a human, then compare it to a monkey or a mouse. Do you think it has feelings like a mouse?
You can reject this and say that in your opinion it depends on its biological form and not on its logical function (and a computer is iron, not biology). But if so, then you have again reached the conclusion that input-output does not necessarily determine the matter, because in terms of input-output it acts like a human or a mouse.

And hence the question of how I know about other people is also irrelevant (I've already written about that too). It is at most a diagnostic problem, which doesn't say much about reality itself. I think (although I am wary of mysticism) that we have the ability to sense directly the consciousness of the person in front of us who is communicating with us. This is not done only through sensory input-output processes (perhaps it too passes through the senses, but it cannot be reduced to simple sight or sound; like "supersensory" sensations in which a person feels that someone is standing behind him or looking at him, or that someone close to him has died in a distant place. In all of this I am only raising possibilities; I am not sure that these sensations are not themselves illusions).
You also agree that we have the ability to conclude that the person standing in front of us has a mind like ours, and I can ask you too: How do you know this? After all, you too have no direct access to the mental dimension of your interlocutor (even if it is only a product of biology and not of another substance, it is clearly not accessible to another person), so how do you yourself conclude the existence of another mind? You agree that the input-output you conduct with him shows only that he is a computer, and gives you no indication of the existence of consciousness or a mental dimension at all. You conclude this from yourself. So why can't I conclude from myself that this is true for humans and not for computers? It is just a question of the scope of the induction you make, and I am more conservative (less speculative than you) about the scope of the group to which I apply my inductions.

This does not mean, of course, that I cannot be wrong in my diagnosis, both in diagnosing another mind where there is none and in thinking there is none where there is one. As with any other conclusion I draw, I am certainly skeptical and consider the possibility that I am wrong. And yet, to claim that I am necessarily wrong, or inconsistent, seems to me very reckless and unfounded.

If I may comment, I think this is a common fallacy among artificial intelligence people, since their business is to create models that imitate human behavior (intelligence), and then it is only natural to identify the behavior and the person who has the behavior with the model. I come from physics, and I know from there how important it is to be careful and not forget that we are dealing with a model and not with reality itself.
—————————————————————————————— Asks: Hello Michi,

You asked how a computer is defined. I define a machine or computer (for the purposes of this discussion, the same concept) as an entity created “ready to operate” by humans and consisting of physical input-output components (sensors, processor) and virtual ones (software).
You say: “If assembling a person neuron by neuron results in putting a soul into him, then I am also willing to accept that we can do that. But it would not be a computer but a person.” And also “A computer is a collection of neurons without a soul, while the combination of neurons you describe could perhaps also contain a soul.” [I find it strange that your definition of a computer includes the statement that it has no soul; I have never heard such a definition.]

Given my success last time, let’s see if I understood your position correctly. You say:
1. By definition, a machine has no soul.
2. It takes a soul to experience and feel.
3. It follows that a machine cannot experience and feel.
4. It follows that if we were able to build a machine into which a soul entered, at that moment it ceases to be a machine and begins to be a human being.
But, the discussion started with you claiming that it’s delusional that a sufficiently sophisticated computer could experience and feel. Did you mean that it’s delusional that a sufficiently sophisticated computer would have a soul? Because I can’t understand that.
You tried to explain this using the emergence question ("When does a soul enter this collection of neurons? After one neuron? Three? A million?") and I still don't understand how it is connected. Not because I don't believe in the existence of a soul, but because whatever the degree of complexity that "germinates" a soul, there is no reason why we cannot also achieve this complexity artificially; and whatever the metaphysical plane from which human experience is drawn, there is no theoretical reason why a machine cannot draw from it as well.
After all, you accept here the possibility that we can create a person from nothing. Of course, as an engineer/scientist, I don't mind you saying that the machine I built is actually a person, and I assume you'll agree that this person/machine will be able to have a real conversation, experience, and feel. So where does the "delusional" come in here? Assembling a person from biological neurons one by one is a possibility that you accept, but doing the same thing from artificial neurons becomes delusional? And if it is possible to make a person from artificial neurons, why should it not be possible to assemble a person by simulating those neurons in software? The complexity would be the same. Where does it start to be "delusional"?
In addition, there is the philosophical discussion about the "true nature of things," in those cases where the true nature of things is a concept to which I have no access and which has no practical significance. To sharpen the point: we are discussing a hypothetical situation in which I have the biological Yossi in one room and a Yossi built from nothing out of artificial components in the other room, and at whatever resolution you examine the computation they perform, there is no difference between them: their neurons fire in the same way at the same time in response to the same input, and they give the same answers to the same questions. And you will still say that the biological Yossi experiences and feels, while the artificial Yossi does not (because he has no soul): "The fact that this brain is indistinguishable from Yossi's brain for all practical purposes is not really important… At most it is a technical problem that may confuse me, but it does not teach us about the true nature of the things themselves."
"You also agree that we have the ability to conclude that the person standing in front of us has a mind like ours, and I can ask you too: How do you know this? After all, you too have no direct access to the mental dimension of your interlocutor (even if it is only a product of biology and not of another substance, it is clearly not accessible to another person), so how do you yourself conclude the existence of another mind? You agree that the input-output you conduct with him shows only that he is a computer, and gives you no indication of the existence of consciousness or a mental dimension at all. You conclude this from yourself. So why can't I conclude from myself that this is true for humans and not for computers? It is just a question of the scope of the induction you make, and I am more conservative (less speculative than you) about the scope of the group to which I apply my inductions."
My worldview is pragmatic. I hold that if there are two concepts that there is no practical way to distinguish between, it would be a waste of time to argue about the difference between them (which will necessarily only exist on the metaphysical plane), and therefore they should be treated as if they were the same. I go with this assertion to the end while you stop arbitrarily at humans. Therefore I think you are inconsistent.
I don’t want to comment on “supersensory” sensations because I don’t believe they exist.
I never understood Searle’s Chinese room parable. At the highest level of abstraction, humans think, feel, and understand, but when you zoom in to the level of neurons, we are all colonies of ants running around in Searle’s room. There is no contradiction.
—————————————————————————————— Rabbi: Hello Jonathan.

I defined a computer as an entity without a soul because otherwise it cannot be distinguished from a person, and then I cannot make my claim; and you, who identify the two, likewise turn your claim into a mere definition (instead of an argument).

You offer the following definition: an entity created “ready to operate” by humans and consisting of physical (sensors, processor) and virtual (software) input-output components.
This may seem like excessive nitpicking, but it should be noted that several concepts here themselves require definition: What does "ready to operate" mean? What does "created by humans" mean? After all, the fertilization of an egg by sperm is an action performed by humans (sometimes in the womb and sometimes outside it), and therefore what is created is an entity created by humans. This entity is also completely ready to operate (apart from the need to feed it, which is like the cleaning, lubricating, or programming of your machine, all of which are also done by humans, of course). The "software" is updated (as the baby progresses and learns), and so is the hardware (the baby grows), just like any computer worthy of the name, which has software installed and hardware added. Therefore your definition essentially smuggles in, through the back door, an identity between a person and a machine by definition (and not as an argument), and so it suffers from the same problems I mentioned above.

Your description of my position again seems correct to me, and so does your conclusion: I truly claim that it is delusional that a soul would enter a sufficiently sophisticated computer (made of iron), unlike the "machine" created from a sperm and an egg, or from an exact imitation of them synthesized in a laboratory (about which I said I was not sure whether a soul would enter it or not).
The "delusion" concerns the machine's being made of iron, not its being made by man, since, as mentioned, a sperm and an egg are also "made by man."

It is not clear to me what you mean by "artificial neurons": biology created by our hands, or iron components that mimic the input-output of a neuron? The same goes for the expression "firing": do you mean a biological action whose generator was created by us, or an electronic action that mimics the biological one? I have already said that I am willing to accept the first possibility (I don't know), and only about the second did I claim that it is delusional, because it is not a person but a model of a person's input-output (the dog biting the stick instead of the one holding it). And as for "firing," a biological neuron created by us is ordinary biology, and there is not necessarily any difference between it and a normal neuron.

I repeat that a discussion of the true nature of things is meaningful in my opinion, even if it is not "practical" in your sense (i.e., cannot be realized by a Turing machine that receives some input from some creature and determines whether that creature is human or not). This is precisely my point: one must distinguish between the inability to make the diagnosis (human or not) algorithmically and the absence of any difference between human and machine.

As I keep saying, the entire argument between us is rooted in your last paragraph in this message. You are a positivist and I oppose positivism. For me, the diagnosis is not the same as the distinction itself. Two things that I cannot distinguish between can still be different from each other. And certainly that is the case if I can distinguish between them, just not in terms of input and output but by direct sensation (as I wrote, it is not necessarily "sensory," but that is not important to me). A dog senses higher frequencies than we do; does that mean that, for us, the source of that sound does not exist? Perhaps I cannot detect its existence, but it exists. And if I have the ability to sense that it exists not by hearing (because my ear is not sensitive to those frequencies), then I can also know of its existence. Another example: when your brain is stimulated and you see Grandma Tzilla in front of you (or simply when you remember her), Grandma Tzilla is still not standing in front of you, even though you cannot distinguish between the two situations. And if you do distinguish between them, it is not necessarily because you see a difference in the input in one detail or another (meaning that its reconstruction is not perfect), but because you have a feeling ("supersensory"? Not necessarily) that it is a memory and not the real thing. This is the feeling I have when I stand in front of a machine of iron with a given input-output, compared to standing in front of a person with the same input-output (the Turing test). Even if there is no difference in input-output, it is a different thing.

Your concluding sentence highlights the problem with your position. You actually understand the Chinese room example well, and you described it well in your own words. At "a high level of abstraction" there is a difference between a situation in which the input-output actions of the person in the Chinese room are accompanied by mental processes (understanding and sensing) and a situation in which they are performed without these accompanying sensations; yet when you zoom in to the level of neurons, you see no difference. So you yourself see that identity at the level of neurons does not entail identity at the high level of abstraction. That is, at the high level there is something more, beyond the input-output of the neurons.
Or do you deny the very possibility of a situation in which a person would sit in the Chinese room and perform input-output operations like a Chinese speaker, but would not have the “understanding” within him? In other words, if he performs such operations, then he necessarily also feels understanding? If so, then to the best of my judgment, we have returned again to the realms of the “delusional.”
By the way, on your view it is not clear to me what it even means for a person to "feel understanding." After all, you only see neurons working in him, and according to you, what cannot be empirically diagnosed does not exist. So, in your opinion, does the mental (as distinct from input-output operations) exist or not? We are back at the question of other minds, which on your own terms you have not answered.

To clarify, I will add an example regarding another aspect of humanity (=intelligence) that I think is also relevant to the discussion.
In my opinion, artificial intelligence people who talk about the intelligence of a bird (for example, its ability to navigate) or of a computer (its ability to perform complicated calculations) are wrong. Inanimate objects and deterministic systems do not have intelligence (only the ability to calculate). By their method (your method?), water also has intelligence, because it performs calculations far more complicated than a bird's (the Navier-Stokes equations). The same goes for an electron that constantly "solves" the Schrödinger equation.
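For reference, and assuming the standard textbook forms, these are the equations being alluded to: the incompressible Navier-Stokes equations, whose solutions water "computes" by simply flowing, and the time-dependent Schrödinger equation, which an electron's wave function obeys.

```latex
% Incompressible Navier-Stokes: velocity u, pressure p, density rho,
% viscosity mu, body force f.
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0

% Time-dependent Schrödinger equation: wave function psi, potential V.
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi
```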
Intelligence is based on judgment, on choosing one technique or option among several in a non-deterministic manner. When the process is deterministic, intelligence has no meaning. I assume that on your method water and electrons also have intelligence, like a computer and like a human: there is no real difference between what they do and what the human-machine, as you define it, does. So there is no need to wait for the emergence of what you call artificial intelligence. In principle, it is already possible to fall in love with water (talking to it is a bit difficult, but with the right interface that too can be arranged).
