The Philosophical Implications of AI Software (Column 591)
With God’s help
Disclaimer: This post was translated from Hebrew using AI (ChatGPT 5 Thinking), so there may be inaccuracies or nuances lost. If something seems unclear, please refer to the Hebrew original or contact us for clarification.
Why can’t you dress up as a Hasidic Rebbe? Because if you put on
a shiny kapote and ride in a Mercedes flanked by two burly guys
— you really are a Hasidic Rebbe.
(A Lithuanian joke)
In the previous column I discussed whether artificial intelligence (AI) can decide halakhic questions, and I also commented on various phobias toward it. My assumption there was that AI can, or will in the future, function like human beings and rule correctly in halakhah. On that assumption I concluded there is no barrier in principle to entrusting it with halakhic decision-making. In this column I wish to take up a more foundational question: granted all that, can such software be viewed as a human being? My conclusion will be no — and in a certain sense this points in the opposite direction from the previous column. In the next column I will examine the factual question, namely the optimistic premise regarding the capabilities of future AI systems. We shall see that this question is not as simple as many now assume.
The Turing Test
Questions about how to define a person arose already at the dawn of the computer age. Alan Turing, one of the fathers of computer science, pondered at what point it would be appropriate to treat a computer as a full-fledged person. His premise was that this is a question determined by capabilities: once a computer has the capabilities of a human, it is essentially human. From here he derived the well-known "Turing Test" (originally called the "imitation game," which is also the title of the well-known film about him), intended to diagnose just that.
The test goes like this. We place a person in a room with two screens, one connected to a software system and the other to a person (via keyboard). The person in the room conducts free conversation via both screens with the two interlocutors and tries to determine which is the software and which is the human. If he cannot tell the difference, then this computer has human-level ability and should be regarded as a person.
The Church–Turing Thesis
To broaden the perspective, let us consider the Church–Turing thesis. To understand it, we need to know what a "Turing machine" is. It is an extremely simple theoretical device that serves as a primitive archetype for a computer. Despite its simplicity, anything a computer can compute a Turing machine can also compute — it may just take a very long time. The Church–Turing thesis[1] in its original form claims that every computable function we know can be carried out by a Turing machine. There is no proof of this thesis (we do not even know how to define "computable function" in the most general sense), but it is widely considered a very reasonable thesis. The main reason is that we currently know of no machine, practical or theoretical, that can compute something a Turing machine cannot. In other words, whatever a Turing machine cannot compute, no other machine we have found can compute either (otherwise the thesis would not hold).
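To make the notion concrete, here is a minimal sketch in Python of a one-tape Turing machine simulator. The representation (a transition table of the form (state, symbol) → (new state, new symbol, move)) is the standard one, but the toy machine at the end, which merely appends a '1' to a unary string, is invented here purely for illustration.

```python
# A minimal one-tape Turing machine simulator (sketch).
# transitions: (state, symbol) -> (new_state, new_symbol, "L" or "R")

def run_turing_machine(transitions, tape, start_state, accept_states,
                       blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head, state = 0, start_state
    for _ in range(max_steps):
        if state in accept_states:
            break
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break                        # no applicable rule: halt
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return state, "".join(cells[i] for i in sorted(cells))

# Toy machine: scan right over '1's, write one more '1', then accept.
rules = {
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("done", "1", "R"),
}
print(run_turing_machine(rules, "111", "scan", {"done"}))  # ('done', '1111')
```

Anything a modern computer computes can in principle be re-expressed as a (much larger) table of this kind; as noted above, only the running time changes.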
Many philosophers have seen significant philosophical implications in this thesis (see for example here). They argue that it implies no fundamental difference between the abilities of humans and those of a Turing machine; hence a Turing machine is, in effect, a person. Of course the machine itself is not a person; software is required that can perform all the functions a human can. For this reason the question returns to us with force nowadays, since there is a feeling that we have found such software. Contemporary AI systems (like ChatGPT) are getting very close to that ability. This obliges us to revisit a topic that until recent years was largely hypothetical.
The claim on the table is this: assuming such software exists (the Turing Test is one possible yardstick), the machine would then merit the label “person.” This itself can be interpreted in several ways. Minimally, one could say it is like a human intellect. Those who go further argue it is literally a person and deserves the treatment and rights we give human beings. The minimalist reading is in a sense a tautology: if we indeed arrive at a machine that can compute everything a human can compute, then it is equivalent to a human intellect. The more interesting question is the latter: in what, if anything, is man superior to the machine, at least when their intellectual abilities are equal?
We might discuss whether such a machine would experience sensations, emotions, have desires, etc. These notions circulate quite a bit in AI discourse, but to me they are fanciful. I shall therefore not engage them here, but only the comparison at the intellectual level and its implications. This definition is delicate, for as explained the claim, taken literally, is a tautology: it essentially says that two beings with identical intellectual abilities are identical in their abilities. There is, however, a hidden premise that all our intellectual operations are kinds of “computations,” and that deserves much discussion. Is all creative thinking a computation? If so, then one can always show there exists a wholly mechanical algorithm that carries us to the conclusion. Consider whether the move from empirical data to theory — a process thoroughly non-unique and non-algorithmic — is in fact a complicated mechanical process. Were the discoveries of relativity or quantum theory the product of some computable function? It is possible; in that case our intuition is just a faculty that shortens processes (doing the computation directly and implicitly). But if our intuition is not a shortcut for a computational process but rather contains built-in creativity, then it is not at all clear that future AI software will necessarily be able to do everything flesh-and-blood humans do. It is fairly clear that much of our ability is indeed computational, even things we thought were not, and in fact current systems are increasingly successful at mimicking them. The question is whether all of our abilities are of that sort — i.e., whether in the future these systems will be able to do everything we can. Among AI people the common view is yes, and for now the trend seems to be moving that way, but I am far from certain that is the case.
It is important to understand that one of the most prominent and complex functions that distinguishes humans is the ability to converse (the “speaking spirit”) intelligently. If conversation is a computational capacity, then it is plausible to reduce all our intelligent operations to computations. Hence Turing proposed his test via an everyday conversation experiment. But even if conversation is not a kind of computation — and perhaps especially if it is not — then if a computer can also mimic this capacity of ours, it would arguably deserve to be considered a person. Contemporary systems like ChatGPT are indeed approaching full success on conversational Turing tests, and therefore they challenge our definition of a person far more than before.
In the examples I presented in the previous column, we saw that for now the answers one gets from AI are fairly generic and not very creative. They are of course dependent on the trainer and on the data fed into them. As is known, most of these systems are “left-leaning,” since many of those feeding them in Silicon Valley hold such views (so do not be overly impressed by answers people quote in the name of ChatGPT about judicial reform and current political issues). In general, it is difficult to coax something surprising and original from such software. The text it produces usually reads like an answer a methodical person would write after reviewing the facts in a set, known order: arranging the facts, formulating the question neatly, and concluding that all possibilities exist and one should beware of choosing any one. Deciding among the options requires judgment, creativity, and non-algorithmic thinking. That is hard to get from these systems.
AI plays it safe and usually does not surprise us. Is this just a result of the field’s early stage, and will it look different later? Likely yes — but how much? Will the day come when nothing unique is left for humans to do? I do not know.
In passing, let me again note that AI sometimes invents facts for its own amusement, so in certain respects it cannot be trusted. In the previous column I pointed to a case of a lawyer who relied on ChatGPT and was sued for submitting fabricated precedents and rulings that did not exist in the materials he filed with the court. See also here (from minute 9:28) the creative fabrication of a marvelous aquarium in Kiryat Yam and a new résumé and family for Guy Zohar. Worth watching — it is quite amusing. Let the public see and beware. In the context of halakhic ruling, however, if the system passes the test on many questions, that is an indication one can rely on it not to invent precedents and halakhic answers (insofar as precedents should be relied upon at all). I assume it will be possible, at least in the future, to ensure the fellow does not do such things to us. Our discussion here proceeds on the assumption that it answers to the point and responds properly.
Everything we have seen thus far should be kept in the background as we enter the discussion. Now we must return to the beginning and examine the question more systematically.
What Is Understanding?
It is worth beginning with two well-known parables. The more famous is John Searle’s Chinese Room, mentioned here before (see, for example, Column 35, devoted entirely to our discussion here). Imagine a person who speaks only Hebrew sitting in a booth with two slots. In the booth there is a huge bin full of Chinese characters. Through one slot he is given questions written in Chinese, and he is supposed to answer them in fluent Chinese through the other slot. Every wrong answer earns him an electric shock as negative feedback. Searle’s assumption is that after an infinite amount of time it is possible that the fellow will manage to answer every question he receives with a plausible answer and avoid shocks. In another version, the person in the room has a set of rules instructing him how to answer any question — but these rules do not involve interpreting the words and sentences in the questions; they are purely typographic, formal rules (syntax without semantics). The question is whether in such a case he truly knows Chinese. This is somewhat akin to the question whether the prince in R. Nahman’s tale of the “Turkey Prince” (see Columns 199 and 354) is truly healed. He behaves like a person, but internally he still considers himself a turkey. So too in the Chinese Room: our fellow writes answers in Chinese, but only at the external, behavioral level. Internally he has no clue what the questions mean or what his own answers mean. He behaves like a native speaker — but only outwardly. Inside, he has no understanding of Chinese. It is hard to say of such a person that he is a Chinese speaker.
Searle wished to illustrate the difference between a collection of a thing’s features and the thing itself. There may be a phenomenon or object that has exactly the same outward features as another phenomenon or object, and yet they are not identical. There may be an imitation with the same external features but which cannot be identified with the original it imitates. Note: the claim does not depend on the fact that sometimes the person in the Chinese Room might answer incorrectly. Even if he answers every question correctly, it is still difficult to say he understands Chinese. He knows how to use Chinese in a way that appears to communicate with his environment, but he does not understand Chinese, and therefore he is not truly communicating. One could say this is interaction without communication.
Another example relevant here is Mary’s Room, from Frank Jackson (also mentioned here more than once; see, e.g., Columns 111, 142, 446, 452, 493, and more). Mary is a physicist who spends her entire life working in a black-and-white room. She masters all the intricacies of optics and knows everything there is to know about light waves at different frequencies and their behavior. One day she leaves her room and encounters red light for the first time. Has she learned something new? The answer is of course yes. In the room she knew everything there is to know about optics and waves, but she had no idea what color is. That can only be grasped by direct acquaintance. Here it is not an imitation but a scientific description of reality. Science does not imitate reality but describes certain aspects of it. Yet the scientific description is not the reality itself but a description, and it is wrong to identify knowledge of a field’s scientific findings with unmediated acquaintance with the reality itself.
These two examples teach us that the concept of “understanding” is rather elusive. A person who understands something behaves in ways that indicate such understanding. But behavior is not understanding; at most it expresses understanding. Thus there can be a case where a person or other creature behaves like one who understands, even though it has no understanding. Understanding is not behavior but a cognitive state of which behavior is an expression. Perhaps one can imitate the behavior — but such imitation, even if perfect, does not necessarily express understanding. In the Chinese Room or Mary’s Room, the imitation is external and clearly does not indicate internal understanding.
On the “Intelligence” of Water
In Column 35 I discussed the concept of intelligence. I argued there that in AI it is common to speak of the intelligence of computers and animals, determined by their problem-solving abilities. Birds navigate; bees manage a hive; ants, a colony; and so forth. I contended that these abilities do not reflect intelligence, and that the concept is misapplied. To explain, I suggested we look at water. The flow of water is described by extremely complex equations (Navier–Stokes), which even the greatest mathematicians and physicists cannot solve except in very simple cases — yet water has no difficulty with them. It always flows according to the Navier–Stokes equations, unmoved by how difficult the environmental conditions make those equations to solve.
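For reference, the incompressible Navier–Stokes equations have the standard form below, where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the density, $\mu$ the viscosity, and $\mathbf{f}$ the external force density:

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0.$$

Even this "simple" incompressible case has no general closed-form solution, which is exactly the gap between the equations and the water that "obeys" them.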
We can view water as an “analog computer.” To understand the term, take an electrical inductor as an example. From physics we know that the voltage across an inductor is proportional to the derivative of the current (therefore the current is the integral of the voltage). So if I feed a current whose time dependence is given by a function f(t) into an inductor, the voltage across it will be proportional to the derivative of that function. If I want to differentiate a function, all I need do is feed a current shaped by that function into the inductor and measure the resulting voltage. The inductor is an analog computer (not digital), since it is a machine that differentiates any function we like — and it does so far better than any person can, even for functions that are very difficult to differentiate.
Differentiation is easy here because the derivative is a single-valued operation we can do for any function. Integration (the inverse operation) is far worse. If we apply to an inductor a voltage given by some function g(t), the current that flows will be its integral. That is an operation we cannot perform for every function g. Here the inductor already surpasses us in mathematical ability. Does that mean it is more intelligent than we are?[2]
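In standard notation, the two relations used in the last two paragraphs are the following, where $L$ is the inductance, $i(t)$ the current, and $v(t)$ the voltage across the inductor:

$$v(t) = L\,\frac{di(t)}{dt}, \qquad i(t) = i(0) + \frac{1}{L}\int_{0}^{t} v(\tau)\,d\tau.$$

Feeding in a current $f(t)$ therefore yields a voltage proportional to $f'(t)$, and feeding in a voltage $g(t)$ yields a current proportional to its integral; this is the "differentiation" and "integration" the inductor performs without any notion of a function.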
Water, too, is an analog computer, a machine that “solves” the Navier–Stokes equations. As noted, its “abilities” far exceed ours. And yet it would be quite odd to attribute any intelligence to water — or to an inductor. These “abilities” are not intelligence. Why? Because that is simply the nature of these “machines.” They are not solving problems; they are just behaving according to their nature. The problems (the mathematical equations) and their “solutions” are our way of viewing the situation. Only when we look at the inductor or the water does it become a thinking or computing machine. In itself it merely moves electrons or water molecules in accordance with its nature.
But a computer is also such a machine. It, too, merely moves electrons according to its nature. When we look at it, this becomes problem-solving. Consider a computer “solving” 3+8. We type those digits with a plus sign between them. In parallel the expression appears on the screen. Why does that happen? Because we built a system that produces a visual image corresponding to our keystrokes and renders the picture “3+8” on the screen. If we had built it differently, the same keystrokes might have produced on the screen an image of a waterfall, a flying bird, or a purely abstract design. But it was important to us to ensure that the image would be one we read as an arithmetic exercise “3+8,” so we arranged matters accordingly. Of course the computer has no idea in any sense that it is dealing with a problem or performing an arithmetic calculation (indeed, it has no capacity to “know” anything). It just moves electrons according to the laws of physics that govern it and produces an image according to my program. That image has the meaning of an arithmetic exercise only for me, the one typing and viewing the screen — not for it. Nor is the “calculation” it performs and the arrival at “11” truly a calculation. I merely built inside it systems that move electrons in such a way that if the input is “3+8,” the electrons will move so that the screen displays “11.” The computer does not know what problem arrived; it does not know that it is solving anything; indeed it does not solve. It moves electrons according to a program I built so that the outcome corresponds to the desired result. That too is a kind of analog computing, though nowadays it is typically done digitally. This changes nothing in principle.
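To make the point tangible, here is a deliberately crude sketch in Python (the lookup table and function names are invented for illustration) of a "calculator" that never calculates in any arithmetic sense: it only rewrites one string of symbols into another according to rules fixed in advance, and the meaning "3+8 = 11" exists only for the person reading the screen.

```python
# A hypothetical, minimal illustration: the "calculator" below knows nothing
# about numbers or addition. It maps one string of symbols to another
# according to a table we prepared; the arithmetic meaning lives in the reader.

rewrite_table = {
    "3+8": "11",
    "2+2": "4",
    "7+5": "12",
}

def screen_output(keystrokes: str) -> str:
    # Purely syntactic lookup: no notion of "problem", "sum", or "solution".
    return rewrite_table.get(keystrokes, "?")

print(screen_output("3+8"))  # the symbols "11" appear; nothing was "understood"
```

A real computer replaces the table with circuitry that moves electrons, and a modern adder is vastly more general, but the philosophical situation described above does not change.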
Is the situation any different in a computer running AI software? Not at all. It is still a golem moving electrons under a program I wrote — only a very sophisticated one, so much so that even its authors cannot always explain whether it will work and why (people do not fully understand how the software produces its impressive results, yet they know how to build and improve it. As in some sciences, here too we have observational knowledge beyond our explanations). How, then, does anyone claim that AI software possesses intelligence? In what does it differ from water, a bird, an inductor, or an ordinary computer? In my view, in nothing. It is a machine we have managed to make do more complex things, more like us — yet still a machine moving electrons by our instructions. I do not see any essential philosophical change here. None of this says anything about its capabilities or the problems it “solves” now or in the future, nor about the dangers it poses. Those are important technical questions, but I do not see how they alter the essential philosophical situation. A computer is not a person, and even if it “knows” how to “solve” complex problems, we should not ascribe to it intelligence in any substantive sense. It is a picture or imitation of a human being — perhaps an excellent imitation, yet still an imitation. Just as an electromagnetic field is not the color red, and mechanical letter substitution in the Chinese Room is not speaking Chinese.
In the next column I will try to enter a bit into the nature of creative thinking (contrary to what I wrote to Tirgitz), and sketch a map of considerations by which one can think about the chances of arriving at software that succeeds in doing everything humans do in these domains. For now, I shall examine our topic from the angle of Wittgenstein’s philosophy.
ChatGPT and Wittgenstein
A few months ago someone sent me remarks by Prof. Yitzhak Ben-Israel (mentioned in Column 502), who pointed to an article by Dotan Reis. Reis discusses the difference between machine and human regarding thought, creativity, and speech. His starting point is the (later) philosophy of Wittgenstein, in his Philosophical Investigations. Wittgenstein’s main innovation there concerns the relation between meaning and use.
At the beginning, Reis writes the following:
Another aspect that in my view does not get enough attention is the shadow cast by ChatGPT and the like over our conception of ourselves as human beings. Perhaps the success of ChatGPT requires us to update our concepts of what “thought,” “creativity,” and “speech” are. Does ChatGPT think and speak like us, or is it an imitation of the real thing? What does it say about thought if an AI model can think, and what does it say about us if such a close imitation of reality is possible?
He raises two possibilities: that the software imitates us, or that it imitates (?) the real thing. What exactly is the difference? If we built a perfect imitation of our ways of thinking, then there is no need to update our self-conception. It is a structure that imitates us but has no abilities of its own. It does not know how to think; it merely imitates what we would do in such a situation. But if it is an “imitation” of the real thing, that means it itself is thinking, perhaps like us — not because it imitates us, but because it, too, thinks (we are similar because both of us are doing the same action). Of course the term “imitation” is infelicitous in this second sense.
If the software imitates us, that still raises questions about us (if software can imitate us fully, that suggests something mechanical and not very creative in our thinking). And if it imitates the real thing, again questions arise about our creativity — or perhaps that creativity is not unique to humans, which prompts reflections on the nature of creativity in general and especially on its connection to our freedom of thought (see Columns 35 and 175, and also the next column). Now to Wittgenstein.
It is common to think that concepts and statements in language have meaning outside themselves, to which they point. Language is a description of objects, facts, and relations among them, and people assume that the meaning of language is those facts it describes. Moreover, when we say something in language, the assumption is that first we understand it, and only then do we express it (see on this Column 381 on Whorf, and that series in general). Wittgenstein argues that this fails the test of reality. Observing our use of language shows us that this is not really its meaning. Hence he claims that the meaning of a statement is nothing but our way of using it, and one cannot sever the two. I think no one disputes that the meaning of a statement or concept is context-dependent and can change with circumstances; but Wittgenstein claims more: there is no meaning detached from use. Use is meaning. I will not enter into this assertion and the various ways to understand it, because on its face it strikes me as absurd. He assumes that use can develop without understanding, and in the columns noted I have already pointed out the absurdity here.
In light of this philosophical move, Reis raises the following question:
There is no doubt that ChatGPT knows how to use words, and that is what makes it such an intriguing case for examining this view.
Using words is a technical matter, but as we saw above (in the Chinese Room and more broadly), identifying use with understanding is highly problematic. The software “knows how to think” in the same way the person in the Chinese Room “knows” Chinese. Proficiency of use (how often it will err) is irrelevant here. Even with perfect proficiency one cannot equate its state to that of a person who understands Chinese, because there are no cognitive processes occurring behind that use. Note well: it may be that such a machine can do everything we can do (see next column), and still I am unwilling to ascribe to it understanding, judgment, creativity, and the like. It is a perfect imitation of processes that occur in us, but an external imitation. Moreover, it succeeds only because we did it first and the machine imitates us. Would such machines, had they existed in Adam’s time, have independently developed thought and science like human beings? I am very doubtful — but that is next column’s topic.
Consider, for instance, the text of ChatGPT’s answer to Reis’s question (translated by him):
The training process of ChatGPT raises questions about the relation between language and reality. The model can generate coherent, meaningful language based on statistical patterns and relations among words, without necessarily having an understanding of objects in the real world or of the concepts to which language refers. This highlights the idea that the relation between language and reality is complex and not always simple and direct.
Perhaps I am biased, but this text (like the ones in the previous column) is very generic, mechanical, and vague, and does not really display creativity. There is an interesting point about the relation of language to reality, but it was smuggled in by the question. It seems to me about what I would write if I had just woken up and did not want to invest thinking time in the answer.
Reis writes:
In much of his thought, Wittgenstein tries to separate practice from what stands behind it, which is inaccessible. ChatGPT is the ultimate test case for his approach, in that its practice — writing — is very similar to ours, while we know that what stands behind it is not human. If meaning is indeed in use, then there is no difference between a person's statement and an AI statement.
But this is precisely the crux. What stands behind the practice is inaccessible to us. It is hard for us to describe the thing-in-itself beyond its verbal description (for any attempt to describe it will itself use words). But the fact that it is inaccessible does not mean it does not exist or is unimportant. It is the most important thing, and verbal description is our best way to approach it. This argument shows not the identity of meaning and use but rather the inaccessibility of meaning and the natural accessibility of use. Therefore this kind of software raises no philosophical questions about what understanding is. It does raise questions about whether understanding can produce outputs that cannot be produced without understanding (by software which by definition understands nothing). Here the answer is not simple, and I shall need it in the next column. But any philosophical conclusion about the essence of understanding and of human speech is, in my view, mistaken.
Reis asks there:
If one can know everything about the world solely from learning semantic relations among words, what is the value of all these experiences we have accumulated?
The mistake is that one cannot “know” anything about the world solely from semantic relations (see the Chinese Room). What one can do is use words correctly in a technical, external manner. That is emphatically not the same thing.
I already noted that any mechanical imitation of a human action can raise the same question. There is therefore nothing special in today’s AI systems regarding the philosophical question of understanding. It is simply a better, more complete imitation. Even in the past, one could ask whether a computer has “understanding” like a mentally challenged child. My answer then, too, was no — because a computer has no understanding. The progress from the “understanding” of a challenged child to that of an intellectual genius does not change the essential distinction.
Later, Reis writes:
It is interesting to see that our process of learning language does not presuppose understanding. When children learn their first words, they do not learn them through an understanding of meaning but through use (Wittgenstein discusses language learning in the Philosophical Investigations, §§1–21). They learn that in certain situations certain things are said. We tell them, “Say ‘Mama.’ Ma-ma.” We ask them to complete sentences: “One, two, three, hands on the—.” We teach them to link a situation to a word: “Say bye-bye.” And it continues for many years. We say “Bon appétit” without reflecting on the “meaning” of the word. So too with “Amen,” “Mazal Tov,” “Good luck.” One can say we teach children to follow rules. They learn statistically what word should come next.
Even if he is right about how a child learns, he ignores the understanding in the background of that learning. The fact that understanding is acquired by training a skill — syntax generating semantics — tells us something about us and how we learn. After practice, understanding surprisingly emerges. But it is not correct, on that basis, to identify understanding with use. That is a common philosophical and statistical error: that A correlates with B or even causes B does not mean they are identical.
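As an illustration of the phrase "learn statistically what word should come next" in the quoted passage, here is a toy bigram model in Python; the tiny corpus and the whole setup are invented, and are of course a caricature of how systems like ChatGPT are actually trained.

```python
# A toy sketch of next-word prediction: count which word follows which in a
# (made-up) corpus, then sample the continuation by frequency. No meaning,
# only statistics of use.
import random
from collections import Counter, defaultdict

corpus = "say mama . say bye bye . one two three hands on the head".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    counts = following.get(prev)
    if not counts:
        return "."                      # nothing learned for this word
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("say"))  # "mama" or "bye", chosen by observed frequency
```

Whether use of this kind, scaled up enormously, ever amounts to understanding is precisely the question the column is pressing.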
My Critique of Wittgenstein
My critique is not of Reis. If one grants Wittgenstein’s philosophy, then Reis is right. The question is whether Wittgenstein’s identification of meaning with use is correct. As explained, in my view there is no doubt it is not. Each of us experiences this immediately. Perhaps one can reach all our conclusions without understanding and only via syntax — but that does not mean that understanding contains nothing beyond use.
I think Wittgenstein’s problem here is very typical of analytic thinking (unsurprisingly; he was an analytic philosopher). He has decent arguments showing why meaning is inaccessible to us and how one can produce thinking’s outputs without understanding, only via use (AI sharpens this point). But the alternative he proposes — identifying understanding and meaning with use — has always seemed problematic to me. Analytic philosophers tend to take a good question and turn the question mark into an exclamation point — to take a difficulty and turn it into a philosophical thesis. They ignore the intuition screaming at us that we are dealing with a difficulty — that the hard-to-grasp view is still correct (i.e., they ignore the difficulty in the view they themselves propose. It may fit the accessible facts, but it is plainly false).
One example I have treated elsewhere concerns defining the difference between prohibitions and positive commandments in the Torah (see my article here). I noted there that this is a typical feature of academic-analytic thinking. See also Column 202 and the series 414 – 418. See also my article here on different approaches to studying Jewish thought. There I cited an article by the late Aharon Shemesh, who points to the difficulty of identifying the differences between lo ta’aseh (prohibitions) and aseh (positive commandments). The initial impression is that the distinction is pragmatic: a positive command tells us to do something (one violates it by inaction and fulfills it by action), while a prohibition tells us not to do something (one violates it by action and fulfills it by refraining). This initial distinction does not withstand the facts. There are positive commandments fulfilled by refraining (Sabbath rest; fasting on Yom Kippur), and prohibitions fulfilled by action (e.g., “Do not stand by your fellow’s blood,” “Do not place blood in your house”). Hence Shemesh concludes that the distinction between prohibition and positive command is linguistic, not pragmatic: the character of a commandment depends on how the Torah phrases it.
That conclusion is of course correct, but it is very odd to think that this phrasing was not itself based on some real distinction. Is it plausible that the Torah arbitrarily chose to phrase some commandments as prohibitions and others as positive commands, irrespective of their content? There must be some substantive difference, of which the phrasing is only an expression. I will not enter here into the answer I proposed (see in detail in our article on the sixth root), but for our purposes the analogy to Wittgenstein’s error is immediate. There, too, he identifies the expression of a thing (use, language) with the thing itself (understanding, meaning) because of the difficulty of understanding the thing. And thus our lack of access to the thing itself is interpreted as if the thing itself does not exist. That is a classic analytic shortcut — and of course a mistake.
In the example Reis brings about rule-following, Wittgenstein’s conclusion is that, because it is difficult to understand how we follow rules, there are in fact no rules. There is only our use of these principles, which appears as if there were rules. Again, I think the same error is at work. Clearly there are rules. Our way of learning them is through use. It is hard for us to describe the rules and how we learn and follow them — but that does not mean they do not exist. Our use of rules or language expresses an understanding, sometimes nonverbal and not fully conceptualized, of them. But at least in us (unlike in AI software), use does not stand alone.
And Another Answer from ChatGPT
No discussion like this is complete without the customary exercise: seeing what the software itself says about the question of its own use. In the post cited, Yitzhak Ben-Israel reports that he sent ChatGPT the question, “What would Wittgenstein say about your verbal abilities?” and he brings the translated answer:
As an AI language model, I do not have personal experiences, consciousness, or the ability to hold opinions or beliefs. Therefore, I cannot speculate on what Ludwig Wittgenstein, an influential 20th-century philosopher, would say specifically about my verbal abilities.
However, Wittgenstein was known for his interest in language and its use in communication. He emphasized the importance of language in shaping our understanding of the world and the limits of what can be meaningfully expressed. He argued that language draws its meaning from its use in specific contexts and forms of life.
If Wittgenstein were to evaluate my verbal abilities, he might consider the ways I use language to generate responses based on patterns and examples from the training data provided to me. He might also question whether I can truly understand and grasp the meaning of the words and concepts I use, or whether my responses are merely mechanical and lack genuine understanding.
Ultimately, it would be a matter of philosophical debate and interpretation how Wittgenstein might assess the verbal abilities of an AI language model like me.
Again, you can see that this is a fairly generic text that summarizes the topic but does not truly offer anything imaginative. It is, of course, an impressive achievement for AI software, but far from the answer I would expect from a person with creative thought. I think there are also outright errors here, such as treating as an open question whether it can truly understand and grasp the meaning of the words and concepts it uses, or whether its responses are merely mechanical and lack real understanding. Clearly the latter is correct, and setting out both options is merely a product of what its trainers put into it. There is a debate among AI researchers on this; in my view, those who hold the first position are simply mistaken (they identify use with understanding. I find it hard to believe any of them claims AI software has understanding in the accepted sense). But the software presents both options because that is what it has been trained to do.
I must say there are quite a few people whose answers to such questions would look quite similar (see the beginning of the previous column on AI “vorts” and sermons versus genuine lomdus). Does this mean ChatGPT is a person, or that those people think like machines? I incline to the second option. The future will tell whether AI systems will achieve truly creative thinking and whether there will remain any human function they cannot imitate — or whether a difference will always remain (I will address this in the next column). But even if no difference remains, clearly there is no understanding there.
Therefore, arriving at such a perfect imitation is less impressive than one might think. It may tell us that we feel creative when in fact our thinking is mechanical — but there is still no reason to identify software with a person. Software does not think, has no intelligence, and in fact has no understanding at all. It is an (advanced) imitation of human thinking, but it relates to a real human roughly as a photograph relates to his body. The difference between today’s imitations and those of the past is primarily quantitative (in complexity and sophistication).
This means that with respect to our attitude toward computers with such software and capabilities — or their “rights” — there is no reason whatsoever to change anything compared to the attitude we gave, and still give, to primitive computers to which no one would have dreamed of attributing human status. See this response to the previous column.
One implication concerns the question I was asked following the previous column. I was asked about the value of Torah study in a situation where software can produce learned texts at the level of the greatest scholars or even beyond. I answered that in my view this has no significance, and the duty to study in depth remains, for our aim in study is to cleave to God’s will — not necessarily to generate new answers. Just as the existence of a wondrous scholar does not obviate the study of all other learners. Of course the fact that the machine “studies” (insofar as one can speak of study without understanding) has no religious value. This expresses the fact that this machine, sophisticated as it may be, is not a person.
The last two columns have dealt with our relation to “thinking” machines, even on the assumption (perhaps to be realized in the future) that they can perform everything humans can. In the next column I will try to touch on the factual question whether this will indeed be the case.
[1] Alonzo Church was a very prominent logician and Turing’s doctoral advisor.
[2] See there the discussion whether a single, distinct ability counts as intelligence, or whether varied abilities are required. In my view this does not really matter for our purposes.
Thanks for the interesting philosophical analysis. The claim (false, as you show) that if there are statistics then people presumably act in light of them confuses cause with consequence. The statistics exist because that is how people behave, not the other way around (it is a bit like saying that if a woman had a child with her husband, she did so because "most acts follow the husband"…). On the contrary: when people examine statistics carefully, it is usually not in order to act in light of the results, but to find the characteristic that will allow them to overcome the statistics and fix something in the world…
Thanks for the analysis. In light of your words about people thinking like machines, it would be interesting to define a reverse Turing test that tests the person rather than the computer: a test in which a person is compared against a computer, and the question is whether the person manages to be recognized as a person. (I am aware of the formal equivalence with the regular test.)
So we created man.
And what about the verse "Whoever sheds the blood of man, his blood will be shed, for in the image of God he made man"?
And reason suggests that murder would not be considered a crime, for what crime is there in stopping the operation of some algorithms?
I read, in a book I think is by Professor Yakir Shoshani, a description of a man hospitalized in a hospital who after a few days is told: "Your physical condition is quite precarious. We have made an exact copy of you, and this failing copy will be extinguished. Don't worry, you will continue to exist in the second copy."
What will the precarious copy do?
I think there is some evasion here – after all, both the Chinese room and Mary's room are precisely intended to draw more precise distinctions in questions of understanding and knowledge, to distinguish it from pragmatics (in the case of the Chinese room) or to include phenomenology in it (in the case of Mary's room). You are right that the fact that AI is able to do the same things as humans does not necessarily make it human, but the question remains what does. Alternatively, if someone comes along and claims that he has actually managed to upload his consciousness to a computer, how can we know whether he has really succeeded or not?
And as an aside: ChatGPT and its ilk are not "fed" left-wing material; rather, there is crude intervention at a later stage to prevent it from saying certain things, and in any case the distinction between left and right in this context is very dichotomous and unhelpful.
I don't know what it is; I think it is the soul (our spiritual dimension). My starting point is that each of us feels that the software lacks the mental layer, so the burden of proof is on those who claim it has one. My argument is that its functionality proves nothing. I don't see what the evasion is here.
These programs are definitely trained in a left-leaning environment, and this is well known; it tilts their "worldviews" to the left. I'm not talking about the political correctness you mentioned, but about the training itself.
The evasion, of course, lies in assuming that there is a mental layer that is not expressed functionally, which may be true (or at least sounds intuitive) but does not answer the question of how one can determine whether or not a mental layer is present. Our feeling that the software has no mental layer is not very interesting, given that it is based on familiarity with other programs; the question is how we can know this about a new type of software.
Regarding the left-wing training, I would be happy to be enlightened. I know of many claims that the final results are left-wing because of those crude interventions; I do not know of a single claim that the training dataset itself is left-wing. It also does not sound particularly plausible to me, because I have no idea how one would compile a huge, left-wing dataset.
Indeed, that is my claim. It is just like with us: in my opinion we have free choice, and I have no experiment that will prove it, just as there is no experiment that will disprove it (see the next column, to be published soon). There are things whose existence cannot be determined empirically (such as the existence of God, or even our own existence from the perspective of idealists). One can form an immediate impression, and that is all. Disappointing? True. But what can you do; life is not a picnic, and not everything is accessible to scientific methods. See again in the next column.
Regarding training, think about the press that is fed into the program. If you feed it the New York Times or Haaretz, and especially if you give these sources greater weight than their counterparts, you get left-wing software. You can call that crude interference or not. I heard these things from knowledgeable people, and I do not have sources for them. In principle, of course, it is possible.
https://www.ynet.co.il/digital/technews/article/s1cwdibut
I admit that I only skimmed (I am short on time), but still, just regarding two matters you addressed:
As you mentioned, ChatGPT (to the best of my knowledge the most advanced natural-language-processing-based AI available today) provides, to put it mildly, quite unreliable information, so there is no way it can write in a manner similar to people like you, who rely on a deeper theoretical understanding of the material they are talking about. In general, aside from its remarkable ability to help programmers, it does nothing impressive beyond formulating answers that relate what the user writes to the subject, managing to pull some genuinely relevant information from its database, and expressing itself without spelling errors in a fluent and coherent manner.
All OpenAI did was upgrade Project Debater so that it can talk about everything and not just in the context of debate. It may sound really complicated to do, but in practice it is fairly easy, because it is based on exactly the same technologies (NLP, data mining, machine learning, deep learning) and concepts from computer science; no wonder OpenAI developed ChatGPT with a small group of a few dozen employees over a few years. It is much easier than it sounds.
I think you grasped this point very well yourself, even without being aware of what I wrote. In my opinion ChatGPT really does not pass the Turing test, and its way of "thinking" (which, by the way, is not actual thinking but just predicting the next words in a sequence!) is not similar enough, because it cannot trace sources of information from the ground up (whether on the Internet or by contacting real people who understand the subject).
Therefore there is no chance that this kind of AI will be able to replace rabbis, philosophers, scientists, etc. in the near future. They are literally nowhere near the thinking abilities of people like you. The articles it writes in response to user requests are not always even syntactically correct, and it is not clear what sources of information it relies on (maybe it just relies on Wikipedia, blogs without references, etc.).
Sometimes I feel that even video-game AI is much closer to AGI and to passing the Turing test than ChatGPT: even though such systems are clearly not AGI, are not based on NLP, and can do nothing but play games, they do an excellent job of tracking real players, using machine learning and neural networks to "get better at the game," and so on. They can play at a very high level, so their abilities are much closer to those of a real human player than the often completely incorrect information that ChatGPT provides.
I'm also interested in what linguists think about ChatGPT. Michael, if you know anything about this (and/or if you mentioned it in the article and I didn't notice), I'd love to know.
It is important to emphasize that chatGPT, by the way, is just software and nothing more, on a material level.
It runs on code only (of a system of software whose integration and “communication” occurs through, you guessed it, code and internet communication).
So if these things, combined with a little UI that provides the UX (user experience) of a conversation with a real person, can make users think of the software as if it were able to think/speak/understand/possess intelligence, as you discussed in your article…
In my opinion, it all comes down to the cues that help the user operate the software and understand how to use it, how it works, and what is happening in it at any given moment.
Among other things, the software's ability to convey qualities of the human mind is thanks to the application of ideas from the field of neural networks, in which AI is developed on the basis of principles of how the human brain works, down to the level of the connections between individual neurons and the integration and "communication" between them (this is what the word "networks" refers to).
Let's not forget that AI means artificial intelligence; the word intelligence is part of the very definition of the concept.
So perhaps it would also be useful for the team at OpenAI that developed it (and especially the programmers among them) to provide information about the software's code that would help us understand whether it really involves the concepts of neural networks and the concepts of understanding/thinking/speaking/intelligence.
What do you think of this, Michael? Is there anything new here for you? Where am I wrong or right?
The fact that it was artificial neural networks that apparently led to the breakthrough may not be a coincidence. Like the brain, they too are based on conduction of electrical potential, on signaling between modular units (cells), and on plasticity. Of course, biological cells are thousands of times more complex. The fundamental difference from the Nazis in the classic computer game Wolfenstein is that these networks have learning capabilities. On YouTube you can see examples of robotic systems that learn to walk like a baby, or those terrifying robotic dogs of the US military. Even at the level of my instincts as a human, the feeling is that this is a biological creature. Unlike most mammals other than humans, in some of these robots walking requires training and learning. A biological creature is not only a brain but also a body, and it may be that the feedback between body and brain, plus sexual reproduction, is part of the definition of life. I would say that even if one day, in 50 years, there were a creature similar to a human, for that matter I would prefer to kill it, as in Blade Runner.
As far as I understand, their code and technology are open to all.
Where is the code then? Here?
https://github.com/chatgpt/chart
Regarding the attitude of linguists - as with everything, there are opinions here and there. See, for example, here: https://ling.auf.net/lingbuzz/007190
I don't know what linguists think about it.
I think it's pretty close to passing the Turing tests. It's true that the clumsiness of the formulations and their genericity hint at its mechanical identity, but there are also quite a few people who react that way.
It is known that in the world of chess, AI programs play moves whose merit humans do not understand, but after many moves it always turns out that they are better than what humans would have preferred to play. Suppose in the future there were a similar situation in the world of Torah and Halacha, where computers could generate halachic rulings whose correctness humans do not understand, but would agree that the AI is probably closer to the truth even if we do not understand it. In such a situation, would we rule according to what the AI tells us? Or, in keeping with the value of autonomy, would we prefer to rule according to human reasoning, knowing that we are probably further from the truth? I am asking not only about someone who is qualified ("bar hachi") and rules for himself (where the value of autonomy clearly outweighs). I am also asking about someone who is not qualified and needs to choose a rabbi: is it more correct to choose a human rabbi, perhaps in order to realize the value of autonomy through him? Or is it better to choose an AI rabbi in order to get closer to the truth?
They simply know how to calculate more moves ahead. Just as they do multiplication faster. This advantage is probably also relevant to reasoning, although there the problem is usually not computational power and complexity but simply mistakes. This is assuming that these programs have no disadvantage compared to humans in creative thinking, etc. It will be discussed in the next column.
It's not exactly calculating more moves ahead. Basically, chess AI programs don't scan all the possible moves in chess because the number of possibilities is so large that they technically can't scan them. What they do is they recognize patterns of moves from examples they've been given in the past and know which move will produce the best outcome statistically without relying on mechanical forward calculation.
You are talking about the question of how the software is brought to this ability (whether by scanning for possibilities, as in classical software, or by training – like artificial intelligence). But I was talking about the product of the process, and it is characterized by the ability to predict more steps ahead than a person. What I argued is that in halakhic jurisprudence, this is not the important ability, nor is it the basis of human weakness.
Okay, but let's say hypothetically that there was a computer that knew how to rule better than a human, would we follow its rulings even where we disagree with its ruling but recognize its superiority?
This is no different from a human posek (decisor).
Do you assume that the foundation of halakhic rulings is similar to moves in chess? (No atonement begins)
Will AI software strive to prove the existence/nonexistence of God?
Where does this strange question come from? It depends on which software and who trained it. What does it mean that it will "aspire"? And why is all this interesting?
Aspire, according to the information it contains, without being specifically programmed for the question. For me, this is the measure of the existence of consciousness; if the answer is negative, then it is easy to distinguish between the software and a human; if the answer is positive, then we have succeeded in creating a human in the body of a machine, with all the implications that this entails.
Regarding ChatGPT's leftism, see here:
https://www.ynet.co.il/digital/technews/article/s1cwdibut