
The connection between language and thinking

asked 4 months ago

Hello Rabbi Michael

Recently, with the rise of artificial intelligence, I've been thinking again about the connection between language and thinking. I don't know if there is any point in trying to guess whether ChatGPT and its ilk have consciousness, because I don't know how you could test such a thing even for people other than myself, let alone for a computer. But the fact that AI engines are basically language engines, many of which, at least as I understand it, are based on statistics (whichever word is most likely to follow the sentence so far, based on the vast amounts of text fed into the computer), has made me wonder how much of human communication is really thinking and how much is simply a skillful but empty use of language. You don't need ChatGPT to know this – just read opinion columns in newspapers – but it does sharpen the question.
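(To make the "most likely next word" idea concrete, here is a minimal sketch in Python – a toy frequency table over an invented corpus, not the mechanism of ChatGPT or of any real model – showing how text can be continued purely by statistics over which words tend to follow which.)

```python
# A toy "most likely next word" continuation: count, for each word in a tiny
# invented corpus, which word most often follows it, then extend a prompt
# greedily. This is a sketch of the statistical idea only, not of any real model.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_text(start, length=5):
    """Repeatedly append the statistically most frequent next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # -> "the cat sat on the cat" (pure statistics, no understanding)
```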

A few things come to mind:

1. There is a difference between language and thinking; it is impossible to say that they are the same thing. Sapir-Whorf tried to claim that they are identical, and that hypothesis has been refuted in a thousand different ways. On the other hand, the intersection is not necessarily empty (depending, of course, on how you define the word "thinking"), and in my opinion this raises the question of what thinking actually is – and the ability of artificial intelligence to imitate thinking through language sharpens this question. On the one hand, I think that language does greatly assist thinking, because it allows us to define things and organize concepts in our heads; but words are not identical to the concepts they represent, which often exist in my head even when I have no way to describe them. In addition, skillful use of language does reveal the content of these concepts – and when I say language, I also mean logical syntax and mathematical notation, which are also a kind of language. But these are not enough. Where does the element of thinking come in? Is it only in those synthetic-a-priori moments, or is there something beyond that? Is analytical thinking even thinking, or is it just a careful and skillful use of language?

2. Language itself consists of a technical layer of syntax and vocabulary, but the syntax itself reflects thought processes and the way we conceptualize the world. I am no great linguist, but to the best of my knowledge Chomsky tried to find the basic structures on which every language is built – I understand this is less popular today, but the core idea seems right to me. The upshot is that skillful use of language can not only imitate thinking, but can perhaps actually arrange concepts in a way that leads to results we did not know before – or that we knew but did not know that we knew (as I wrote above) – not only as a tool but as a reflection of existing structures. How much of our thinking is this? Perhaps I am again simply talking about analytical versus synthetic thinking, and then the question arises where this synthetic thinking is located, and whether you think it is possible for a computer to demonstrate such thinking. In other words – what is the relationship between thinking and consciousness?

3. The parable of the Chinese room shows that it is possible to imitate thinking without real thinking behind it. I think artificial intelligence sharpens this parable and takes it to the extreme, because it shows that it is possible – at least conceptually (assuming the computer lacks consciousness) – to actually think and reach new results without "real" thinking, that is, without consciousness. To me, this makes human consciousness an even more puzzling and even more "superfluous" concept from an evolutionary perspective, even though for us it is more or less equivalent to the concept of "life."

4. This also comes to mind after several years of Zen meditation, where you can actually see how the mind "interprets thoughts" – as if thoughts are simply something that happens to a conscious being rather than the product of my will. When does thinking stop being like that and start being a conscious process? This again sharpens my attempt to distinguish between thinking and consciousness.

I think these are topics that warrant a column on your blog, don't they? I think we need to sort out (for myself too) all the important philosophical questions that artificial intelligence raises – again, not about artificial intelligence per se but about ourselves: how much of our thinking is real, how much of it is an associative flow of words within empty syntactic structures, and how many of those syntactic structures are indeed empty (since at least some of them reflect a very useful conceptualization of the world).

I thought about all of this this morning and kept wondering what you would have to say about these topics. Sorry if the writing here is not organized enough; it seems to me that I repeated the same question in different ways.



מיכי Staff answered 4 months ago
Thanks for the letter. These are indeed troubling, and very important, questions. The change of direction from the discussion of AI to the discussion of us is interesting and necessary; I don't really think it is a different discussion, but rather two sides of the same coin.

At first glance there are several different questions here. Not only did you not repeat the same question several times – in each individual section several different questions appear that you did not always distinguish between: Which part of our thinking is analytical (and what is your definition of analytical for this purpose? Should you take into account all of our assumptions? Only the conscious ones? Everything that is inherent in us – which is perhaps everything, according to the mechanists?)? Is what is analytical even thinking? What is the relationship between thinking as a process (mental or neuronal; meaning and use – Wittgenstein and the Chinese room) and content that in itself deserves to be called thinking (perhaps only synthetic content?)? Does the mental emerge from the physical, is it an epiphenomenon of it, or does it drive the physical? And a few more.

I have already written about most of these many times, from these and other angles. Just recently Sarel Weinberger (a graduate of the yeshiva) conducted an interview with me on these topics as part of an online course he is building at Bar Ilan: https://youtu.be/TnKOuuz_NTM?si=xJR4onzYHFCOjhNL The interview was conducted following columns 590-592, which were dedicated to AI and its relationship to us. Columns 35 and 175 deal with the difference between judgment and mechanical processes (before the new AI era). Questions of thinking and language, and Whorf, are discussed in the series of columns 379-381. I plan to write something else soon about another aspect that came to mind while reading about an interesting phenomenon in AI.

But organizing all of this would require a book, in my opinion, and I am not currently involved in writing books (I no longer believe in that medium), nor am I well-versed enough in the innovations in the field of AI itself. But I will think about whether there is something where I have added value (i.e., where philosophical thinking can lead to refinement and conceptualization without being up to date on the details of current innovations).


אלמוני replied 4 months ago

Thank you very much! Sorry I didn't answer right away; I only get to my email in the evening, and I didn't have time yesterday.

Indeed, my questions were written in a jumbled way – I wrote them early in the morning and in a hurry – and I agree that there are several different questions within each section. I think the main thing that bothered me, beyond the questions you formulated, was the element of language. When I spoke of "analytic" I meant both the basic assumptions and the logical deductions, but not the way we came to formulate or to know the basic assumptions. What mainly concerned me is, as mentioned, the matter of language. The thought became more acute for me when I read something by Roy Tsezna, who writes a lot on the subject, and he said that even the powerful engines are essentially language engines, all of them. That suddenly gives language a different status than I had attributed to it until now. The possibility arises that it is enough to teach the computer to speak properly for real thought to emerge from it – or at least a process that accurately imitates real and conscious thought, or at least accurately enough for any practical need. And then I wonder: what is it about language that makes this possible? The possibilities that came to my mind were:

1. Language is a representation of other logical or mental processes, and therefore it is enough that we manage to use language properly in order to create a logical process. The representation becomes the essence: either literally, assuming the computer has consciousness, or only when there is a conscious person who reads the words and understands what is written in them. In any case, it is impossible to distinguish between the two unless you are the person speaking/thinking the things. I think this is one of the things that motivated Russell to create a mathematical language free of paradoxes: to cleanse language of its illogical elements so that language and logic become equivalent. It is indeed impossible, but it may be possible enough for practical needs.
2. Language itself is built on implicit concepts we have and on relationships between objects – in space, in time, and in causal relations – and this is embedded, for example, in the natural syntax of the brain that Chomsky talks about; therefore, as soon as we learn to speak, speaking already includes processes of thinking. Thinking is embedded within the syntax. This still means that from the computer's perspective we may be in option 1, but from the perspective of the person speaking, language is no longer separate from logic – not as a representation, but because language itself is truly a logical process. Of course, it is not perfect, because I can formulate absurd sentences in language.
3. Human language, too, is not really thinking, just as the engine's language is not really thinking – it could be, but thinking is a separate process that runs parallel to putting things into words. Language can help this process, hinder it, or simply reflect it, and it can also run completely separately from it.

I will try to read the columns you mentioned and watch the interview, but it will take me some time because I work from morning to night and I don't have a smartphone. But I will try, and I will think about it a little more.

By the way, have you thought about producing a podcast? It's a little more complex than recording lessons, in my opinion, but if you find a partner with whom you can record an intelligent conversation about philosophical issues, it could be very successful and much more informative, because today people consume a lot of information this way rather than by reading.

Thank you very much!

מיכי Staff replied 4 months ago

I am puzzled by the language questions you raised. I think that behind language there are ideas, and the mechanics by which AI software operates do not mean that it deals with language. Even the term "language" here is only a representation, except that the things represented exist not in the software's "knowledge" but in the programmer's mind.
The software is trained through answers and feedback on cases, and these are not determined linguistically. For example, you train software to recognize someone's face. The training is through examples, with positive or negative feedback for correct and incorrect identifications. You understand that the feedback is determined by whether it identified correctly or not – that is, by the content. It is not just formal syntax and nothing more. Therefore the training shapes the network in a way that responds to semantics. This is not a purely linguistic operation, although it is represented by language. With us, too, ideas are represented by language, but there are ideas behind it. The person in the Chinese room receives feedback from Chinese speakers, not just syntactic connections.
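(As a rough sketch of "training through examples and positive or negative feedback" – a toy perceptron on invented numbers, not the mechanism of any real face-recognition software – where the only teaching signal is whether the answer was right or wrong:)

```python
# A toy example-and-feedback trainer: a single perceptron adjusts its weights
# only according to whether its yes/no answer on each example was correct.
# The data here is invented; real systems are vastly more complex.
import random

def train(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, correct_label) with labels 0 or 1."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction  # the feedback: 0 if correct, +/-1 if wrong
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Two made-up numeric "features" stand in for measurements of an image.
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.2, 1.0], 0)]
print(train(data))
```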

As for the podcast, it's an interesting idea. I'll think about it, although lately I've had a lot of requests from existing podcasts, and maybe that already meets the need.

אלמוני replied 4 months ago

I thought about your answer. I agree in principle, because that is how I thought about things until recently: you can play with words using a computer, but the content – the meaning – remains with us, and language is nothing more than a representation. But this view is now being undermined for me, because the artificial intelligence engines that are based on language are managing to imitate thinking so well that it is already difficult to distinguish between what they do and ordinary human thinking, and to the best of my knowledge all of these engines are essentially language-based (even the advanced ones). That is, even ten years ago we had neural networks that could solve problems using a huge mass of examples, but the turning point in the development of artificial intelligence began with language engines – at least to the best of my knowledge, which is not extensive. If this is true, I keep wondering whether the picture you present – which was my picture too until recently – is indeed true, or whether there is something in language that is more than mere representation.

I should emphasize that I do not think language is identical to thinking; I raised that possibility only because it should be considered when thinking about the subject, but it is not difficult to rule out, both conceptually and phenomenologically (I know from within myself that one can think without words and speak without thinking). But if it is true, then the question I started with two weeks ago returns: to what extent, when I myself make an argument, is there thinking behind it? Or does the thinking come only after the argument has been made, or not even then? That is, it provokes thinking about when language itself manages to "think on its own" and when it does not.

As an example, I can give something similar but different – mathematics. If you give me an exercise in mathematics, say investigating a function, I can observe the function, describe it quite well from an understanding of what it does, and tell you what it will look like. That is thinking. But I can also sit with the algorithm – differentiate, build equations, move terms from side to side, substitute numbers, and so on – a process that can be purely mechanical (and as long as we adhere to correct syntax, it seems we will not make a mistake and will always reach results) – and reach the same results without understanding what I am doing, just as three-unit (basic-track) students solve problems without knowing what they are doing. In mathematics, then, the powerful language developed there "thinks" for them. Clearly, this example fits what you wrote: the language does not really think but rather embodies processes developed over hundreds of years of thinking, of which it is only the bottom line, and the meaning given to them exists only in the mind of the solver. But there is no doubt that mathematical language has great power that is largely equivalent to thinking in terms of the results it produces – and in many cases it will even replace thinking, when the function is too complicated and I have no way of predicting its behavior, as in a chaotic dynamical situation, and then we rely on language alone if we want specific results. Then the question arises: what happens with natural language? Does it also have a certain degree of this power?
I feel that the language engines suggest that it may, and then it is very intriguing to think about what it is in language, in words – natural language, not the artificial mathematical language we created on purpose so that it would think for us – that makes this possible.
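(For concreteness, here is a small sketch of that kind of mechanical function investigation in Python, using the sympy library on an invented example function: symbolic rules applied blindly, yet they reach the same results an "understanding" solver would report.)

```python
# Mechanical "function investigation": differentiate, solve, substitute –
# rule-following with no picture of the curve in mind. The function is an
# arbitrary example chosen for illustration.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x

derivative = sp.diff(f, x)                   # differentiate mechanically
critical_points = sp.solve(derivative, x)    # "move terms from side to side"
extrema = [(p, f.subs(x, p)) for p in critical_points]  # substitute numbers

print(derivative)        # 3*x**2 - 3
print(critical_points)   # [-1, 1]
print(extrema)           # [(-1, 2), (1, -2)]
```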

I saw that you have started writing a few columns on the subject, and I look forward to seeing whether this direction is eventually addressed, if you think there is any value in it at all. Also, a small correction to the previous column – I am not a "former student", I am a "student" 🙂

מיכי Staff replied 4 months ago

I intend to write about this in the column after the next one (I added another column, beyond what was planned, before it). In general, I do not see anything special in language engines compared with any other machine. Their performance is of course much more powerful, but the whole argument is that use and meaning are not the same thing. The example of the student and the mechanical investigation of the function sharpens this well. Therefore, even where use/syntax manages to reach the same results as humans, that does not mean there is thinking there (as with the student). As mentioned, water also has tremendous abilities in solving the Navier-Stokes equations (true, not in other fields, but that is a quantitative difference).
In my opinion, language engines do not really challenge our conception of the human being, and this is what I explained in those two columns. In the next column I plan to deal with the opposite question you raised: not whether the machine is a person, but whether a person is a machine – that is, whether our thinking is purely mechanical – or, unfortunately, whether language engines push further toward that conclusion (probably yes). But the question of whether a machine is a person is a different question. There is a difference between comparing performance (the Turing test) and deciding that it is a person and has thinking and awareness.

מיכי Staff replied 4 months ago

I think the discussion at the end of the last column, about thinking in stages (DeepSeek), illustrates this distinction very well.

אלמוני replied 4 months ago

I haven't read the last column yet. I absolutely agree that the mere fact of arriving at results does not indicate thought – that is why I gave the example of the student and the function – but one of the things that interests me, besides the question of whether a person is a machine, is what it is in language that makes this possible. I know what it is in the language of mathematics that makes it possible to arrive at results, but it surprises me to discover that there is apparently something like this in natural language as well. But I will wait and read what you wrote.

(I still don't know whether it is possible to conclude that our thinking is purely mechanical. I still allow for the possibility that there is a certain component of thinking, expressed in rare situations rather than in routine thinking, that is not mechanical. But I have no real basis for this claim other than the difference in experience – there is a difference in feeling between an insight that comes after organized thinking and an insight that comes from inspiration, perhaps a bit like the difference between normal science and a paradigm shift in Kuhn. But of course it is possible that this is just a different feeling, and that in fact it is an unconscious mechanical mechanism.)

מיכי Staff replied 4 months ago

I don't see this as a feature of language; it is about the content that language represents. The AI software is trained on content. The fact that it absorbs and processes that content through language is true for us as well. Language expresses content, and the correlations between words are not a feature of the language but of the texts, which express content and use language. The software does not arrive at its answers through syntactic considerations but through correlations between words, and these are determined by the content (in the human texts on which the models are trained).
Think about it: if you were to train the model on texts that are syntactically correct but express incorrect and foolish content, you would get worthless results. Hence what matters is the content (which is expressed in language), not the linguistic structure and the language itself.
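(A toy illustration of this point, using simple word-frequency tables rather than a real model, on two invented corpora: identical grammar, opposite content, and the learned correlations come out opposite – the statistics track the content of the texts, not their syntax.)

```python
# Two corpora with the same syntactic structure but opposite content yield
# opposite completions; the word correlations come from the texts' content.
from collections import Counter, defaultdict

def completion_table(text, context=3):
    """Map each run of `context` consecutive words to a counter of the next word."""
    words = text.split()
    table = defaultdict(Counter)
    for i in range(len(words) - context):
        table[tuple(words[i:i + context])][words[i + context]] += 1
    return table

corpus_true = "the sun rises in the east . the sun sets in the west ."
corpus_false = "the sun rises in the west . the sun sets in the east ."

for corpus in (corpus_true, corpus_false):
    table = completion_table(corpus)
    print(table[("rises", "in", "the")].most_common(1)[0][0])
# prints "east" for the first corpus and "west" for the second
```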
As mentioned, I will get to the question of mechanical thinking in the fourth column.

חד גדיא replied 4 months ago

On the margins of this discussion, and perhaps as an appendix to it with certain implications: for a person who speaks two or more languages, when the same thought occurs to him in one language, will it be the same – cognitively – as the same thought in another language? Intuition tells me no.

מיכי Staff replied 4 months ago

I didn't understand the question. Are you asking whether our thinking is done in language or in the ideas themselves? It is commonly thought that this is a dispute between Tosafot and the Rashba, but it seems clear that we usually think in language, though I suppose it is also possible without language. See the series from column 379 onwards.
