Machine learning
BS"D
Isn't machine learning, such as unsupervised neural-network learning, a problem for the Platonist claim that concepts cannot be understood without reduction to prior analytical concepts, i.e., that patterns cannot be distinguished in a purely empirical manner?
For example, such a system can determine from pictures of cats and dogs which is a cat and which is a dog, without any prior definition.
Everything there is well-defined mathematically, so I don't see any contribution to the philosophical discussion there.
An unsupervised algorithm is not exactly trained. You give the system all the inputs, but without telling it what the correct classification of each input is. So what does the system do? For example, there are algorithms that do clustering, that is, division into groups. Say you enter a thousand points in a plane and ask the system to find an “optimal” division into N groups of points, such that the distances within each group are as small as possible and the distances between the groups are as large as possible. So all there is here are definitions of the distance within a group and of the distances between groups, plus an iterative (deterministic) process that improves and converges.
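A minimal sketch of such a process, in Python with NumPy; k-means is one standard algorithm of this kind, and the data and names here are purely illustrative:

```python
import numpy as np

def kmeans(points, n_groups, n_iters=100, seed=0):
    """Iteratively split `points` into `n_groups` clusters (k-means style)."""
    rng = np.random.default_rng(seed)
    # start from n_groups randomly chosen points as group centers
    centers = points[rng.choice(len(points), n_groups, replace=False)]
    for _ in range(n_iters):
        # assign each point to its nearest center (within-group distance)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its group (keep it if the group is empty)
        new_centers = np.array([points[labels == g].mean(axis=0)
                                if np.any(labels == g) else centers[g]
                                for g in range(n_groups)])
        if np.allclose(new_centers, centers):  # converged: the division is stable
            break
        centers = new_centers
    return labels, centers

# a thousand points in the plane, as in the example above (made-up data)
points = np.random.default_rng(1).normal(size=(1000, 2))
labels, centers = kmeans(points, n_groups=3)
```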
If there are a thousand 10-by-10 images (that is, a thousand points in a hundred-dimensional space) and you ask the system to find an optimal division into two groups, then it is quite possible that this division will coincide with the cat/dog division, because the “distance” between two pictures of cats will be smaller than the “distance” between a picture of a cat and a picture of a dog. The distance function can be defined and adjusted in different ways. Even better results are expected if, instead of entering the pictures as they are, they are entered after an encoding process (for example, the output of the inner layers of another network that was trained, in a supervised manner, on a broader task).
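To connect this to the previous sketch: flattening each 10-by-10 image gives a point in a 100-dimensional space, and the same clustering applies. The `images` array and the `encoder` step below are illustrative placeholders, not a real dataset or network:

```python
import numpy as np

# stand-in for a thousand 10x10 images (real data would go here)
images = np.random.default_rng(0).random((1000, 10, 10))
vectors = images.reshape(1000, 100)  # each image is now a 100-dimensional point

# the "distance" between two images is then just a vector distance; it can be
# defined and tuned in different ways, e.g. Euclidean, which is also what the
# kmeans sketch above uses internally:
def image_distance(a, b):
    return np.linalg.norm(a - b)

# find an optimal division into two groups, reusing the earlier kmeans sketch
labels, _ = kmeans(vectors, n_groups=2)

# better results are expected if the images are first passed through an
# encoding step, e.g. the inner layers of a separately trained network:
# vectors = encoder(images)   # hypothetical embedding, not defined here
```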
By the way, even in the supervised case, since overall this is a well-defined mathematical process (even if computed only numerically) of finding a minimum of a function of n variables, I have a hard time understanding why it gets linked to philosophical discussions.
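For illustration, assuming the simplest such case: supervised training as plain gradient descent minimizing a mean-squared-error loss over n weights. The data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # 200 labeled examples, 5 features
y = X @ np.array([1.0, -2.0, 0.0, 3.0, 1.0])  # their "correct" answers

w = np.zeros(5)        # the n variables over which the minimum is sought
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of the mean squared error
    w -= lr * grad                            # deterministic descent step
# w converges to (approximately) the minimizing weights
```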
It still uses inputs, if only through its programming. This is not creation out of thin air.
As far as I understand, your claim that this is a well-defined calculation, and therefore unrelated to philosophical discussions, is the same as my claim that there is no understanding or identification here.
Understood.
I did not fully understand the commenter's words, so I will address the Rabbi directly.
First of all, the Rabbi claimed that the machine should not be called a learner but rather trained; as I understand it, he claimed this only because it does not understand.
But it sounds extremely puzzling to me that this fact is relevant to the issue here.
After all, the whole point of my claim is only to refute the famous argument for the Platonic approach, which says that the world cannot be understood without prior ideas, for example because it can be classified in infinitely many different ways (in the style of the claims about essentialism in biology, etc., which the Rabbi also uses).
So although I agree that the machine does not understand, as in the thought experiment of Mary's room, there is no identity between “understanding” and “dividing” into different categories.
In fact, as far as I know, in unsupervised learning you need to provide the system with a large number of images (inputs), for example of dogs and cats, *without saying* which image is of which (that is, each image does have a dog or cat classification, but it is unlabeled). You tell it to classify the images into two groups, and it finds the most appropriate classification (for example, by identifying patterns of shape, color, or criteria we don't even think about). And as a matter of fact, good systems manage to classify just like humans do: into dogs and cats.
Similarly, if you give it many images of red and green apples, it will be able to “divide” between them and classify them into red apples and green apples. And when you give it an apple next time, it will be able to guess whether it is green or red (and again, the inputs do not come with any prior statement of which apple is green and which is red).
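A sketch of this apple example under the same assumptions, reusing the kmeans function from the earlier sketch; the color features and data are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
# each apple is represented by made-up (redness, greenness) features;
# no labels are given to the system at any point
red_apples = rng.normal([0.9, 0.1], 0.05, size=(50, 2))
green_apples = rng.normal([0.1, 0.9], 0.05, size=(50, 2))
apples = np.vstack([red_apples, green_apples])

labels, centers = kmeans(apples, n_groups=2)  # reusing the earlier sketch

def guess_group(new_apple):
    # the "next apple" is assigned to whichever group center it is closer to;
    # which index means "red" is arbitrary: the system only knows two groups
    return np.linalg.norm(centers - new_apple, axis=1).argmin()

print(guess_group(np.array([0.85, 0.15])))  # lands in the red apples' cluster
```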
Also, the argument about inputs is even more unclear to me, because how is this different from the eyes? We see many items in the world and divide them into patterns. Does the fact that we have eyes refute Aristotle's approach and strengthen Plato's?
So even if this algorithm operates by a defined calculation, why can't we say that this is how our brain is programmed to operate? And in that case, it would all work without ideas.
I don't understand what's not clear. A machine can be made to behave in a certain way. What does this have to do with Plato? He was talking about our way of understanding things. I can make a football fly north; does this contradict the (fictional) Plato who said that it is impossible to go north without having decided to? Or did the football decide?
This is not training, this is learning. The machine learns.
Whatever you say about the machine, you can say about the person.
But here it is not exactly that we make the machine behave in some way; rather, it manages to perform a classification “by itself” in a way that is consistent with the classification made by human beings.
If so, it is already relevant to Platonism, which claimed that in order to perform a classification *you* need to have prior ideas, because otherwise you could classify in any other, arbitrary way. This is not a proof against Platonism; it only removes the support that this argument gave it (an argument the Rabbi also uses, to the best of my memory).
Maybe I will ask the opposite:
Suppose we deny the ideational sense. In your opinion, if you enter a room full of dogs and cats of all different types and colors, without any prior acquaintance with or knowledge about animals in general, or about dogs and cats in particular,
could you classify them into category “X” and category “Y”? Or not?
If you think it is not possible, then the fact that the computer can do exactly this work shows that no ideas are needed; it is a matter of mental calculation or something like that (Aristotle, as you say).
As I said, the computer does not understand the difference but only acts as if it does. Plato, I believe, argued something about understanding. It is true that if we see understanding as an epiphenomenon, that is, as a product that accompanies the mental calculation rather than what generates it, then perhaps your argument is valid. One could say that a person calculates in his brain just as the software you described does, but in a person the result of the calculation obtained in the brain is accompanied by understanding in the mind. On that view, understanding is a by-product of a calculation that was made automatically.
But beyond that, we must remember that this computer was also programmed by a person, and that person has experience in making such distinctions.
Regardless, I have written several times that I think such an argument is quite weak.
Thank you very much,
Regarding the first paragraph,
This is exactly the point I meant; I just didn't know how to phrase it in these words (understanding as an epiphenomenon). This is also why I didn't understand the distinction drawn in the first two lines and in previous answers between understanding and what the machine does.
So, in short: what is meant by that concept that separates understanding from classification (or from “training”)? And what reason do we have to reject the view of understanding as a kind of “epiphenomenon” (on a dualistic substrate, of course)? Isn't that the reasonable and simple view? Even if we say there is interaction in certain cases, such as in judgment, then at least with regard to the ability to classify, the ideational approach can still be rejected, and this significantly reduces the population of the heaven of ideas… Has the Rabbi written about this subject elsewhere? Because it is really unclear.
We can give an example of this: even the Rabbi would agree that we have free choice (dualistic interaction), and yet vision is an epiphenomenon of physical processes in the brain, which are only reflected in consciousness as colors.
So if this is true, there is no reason to assume that classification cannot be carried out as a by-product of a calculation made automatically (and in any case, we could give up the world of ideas).
Regarding the claim that the computer was programmed by a person: I do not know these algorithms well enough, but to the best of my knowledge the programmer does not actually try to bring about classification according to one criterion or another, just as there are no labels on those inputs.
And regarding the claim that this argument is weak: it is precisely our way that when there is no evidence, we resort to slogans and ad hominem; and since it seems that you understood the argument well, “maybe your argument really has some merit” 🙂
Vision is an epiphenomenon of physiological processes, but understanding is not; otherwise our judgment would have no meaning. The same goes for the will: if it were an epiphenomenon of the brain, then we would have no choice.
I wrote that this argument is weak because we have the ability to synthesize concepts. The line between creation from something and creation from nothing is not sharp. This is parallel to the weakness of the ontological argument (the proof of something's existence from the very fact that we have a concept of it; here too, a person can synthesize concepts).
I'm not sure that's entirely accurate: even if vision is an epiphenomenon, you still have free choice about where to go.
And likewise, even if the classification of objects is an epiphenomenon of the brain, we may still have the ability to decide at a higher level which argument is correct (whether using existing definitions or with the help of intuitions toward a new definition).
B. Is understanding a concept fundamentally different from the kind of classification the computer performs (assuming it is as advanced as it should be)? After all, understanding also does not perceive the thing in itself, but only its manifestations. But if so, it is not much different from the computer managing to classify things from many inputs. And with us, concepts are usually understood from repeated observation of the physical world, not from internal observation (unlike Aristotle's mistaken view).
Regarding the ending: I understand that you believe the argument in favor of the Platonic approach is weak, but the fact that concepts can be synthesized still means, in your view, that a more fundamental concept exists.