
Robots on the battlefield

asked 8 years ago

Shalom,
The religious leadership is, it seems, the only one capable of saving the State of Israel from the abyss, whose most severe peak (or trough) to date found expression in the matter I refer to below.
The question is whether the religious leadership will understand, even at the eighty-eighth minute, that reality demands a clear halachic approach.

In Tuesday's issue, Brigadier General Ziv Avtalion wrote about the entry of robots onto the battlefield, and argued that when a mission is dangerous, the use of robots is essential. Avtalion points, for example, to a scenario of infiltration into the country's territory, following which "drones and vehicles are sent to surround the terrorists… The entire decision-making process is carried out by a computer that is well-acquainted with all the cases and reactions, and all the forces that were deployed were robotic." There is no doubt that technological means of this kind are entering, and will continue to enter, the IDF, and there is no dispute that saving soldiers' lives is an important principle. But it is not the only principle. If it were, it would be better to drop a hydrogen bomb and wipe out everyone in a given enemy territory. Is this the path of the army that considers itself the most moral in the world?
Another problem that arises from Brigadier General Avtalion's words is the computerization of moral judgment: like all the children of Noah (and human beings in general), we are commanded in the Bible, "Whoever sheds the blood of man, by man shall his blood be shed, for in the image of God He made man." This normative principle imposes a heavy responsibility on every person who bears arms, and at the same time expresses the principle of the sanctity of life. Should we allow programs to decide on the taking of human life? Moreover, do they even have the appropriate tools for this? It seems that the world currently suffers from dataism, the notion that information (data) is sufficient, in itself, for taking a moral position. However, leading researchers of robotics and ethics, such as Colin Allen, believe, for several reasons, that this is wrong. Therefore, the IDF's ethical insistence so far that behind every weapon there should be human decision-makers (which led to the decision of Air Force Commander Amir Eshel to change the name of the UAV to "remotely manned aerial vehicle" (RPAV)) is a fundamental human and Jewish principle that must be adhered to.

Wishing good tidings,



מיכי Staff answered 8 years ago
I'll start from the end: I completely disagree with these words. In my opinion they are utterly absurd, from beginning to end.

First, similar problems arise with respect to the autonomous vehicle, which is supposed to enter widespread public operation within a few years (it is already operating in various pilots around the world), and there they are much more serious, because on the road there are many innocent bystanders and the situations will occur thousands of times every minute (whereas on the battlefield these are isolated and relatively rare situations).

Second, the excessive caution regarding the lives of innocents on the enemy side is set here against the sacrifice of our own soldiers' lives. Why do you prefer the former over the latter, without any justification? This is a groundless thesis, and you present it as if it were a self-evident principle. In particular, the risk to those innocents is doubtful and arises only in exceptional situations, whereas the risk to our soldiers is certain and present in every situation.

In closing, I would note that a robot's moral judgment can be much better than a human's, since it is faster and less affected by emotions and fears. I do not understand how you can trust a human more than a robot (assuming it has been well programmed and tested in a variety of situations). The chance of a robot erring is much smaller than the chance of a human erring. I will only point out that this question has nothing to do with the essence of morality or with the robot being a creature devoid of moral responsibility. The question here is consequential (preventing harm to innocent people, not whom to prosecute for an error). The responsibility lies with the programmer and his commanders, who are human beings and bear moral responsibility. Therefore I am entirely in favor of this direction on the level of principle, although of course I am not involved in the details.


נ' replied 8 years ago

Dear R’ Miki,
I certainly believe that the lives of our soldiers take priority, but that is not the issue here. There is such a thing as military ethics (its roots are planted already in the Book of Deuteronomy), and it is supposed to distinguish us from the wicked and the sinners of every kind.

Regarding autonomous cars, there are enormous ethical difficulties, and you are probably familiar with Rolly Belfer's remarks on the subject. It is far from simple.
Mobileye, as I understand it, is simply waiting for the courts to tell whoever sues them what the answer is and who is right; but it would have been appropriate for the world of halachah to have its say on this from the outset, and not only after the fact.
Even if the answer is simply that, in the aggregate, it will save many human lives, and therefore it is appropriate to promote it.
And still, the programmer's considerations deserve discussion: in the "car dilemma," is it better to harm an old person? Two children?
Should the car stop when a cat is about to be hit? And when a child is? None of these questions is self-evident,
and the disturbing thing about Brigadier General Avtalion's words is that to him it is all so plainly simple.
Programmers will have to make many decisions about this, and the first ones who should be thinking about it, and losing sleep over it, are the people of halakha.

The robot's judgment may be better, no doubt. The question is, is it possible to program moral considerations?
A robot can certainly recognize faces better, and from a greater distance. But what gives it the understanding that in front of it is a person who needs to be killed, as opposed to, say, a little girl who just happened to be passing by?
If you can give me an answer to this, or refer me to a programmer who knows how to answer it, I would be happy to hear it. God is in the details, in this case too.

The question of programming morality into robots is related to a fundamental halachic question (I will write about this at some point):
Is halachic law a closed body of knowledge? For example, can one deduce, simply and straightforwardly, from the Rambam's Mishneh Torah whether and how the State of Israel should be organized?
If you are skeptical about this, and think, like Rabbi Daniel Sperber (may he be distinguished for long life) and Rabbi Eliezer Berkowitz, for example, that discretion and changing reality have an important place in halachic decisions, then you are in good company:
that of the Amora Shmuel, Rabbi Yosef Albo, the Maharshal, and others, who held that Torah and values are not algorithms. The origin of the Hebrew word for religion, dat, is, by the way, a Persian word meaning data.
It is no coincidence that I insist that this idolatrous image of religion, as frozen, dried-up data, is inappropriate for the Jewish religion in general and for halakha in particular.
Good evening, and good tidings,

מיכי Staff replied 8 years ago

N', hello.

1. I did not deny the existence of military ethics; I claimed that my military ethics does not prefer the lives of the enemy's innocents over the lives of my soldiers, and certainly does not prefer avoiding risk to them at the cost of risking our own soldiers. In your opinion, should we have given up powerful weapons and stuck to pistols, because using those lets us aim more precisely at fighters and avoid killing the uninvolved? What is the difference between this robot and the automatic weapons and munitions that already make decisions in the field today? The difference is mainly quantitative. True, there are questions of proportionality, and with that I completely agree, but I still do not see where proportionality breaks down here, at least until you have shown to what extent the robot's system fails to deal with such moral problems (see the next section).
2. As for your claims about the software, it is clear that you lack basic information about artificial intelligence. You think that such a robot is taught through a classical program of IF commands, which feed it the correct answer for every possible situation. But that is not how a robot is "taught" to act morally. The logic is completely different, and there are certainly ways to teach it moral behavior using what is called a "neural network." The advantage of such learning is that you do not need to know all the situations, with their correct answers, in order to reach the ability to decide in every situation. The more situations you feed it together with correct answers, the better it will behave in new situations, no less well than a person and probably better. This logic is hard to grasp for someone unfamiliar with it, but forming a position on this issue requires entering into it and understanding it; otherwise you are simply not using the right tools to discuss it. If you assume that the software must be fed answers to every situation, you are simply not up to date with this technology. Even in recognizing handwriting or faces you will not get far with classical software, which is why a neural network (a learning network) is used, and it is a fact that these achieve quite good results (and the field is constantly advancing). (A schematic illustration of the difference between the two approaches appears after point 4 below.)
Therefore, contrary to your assumption, it is definitely possible to program moral considerations, and it can probably be done quite well. In the next section I will add that I doubt whether a person would behave better or more correctly in these situations.
And no, I do not assume that halakha is a closed body of knowledge, and no sources or thinkers are needed for that; it is a simple fact. It is still possible to program the robot to act according to such an open body of knowledge, certainly in the specific circumstances of combat situations. It will probably do so no worse than a person would with his flexible and problematic judgment in such situations. Your framing of the issue around the question of whether halakha is a closed body of knowledge also rests on your assumption that we are talking about classical programming rather than a neural network. A neural network is built to deal with exactly such open domains.
You asked for a referral to a programmer. Contact computer scientists who are familiar with artificial intelligence (any of them will be able to explain the basics to you reasonably well), and I promise you that you will discover wonders there. This is a completely different logic from the one you assume (to put it briefly). As I wrote, autonomous cars are already here, and once their software demonstrates a good "moral" level, no one will stop them. Believe me, companies do not invest billions in a machine that will not ultimately be allowed to operate, without thinking that the "moral" barriers can be overcome. This wheel has already been invented (although I assume it is still being perfected these days).
3. Beyond all this, as I wrote, a person also has no way of knowing the answer in every situation, and I am not at all sure that his decisions will be better. So I do not see why a human is better than a robot (with all the human's weaknesses: fears and intense emotions in such situations). What is the correct answer to the trolley dilemma? And is it clear to you that a person will decide it better than a robot? And if you ask what the halakha says about it: exactly what ethics says about it. I assume most poskim will decide here according to their ethics (see the next section).
4. Whether the programmers should consult halakhic scholars or other ethicists is a different question, but it applies equally to the military ethics of humans. There, too, people do not really consult halakhic scholars, so why complain specifically about non-halakhic robots? In general, in my opinion, halakha does not have much to say on this matter that is unique to it. It is not fundamentally different from accepted ethics, and therefore I do not see a need for halakhic scholars to enter these questions. I do not think they would necessarily do it better than others (I am among those who believe that morality is by definition universal; there is no such thing as "Jewish morality").
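To make the contrast in point 2 concrete, here is a minimal sketch (in Python with scikit-learn) of the difference between a classical IF-rule program and a classifier that learns from labeled examples. The feature names, the toy data, and the decision labels are invented purely for illustration and are not taken from the discussion above; nothing here is an actual decision system.

```python
# A toy illustration only: invented features, invented data, not a real system.
from sklearn.neural_network import MLPClassifier

def rule_based_decision(distance_m, carries_weapon, is_child):
    # Classical approach: an explicit IF rule for every anticipated case.
    if is_child:
        return "hold fire"
    if carries_weapon and distance_m < 50:
        return "engage"
    return "hold fire"  # every unanticipated case needs yet another hand-written rule

# Learning approach: labeled examples (features -> correct decision) are fed to a
# small neural network, which then generalizes to situations it was never shown.
# Features: [distance_m, carries_weapon, is_child]; label: 1 = engage, 0 = hold fire.
X = [[30, 1, 0], [200, 1, 0], [20, 0, 1], [40, 1, 1], [15, 1, 0], [300, 0, 0]]
y = [1, 0, 0, 0, 1, 0]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# A new situation that never appeared in the training data:
print(rule_based_decision(25, True, False))  # works only because this rule was written in advance
print(model.predict([[25, 1, 0]]))           # decided by generalization from the examples
```

The sketch only illustrates the logical difference described above: the rule-based function must anticipate every case explicitly, while the learned model is shaped by the examples it was trained on and extrapolates to new ones; whether such extrapolation is morally adequate is exactly the question this thread debates.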

All the best,

נ' replied 8 years ago

Michi, shalom,
1. A bow and arrow is also an autonomous weapon in a sense, and in fact so is an electric fence. But there are qualitative differences between them and a drone or a humanoid robot or anything else that will operate entirely on its own. The former do not decide on their own in which direction to shoot, and they are not "loitering" weapons, as they are called today in the IDF. This is a qualitative difference, like the difference between a stone and a rabbit. A qualitative difference requires a thorough accounting of the moral quality of the alternative on the table, not only of its effectiveness in the narrowest sense of the word.
2. Would you be willing to replace human judges with a camera that recognizes the defendant's facial features, monitors his emotions during the hearing and upon hearing this or that argument, and decides on its own, based on artificial intelligence, on the sentencing of human beings? Is the judge's humanity only a source of this or that error and bias (as is popularly said today), or is it also a necessary condition for empathy and, in that sense, for moral judgment? The second point is taken for granted by many today, and perhaps that is why it is ignored when people pronounce on the possibility of replacing a judge with a machine.

3. Autonomous cars are not roaming killing machines. The fact that this is not their purpose matters. And although they are not moral agents, in this case there is a difference between them and devices and agents whose direct purpose is to kill people. The argument that "no one can stop them" was raised as an argument for compromising with certain tyrannical regimes in the last century, but when it comes to matters of principle (and especially regarding them) I do not see it as a moral argument. Many things "catch on" in the market, and if they are moral and beneficial, excellent. But many things enter our lives today even though they are thoroughly harmful, and therefore I am not willing to accept, in advance, the claim that a new product should enter our lives without any criticism just because it is new or "innovative." We have already seen the failure of blindly introducing technology into the public sphere without appropriate regulation, with electric bicycles on the sidewalks of Israel; I hope that autonomous weapons will provoke more preliminary discussion. But again, autonomous cars are not the issue here. Personally, I assume they will do a good job.
4. If you mean natural morality and the like, I certainly agree, but morality is not universal in at least two respects: a. Algorithmic morality (such as Kant's categorical imperative) is not necessarily valid morality; without the subjective component there is neither a person nor morality, hence the criticisms of Sartre, Gilligan, Dancy, and others of Kant's pretension to universalize morality. b. Relying on particular religious moral systems is a condition for the ability of social ethics to function; that is why, for example, it took Hanan Porat to enact the "You shall not stand idly by the blood of your neighbor" law. It turns out that a thin liberal ethic is not enough.
A note: you also assume that we can program ethics, when we live in a human society that, with regard to ethics itself, has been standing before a broken trough for a hundred and fifty years (see Anscombe, MacIntyre, and others). So will autonomous weapons operate according to the ethics of Kant, Mill, Darwin, Nietzsche (I will refrain from mentioning Hitler's name...), or MacIntyre? And on the halakhic level (and you are right that it is integrated into the general philosophical discourse), according to R' Akiva or Ben Azzai? The inability to reach a satisfactory answer to the former questions should at least make us hesitate about jumping too hastily to the stage of designing them.
Good morning,

מיכי Staff replied 8 years ago

Hello,
1. Not true. A bow and arrow is not an autonomous weapon. This is exactly the halakhic difference between "isho mishum chitzav" (one's fire is liable as his arrows) and "isho mishum mamono" (one's fire is liable as his property). The arrow carries within it the force of the person and causes damage by his force. Fire moves with the help of the wind and is perhaps an autonomous weapon to some extent (like damage caused by one's animal). Neither an arrow nor an electric fence is like that; they do not make independent decisions.

2. If the camera proves reliable, then certainly yes. These programs are tested thoroughly before use. These concerns stem from a lack of familiarity with artificial intelligence, as I wrote to you. Empathy, too, can be programmed (not the feeling of empathy but its products, and that is what matters for judgment). There is no ignoring here (again, a lack of understanding of artificial intelligence). Of course, this can only be done once it has been clearly demonstrated that the software indeed succeeds at it. Have you ever seen the movie HER? You should.

3. They are. What do I care what the machine intends? Are you talking about damages of the "keren" category, where there is intent to harm? What do I care about the machine's intent? The question is what it does, not what it intends, nor what it is intended for.

By the way, you are mixing up technical problems of introducing technologies without sufficient testing with principled objections to replacing humans with technology. I agree with the first and not with the second. In your view, electric bicycles have not been properly tested (in my view they have, but the law is not enforced against them, which is a completely different question), and what about cars? What is the difference between the bicycles you oppose and the cars you do not? Do bicycles make more automatic decisions than cars? It is simply a question of insufficiently effective enforcement, nothing more; there is no reason to turn it into an ideological issue. And even if you were right that the bicycles were not tested enough, then let them be tested. I am talking about the question of principle, assuming they were tested as they should be.

4. Kant's categorical imperative is as far from being an algorithm as heaven is from earth. On the contrary, the common criticism of it is that it gives no guidance at all on what to do in practice, because everything still depends on what I would want to become a universal law. I will not go into the foolish "do not stand idly by the blood of your neighbor" law here; that is actually evidence to the contrary. In any case, if you do not like liberal ethics and want religious ethics instead (in my opinion this is an oxymoron; ethics is by its nature universal, and so on), then by all means program your robot according to religious principles. There is no problem in principle with doing that, just as with liberal ethics.
Excuse me, but your last argument is really far-fetched. If there are different systems of ethics among humans (I agree as a factual matter, although the differences are fairly marginal), that shows the problem exists even if you leave the decision to humans. How is that different from what a robot would do? This is evidence to the contrary: you are showing that even if we do not mechanize it, we will be left with differing behaviors. So what have we gained by preventing mechanization? You are essentially saying that there is no single correct ethic, and at the same time warning against using a robot because it might behave incorrectly.

Note that you implicitly assume here a very specific view of moral decisions: in your opinion, they should be judged not by their consequences but only by the way humans made them. In other words, a decision is moral if a human made it, regardless of its consequences. This is an absurd and very far-reaching version of deontological morality. Deontology can be one condition for the moral judgment of an act, but not an exclusive one. For our purposes, of course, this does not matter, because clearly what is important is the result. I am not interested in whether the robot is moral; what is important to me is that it does not kill people unjustifiably and that it makes correct decisions (consequentially).

Have a great day,

נ' replied 8 years ago

And there are (at least) two other very important considerations before we give up on the human agent in the military:
* The military space is where human societies experiment with technologies, which then migrate to the civilian space (and today the separation between the military and the civilian is smaller than ever). This transition is usually made "with the help of the Name," meaning the name is simply changed, for example from UAV to drone, but it is worth paying attention to this "conversion" phenomenon. So by agreeing to a non-human army, you are agreeing that humanoid robots may put you into a patrol car on the street (accompanied, of course, by an electronic arrest warrant issued by an algorithm, fully lawfully), evacuate Amona/Umm Hiran without human contact, and be sent by the bank to evict you from your house if you lose your assets, and so on. Will this reduce or increase bloodshed and fairness? It certainly will not promote transparency. None of this is science fiction. Its becoming an accepted reality, if it happens, will be the result of social conventions, which people like Brigadier General Avtalion are trying to change, although to his credit I believe he is unaware of the gravity of his words and their implications.

* The human agent has disadvantages, but also advantages, from a moral point of view. As we know from the Nazis' transition from direct shooting to gas vans, humans' difficulty in stomaching killing is a shortcoming from the perspective of military and industrial "efficiency," but it is evidence that the same natural morality mentioned above makes humans willing to kill only when they are convinced it is justified. Is this why the IDF is promoting all sorts of strange technologies today to engineer the consciousness of soldiers? Giving up the human agent would open the door to a very problematic reality, which I do not think will reduce injustices. The same is true on the macro level: today, going to war requires convincing the public that the cost in human lives justifies it. A war by remote control would remove this "obstacle."

דוקטור replied 8 years ago

The previous response was deleted, as was this one, and I hereby announce that all subsequent ones will also be deleted if they are worded in a disrespectful and undignified manner.

דוקטור replied 8 years ago

Our rabbi is as humble as Hillel, and therefore answers patiently and at length (unnecessarily, in my opinion) even silly questions, but I was zealous for the honor of a Torah scholar. N’'s questions belong on a science-fiction forum or on Moshe Rat's website. The pseudo-intellectual presentation of a problem that does not exist indicates, to my mind, the decadence of parts of the humanities in academia, from which the questioner comes.

מיכי Staff replied 8 years ago

Attached is an email correspondence on the subject:

S’: Computer Ethics: Who Will Educate the Learning Machines?
The development of computing power in recent years, alongside various learning algorithms (machine learning, deep learning, etc.) and neural network techniques, has produced enormous progress in the field of artificial intelligence (AI). More and more "learning" machines are entering our lives and taking over tasks that until recently we thought only humans could do.
Here's the rub: We may lose control of the machines because we have given them the ability to change their own software.
Example: About a year ago, Microsoft launched a learning program called Tay, designed to hold conversations with users who encountered problems with Microsoft's software. Tay was taken off the network after 24 hours, after it had learned conversational patterns of cursing, slurs, and the like.
We humans are also a kind of "learning machine," and we (especially children) can also learn bad ways of behaving. However, the education system, parents, teachers, and society teach us not to use them. For that, we have "values." But what are the values of computers running learning programs? Who guarantees us that they will not destroy us one day, if they learn, for example, that we can disconnect them from the power supply?
Asimov tried to solve the problem using the three well-known laws of robotics, but these are very difficult (and perhaps even impossible) to implement.
Read this article from this morning's New York Times, which describes the scientific effort that AI researchers are making today to deal with this problem:
https://drive.google.com/open?id=0BwJAdMjYRm7IMkFtUHNWSnVIRUU

Miki: I think these apocalyptic scenarios are exaggerated. How is this different from any technological tool that harms people in the course of a mission?
There is a similar correspondence on my site about autonomous cars.
S’: The concern is about losing control
Miki: What is the difference between your machine that you have lost control of and an ordinary machine under the full control of an enemy who is trying to eliminate you? Apart from irrational hysteria, I do not see any difference between the situations, except perhaps in favor of the AI (because in the second case there is a deliberate effort to eliminate you, not just an accidental loss of control).
S’: In the face of an effort to eliminate me, I have control over my means of defense.
Miki: And in the face of an effort by your own machine to eliminate you, the same holds: either you have such control in both cases, or in neither. It is exactly the same.
S’: The concern is about losing control to the point where we don't know where it is headed.
Miki: Just like you don't know where your enemy's means of attack will go.
S’: Against an enemy, I have good information about his means of attack (and reasonable intelligence about his intentions), as well as control over my means of defense (see Iron Fist as an example), whereas with AI the fear is of losing control over the information about where it is developing and what it is plotting, and therefore of the inability to defend ourselves.
Miki: Absolutely not.
The enemy can also use AI against you without losing control over it. And you also have intelligence about your AI, after all, you built it.
S’: That is exactly the fear: that even though I built it, we will lose control over the information about where it is developing.
Miki: As mentioned, I don't see the slightest difference.
