MIT Technology Review discusses a recent article entitled "Do we need Asimov's Laws?" by Ulrike Barthelmess and Ulrich Furbach of the University of Koblenz. Asimov's laws have now been with us for decades (they first appeared in print in 1942), and this longevity has triggered some discussion. For anybody who forgot (or never read Asimov - my favorite author), the three laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
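Since the three laws form a strict priority hierarchy (each law yields to the ones above it), it can be helpful to see that precedence spelled out. Below is a minimal, purely illustrative sketch in Python; the boolean flags on the action are hypothetical placeholders for the perception and reasoning a real robot would need, not anything ever actually implemented:

```python
# Toy sketch of the priority ordering in Asimov's three laws.
# Actions are modelled as dicts of boolean flags; every flag is a
# hypothetical placeholder, not anything a real robot implements.

def permitted(action: dict) -> bool:
    """Check an action against the three-law hierarchy, highest law first."""
    # First Law: never harm a human being, by action or by inaction.
    if action.get("harms_human", False):
        return False
    # Second Law: obey human orders, except where obedience would
    # conflict with the First Law.
    if action.get("disobeys_order", False) and not action.get("order_harms_human", False):
        return False
    # Third Law: self-preservation, except where it conflicts with
    # the First or Second Laws.
    if action.get("endangers_self", False) and not (
        action.get("prevents_human_harm", False)
        or action.get("fulfils_order", False)
    ):
        return False
    return True

# Example: refusing an order whose execution would harm a human is
# permitted, because the First Law outranks the Second.
refusal = {"disobeys_order": True, "order_harms_human": True}
print(permitted(refusal))  # True
```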
Actually, in a later novel (Robots and Empire) Asimov made an exception and defined a Zeroth Law that puts the benefit of humanity as a whole above these three laws.
In the science fiction literature, notable resistance to these laws is shown in Roger McBride Allen's trilogy that begins with Caliban.
The article discusses how the three laws addressed different fears that people have about the concept of robots, and points out that the laws have never been implemented in reality, neither in autonomous vehicle projects nor in other robotic settings. There have also been claims elsewhere that Asimov's three laws cannot protect us. Today robots are also used for military purposes, and thus by definition contradict Asimov's ideal of robots as generators of peace. The authors therefore set out a moral principle:
"It is not allowed to build and to use robots which violate Asimov’s first law!"
A counter-opinion, of course, is that in a combat situation it is better to jeopardize a robot than a human. In any case, implementing this moral law has little to do with robots themselves; it has to do with the culture of settling disputes in violent ways, and that is what should be eliminated - but this is another story!