Space.com (August 19, 2009)
"Artificial intelligence researchers often idealize Isaac Asimov's Three Laws of Robotics as the signpost for robot-human interaction. But some robotics experts say that the concept could use a practical makeover to recognize the current limitations of robots.
"Self-aware robots that inhabit Asimov's stories and others such as '2001: A Space Odyssey' and 'Battlestar Galactica' remain in the distant future. Today's robots still lack any sort of real autonomy to make their own decisions or adapt intelligently to new environments.
"But danger can arise when humans push robots beyond their current limits of decision-making, experts warn. That can lead to mistakes and even tragedies involving robots on factory floors and in military operations, when humans forget that all legal and ethical responsibility still rests on the shoulders of homo sapiens...."
Those 'three laws of robotics' are that robots:
- May not injure humans or allow humans to come to harm due to inaction
- Must obey human orders except those which conflict with the first law
- Must protect their own existence, except when doing so conflicts with the first two laws
But that's fiction. This Space.com article briefly covers a few of the issues involving robots, people, ethics, and common sense that are being systematically examined these days.
Although Asimov's 'three laws of robotics' make a handy catchphrase, I agree with someone who wrote on this general subject a few years ago. He asserted that the three laws, while making for fine stories, can't apply in the real world.
What the first law means is obvious to any sane human being. But it's awfully hard (maybe impossible) to define, in terms of mass, velocity, position, vector, and all the other applicable physical measures, just what "come to harm" means. The other two laws are fine, but unneeded.
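To see why, it helps to try writing the rule down. Here's a hypothetical sketch (my illustration, not from the article or from Asimov) of what a literal-minded first-law check might look like in code; every name and threshold in it is invented.

```python
# A hypothetical sketch (mine, not the article's): a naive attempt to
# encode "may not allow a human to come to harm" as a physical predicate.
# All names and thresholds here are invented for illustration.

import math
from dataclasses import dataclass

@dataclass
class Body:
    position: tuple[float, float, float]  # meters, in the robot's frame
    velocity: tuple[float, float, float]  # meters per second
    mass_kg: float

def might_harm(robot: Body, human: Body,
               min_gap_m: float = 0.5,
               max_impact_joules: float = 10.0) -> bool:
    """Toy 'first law' check: flag harm if the robot is close to a human
    and carrying enough kinetic energy to hurt on contact.

    Everything this ignores is the point: crush and pinch hazards,
    dropped loads, harm caused by *inaction*, harm to people outside
    sensor range, and any notion of non-physical harm.
    """
    gap = math.dist(robot.position, human.position)
    speed = math.hypot(*robot.velocity)
    kinetic_energy = 0.5 * robot.mass_kg * speed ** 2
    return gap < min_gap_m and kinetic_energy > max_impact_joules
```

Even this toy version needs two arbitrary thresholds, and it can only flag harm the robot might cause, not harm it fails to prevent.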
Robots, that author pointed out, are dangerous precisely because they do exactly what they're told to do, whether it makes sense or not.