Thursday, August 20, 2009

Asimov's 3 Laws, Real Robots, and Common Sense

"Science Fiction's Robotics Laws Need Reality Check"
Space.com (August 19, 2009)

"Artificial intelligence researchers often idealize Isaac Asimov's Three Laws of Robotics as the signpost for robot-human interaction. But some robotics experts say that the concept could use a practical makeover to recognize the current limitations of robots.

"Self-aware robots that inhabit Asimov's stories and others such as '2001: A Space Odyssey' and 'Battlestar Galactica' remain in the distant future. Today's robots still lack any sort of real autonomy to make their own decisions or adapt intelligently to new environments.

"But danger can arise when humans push robots beyond their current limits of decision-making, experts warn. That can lead to mistakes and even tragedies involving robots on factory floors and in military operations, when humans forget that all legal and ethical responsibility still rests on the shoulders of homo sapiens...."

Those 'three laws of robotics' are that robots:
  1. May not injure humans or allow humans to come to harm due to inaction
  2. Must obey human orders except those which conflict with the first law
  3. Must protect their own existence, except when doing so conflicts with the first two laws
I've read that when Isaac Asimov created that triad, robot stories were typically about a mad scientist building a robot that kills people and is then stopped: sort of a high-tech rewrite of Mary Shelley's Frankenstein. The plot is okay, but it had been overused. Asimov's stories were, at the time, a new direction in 'robot' stories.

But, that's fiction. This Space.com article briefly covers a few aspects of robots, people, ethics and common sense that are being systematically reviewed these days.

Although Asimov's 'three laws of robotics' is a handy catch-phrase, I agree with someone who wrote on this general subject a few years ago. He asserted that the three laws, although making for fine stories, can't apply in the real world.

What the first law means is obvious to any sane human being. But it's awfully hard (maybe impossible) to define, in terms of mass, velocity, position, vector, and all the other applicable physical measures, just what "come to harm" means. The other two laws are fine, but unneeded.

Robots, that author pointed out, are dangerous precisely because they do exactly what they're told to do. Whether it makes sense or not.
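To make that point concrete, here's a minimal sketch in Python of what a robot controller can actually check. Everything in it (MoveCommand, ArmController, the speed limit) is made up for illustration, not any real robotics API: the machine enforces crude physical proxies, like a hard-coded speed cap, and otherwise carries out commands literally. Nowhere does "harm" appear, because nobody can write it down in those terms.

```python
# Illustrative sketch only: a toy "robot" that executes motion commands
# literally. All names here are hypothetical; no real robotics API is implied.

from dataclasses import dataclass


@dataclass
class MoveCommand:
    """A literal instruction: move the arm to (x, y, z) at a given speed."""
    x: float
    y: float
    z: float
    speed_m_per_s: float


class ArmController:
    # The closest this controller gets to "do not harm" is a crude proxy:
    # a hard-coded speed limit. It has no concept of a person being in the way.
    MAX_SAFE_SPEED = 0.5  # meters per second; an arbitrary illustrative limit

    def execute(self, cmd: MoveCommand) -> str:
        if cmd.speed_m_per_s > self.MAX_SAFE_SPEED:
            return f"REFUSED: speed {cmd.speed_m_per_s} m/s exceeds limit"
        # Otherwise the command is carried out exactly as given,
        # whether it makes sense or not.
        return f"Moving to ({cmd.x}, {cmd.y}, {cmd.z}) at {cmd.speed_m_per_s} m/s"


if __name__ == "__main__":
    controller = ArmController()
    # A perfectly "legal" command that could still injure someone standing
    # in the arm's path -- the controller can't know or care.
    print(controller.execute(MoveCommand(1.0, 0.0, 0.2, speed_m_per_s=0.4)))
```

The legal and ethical responsibility the article talks about stays with whoever wrote, configured, and issued those commands.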
