TCS Daily


Robot Rights

By Glenn Harlan Reynolds - October 29, 2003 12:00 AM

"Robots are people, too! Or at least they will be, someday." That's the rallying cry of the American
Society for the Prevention of Cruelty to Robots
, and it's beginning to become a genuine issue.

 

We are, at present, a long way from being able to create artificial intelligence systems that are as good as human minds. But people are already beginning to talk about the subject (the U.S. Patent Office has already issued a -- rather dubious -- patent on ethical laws for artificial intelligences, and the International Bar Association even sponsored a mock trial on robot rights last month).

More recently, blogger Alex Knapp set off an interesting discussion of the subject on his Heretical Ideas weblog. Knapp cited Asimov's famous Laws of Robotics:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Then he asked whether it would be moral to impose such laws on an intelligence that we created. Wouldn't we be creating slaves? And, if so, wouldn't that be bad? (Here, by the way, is a fascinating look at the programming problems created by Asimov's Laws).
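
Those programming problems are worth pausing on. As a purely illustrative sketch -- the class, field names, and violated_law function below are my own hypothetical constructions, not anything from Knapp's post or from Asimov -- the Laws amount to an ordered series of vetoes, with each law consulted only if the higher-priority ones are satisfied. What such a sketch quietly assumes away is the hard part: deciding, in software, whether an action "harms" a human or whether inaction "allows" harm, which here is reduced to booleans supplied from outside.

# Illustrative sketch only: Asimov's Laws as a strict priority ordering.
# The hard problem -- computing these boolean judgments in the first
# place -- is exactly what this sketch does not attempt.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    harms_human: bool                   # would doing this injure a human being?
    allows_harm_through_inaction: bool  # would not doing it let a human come to harm?
    disobeys_human_order: bool          # does it conflict with an order from a human?
    endangers_self: bool                # does it risk the robot's own existence?

def violated_law(action: Action) -> Optional[str]:
    """Return the highest-priority law the action would violate, or None."""
    # First Law outranks everything else.
    if action.harms_human or action.allows_harm_through_inaction:
        return "First Law"
    # Second Law is consulted only once the First is satisfied.
    if action.disobeys_human_order:
        return "Second Law"
    # Third Law comes last: self-preservation yields to the other two.
    if action.endangers_self:
        return "Third Law"
    return None

if __name__ == "__main__":
    standby = Action("power down while a nearby human needs help",
                     harms_human=False, allows_harm_through_inaction=True,
                     disobeys_human_order=False, endangers_self=False)
    print(violated_law(standby))  # prints: First Law

Even in this toy form, the precedence rules do all the easy work; everything contentious is hidden inside the judgment calls the robot would have to make before the checks could run at all.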

 

Knapp's questions raise issues that go beyond the animal rights and human rights debates. Human slavery is generally regarded as bad because it denies our common humanity. Robots, of course, don't possess "humanity" unless we choose to design it into them -- or, at least, leave it possible for them to develop it, a la Commander Data, on their own. Do we have an obligation to do so?

Animal rights activists, by contrast, generally invoke Jeremy Bentham's concept of suffering: "The question is not, 'Can they reason?' nor 'Can they talk?' but 'Can they suffer?'" Under this approach, it's the ability to feel subjective pain that determines the presence of rights.

Not everyone agrees with this viewpoint, by any means, but are we obliged to create machines that are capable of suffering? Or to refrain from programming them in ways that make them happy slaves, unable to suffer no matter how much they are mistreated by humans? It's hard for me to see why that might be the case. A moral duty to allow suffering seems rather implausible.

Immanuel Kant thought that our treatment of animals should be based on the kinds of behavior toward humans that cruelty to animals might encourage -- but, again, it's hard to see how that sort of reasoning applies to machines. One might judge a man who neglects his car foolhardy, but only some of us would think of such behavior as cruel. And it seems unlikely that cruelty toward automobiles, or robots, might lead to cruelty toward humans -- though I suppose that if robots become humanlike, that might change.

In response to Knapp's question, Dale Amon -- who has actual robotics research experience -- observes:


If we build rules into a mobile robot to limit its capabilities we are doing nothing more to it than putting a governor on an automobile engine or programming limitations into a flight control system. A 21st Century robot will not be a person, it will be a thing, an object.

But even Amon suggests that "true machine intelligences," which may include both evolved artificial intelligences and downloaded human minds, should be treated as citizens. Fair enough. But do we have an obligation to allow machine intelligences to evolve into human-like minds?

I don't think so. I'm not sure where such an obligation would come from. But, reading the comments to Knapp's and Amon's posts, it seems clear that views on this subject vary rather widely. It should make for interesting discussion, and I'm glad that people are talking about it now.
