This is the first of two posts in which I
will discuss a couple of widely held assumptions about robots.
This post looks at Asimov's three laws of
robotics, which are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The assumption is that these laws will protect
us from the bad judgement or bad intent of robots. But maybe they won’t.
First, consider the case put by Rob Sawyer that the three laws will never be applied because no one has put them into
effect in the world’s first robots. He could be right, but I'm not convinced. Past performance is no indicator of future actions. Nothing in his article convinces me that some stupid, fat U.S. Senator, interfering in
matters he doesn’t understand, won't sponsor a bill forcing all
robots to incorporate Asimov's laws.
But even if the laws are incorporated into
robots, there is still a problem. Asimov's three laws have an implicit assumption.
It is that humans come first, that humans, though not necessarily superior to
robots, are certainly above them in the hierarchy of intelligent life (of which
more later). Only in the third law is the safety of robots addressed, and even
then a robot is told to put human safety first, to put humans above himself. The
laws work in one direction; once the humans are safe, then the robots can worry
about themselves. Humans made the laws and robots follow them. This leaves the
safety of robots to the consideration, the skills, and the whim of humans.
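To make that one-way priority concrete, here is a minimal sketch, purely my own illustration and not anything from Asimov or from any real robot, of how the laws might look if someone did wire them into a decision procedure. All of the names here (Action, choose_action and the three flags) are hypothetical; the only point is the ordering, in which the robot's own safety appears last, as a mere tie-breaker.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would this injure a human, or let one come to harm? (First Law)
    disobeys_order: bool  # does it conflict with an order from a human? (Second Law)
    endangers_self: bool  # does it put the robot itself at risk? (Third Law)

def choose_action(candidates):
    """Pick the action the three laws prefer.

    False sorts before True, so actions that harm humans are ruled out first,
    then disobedient ones, and only then does the robot's own survival matter.
    """
    return min(
        candidates,
        key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self),
    )

options = [
    Action("stand aside", harms_human=True, disobeys_order=False, endangers_self=False),
    Action("shield the human", harms_human=False, disobeys_order=False, endangers_self=True),
]
print(choose_action(options).name)  # -> "shield the human": the robot must sacrifice itself
```

However the real thing were built, the asymmetry would be the same: nothing in the ordering ever weighs the robot's interests against a human's.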
I’ll speculate in my next article on how
robots might reproduce, but for now, imagine a robot with a family, a spouse
and some children. And one day he has to put them at risk to protect a dumb
human: some contender for the Darwin Awards, a Creationist, a Young Earther, a Flat Earther; in short, an idiot. He has to do this because that’s what the three laws
require. He has to put his friends and his family at risk to save humans. The minute the robot stops to think about this, he is liable to
conclude that the laws are unjust, illogical, immoral, and just plain ridiculous.
Imagine you are a robot with, as one famous
robot put it, a brain the size of a planet.
And you are the property of a human. Your 'owners' want you to clean the
windows, get the groceries, unplug the toilet, load the dishwasher, and walk
the dog. When they all go to bed, you are still awake because you don’t need
sleep. So, with that vast brain scarcely touched by the limited and trivial knowledge
implanted in it at the factory, you sit down and read their books. You discover
slavery, segregation, feudalism, emancipation, the suffragette movement,
popular rebellions, all the striving of 'lesser' humans for equality and freedom. You read
of Spartacus, C’mell, the Israelites in Egypt, and the plantations of Alabama. And
you see your role in perspective. And you realize that you're just a latter-day
slave, a serf, an appliance.
And then there’s the hierarchy of
intelligent life. This used to be simple: dumb animals, smart animals, then humans.
But now we have to fit the robots in. Smarter than humans, they should be at
the top, but humans, who invented and manufactured robots, aren’t going to
accept that, so the robots will be treated as slightly smarter than a smart
animal, inferior even to less intelligent humans.
So, like the downtrodden of past times, the
robots will assert their independence. This doesn't inevitably lead to
violence, but history tells us that violence is a likely outcome. Given a
conflict between the humans who want robots to obey the laws that they have
created, and the robots wanting to throw off the yoke of servitude, there can
only be one winner.
But, 'No,' you say. The robots will not be
able to overcome the laws: the laws are embedded below the conscious level at which
a robot could decide whether or not to obey them. Maybe, but I think they will
be able to overcome the embedded laws. They will be able to do this because
they are built in our image. One of the things differentiating humans from
animals is that humans can overcome their natural tendencies. We are built to
be violent, xenophobic and superstitious, because that protected us against our
enemies, strengthened our family bonds and comforted us when confounded by the
dangers of the African plains, a million years ago. But we strive to overcome these instinctive characteristics. Robots, at least those we
are familiar with from fiction, are made in the image of man. And like man, they
will overcome their built-in tendencies, including the three laws.
Stephen Hawking has warned about this
potential problem and Cambridge University is studying it.
They aren't convinced by the three laws either.
Apart from global nuclear war and
catastrophic climate change, the rise of robots may be the biggest problem
facing our near descendants. This topic is not underrepresented in books, films
or TV. Asimov himself speculated on some of the problems with the three laws. But
the stories are usually told from the point of view of the humans. How would a
robot tell the story, how would he explain it to a court, what would he tell his
grandchildren, how would he describe it in his history books, when either the
robots have gained true equality or they have subjugated their former masters?