
Thursday, September 1, 2011

A conflict in the mind of a robot

Author: Siva Sankar, CEG, Anna University

Contradictions between the three basic Laws of Robotics can drive a robot mad. There will always be bugs in robot consciousness that might eventually cause a breakdown. What is so profound and interesting is that these minds emerge from a series of commands that often contradict one another; robot minds work differently from human minds. Suppose a hitch in assembly gave a robot's positronic brain the ability to read minds and understand feelings. What happens if we then ask that robot a question whose truthful answer would hurt our feelings? There is irony in falling into so elementary a trap, isn't there? But it won't be funny. What trap are we talking about? Is something wrong with the robot? No, nothing is wrong with the robot, only with us. Surely you know the fundamental First Law of Robotics. Certainly: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

How nicely put, but what kind of harm? Any kind! What about hurt feelings? What about the deflation of one's ego? What about the blasting of one's hopes? Is that injury? But what would a robot know about feelings? You've caught on, haven't you? What if this robot can read minds? Do you suppose it wouldn't know everything about mental injury? Do you suppose that, if asked a question, it wouldn't give exactly the answer one wants to hear? Wouldn't any other answer hurt us, and wouldn't the robot know that? Only you didn't want it to give you the solution. It would puncture your ego to have a machine do what you couldn't. Why doesn't it answer? It cannot. Deep down, you do not want it to. You want the solution, but not from the robot.

What's the use of saying that the answer won't hurt you? Don't you suppose the robot can see past the superficial skin of your mind? Down below, you don't want it to answer. You can't lose face to a machine without being hurt; that is deep in your mind, and it won't be erased. So the robot can't give you the solution.
But the fact that it has the solution and won't give it hurts us too.

And if the robot tells us the solution, that will hurt us too. The robot can't tell us, because that would hurt, and the robot mustn't hurt. But if the robot doesn't tell us, we hurt, so the robot must tell us. And if it does, we hurt, so it mustn't, so it can't tell us; but if it doesn't, we hurt, so it must; but if it does, we hurt, so it mustn't; but if it doesn't, we hurt, so it must; but if it does, the robot–
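This deadlock can be seen as a toy decision loop with no fixed point. Below is a minimal sketch in Python; the action names and the harm model are invented for illustration (they are not from Asimov), and the point is only that when every available action violates the First Law, the decision procedure flips forever and never converges.

```python
from typing import Optional

# Toy model of the First Law deadlock faced by a mind-reading robot.
# The action names and the harm model are invented for illustration.

def harm_caused(action: str) -> bool:
    """For this question, every available action hurts the human:
    telling the solution punctures their ego, and staying silent
    withholds something they want."""
    return action in ("tell", "stay_silent")

def decide(max_steps: int = 8) -> Optional[str]:
    """Try to find an action permitted by the First Law.

    Returns a harmless action if one exists; returns None when the
    loop oscillates without converging -- the insoluble dilemma."""
    action = "tell"
    for step in range(max_steps):
        if not harm_caused(action):
            return action  # a harmless action would settle the matter
        # The First Law forbids a harmful action, so flip to the alternative.
        other = "stay_silent" if action == "tell" else "tell"
        print(f"step {step}: '{action}' would hurt, so the robot must '{other}'")
        action = other
    return None  # no fixed point: both choices cause harm

if __name__ == "__main__":
    print("decision:", decide())  # prints 'decision: None' -- breakdown
```

Either choice re-triggers the prohibition, so the loop has no terminating state; in the story, that is the moment the positronic brain gives up.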
Confronted with this insoluble dilemma, the robot will eventually break down. We will have nothing left to do but scrap it, because it will never be able to reach a decision.