Problems of this sort have occurred to me now and then but I never felt moved to make one the basis of a story.
Consider: how would a robot define a human being in the light of the Three Laws? The First Law, it seems to me, offers no difficulty: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Fine, there need be no caviling about the kind of human being. It wouldn’t matter whether they were male or female, short or tall, old or young, wise or foolish. Anything that can define a human being biologically will suffice.
The Second Law is a different matter altogether: “A robot must obey orders given it by a human being except where that would conflict with the First Law.”
That has always made me uneasy. Suppose a robot on board ship is given an order by someone who knows nothing about ships, and that order would put the ship and everyone on board into danger. Is the robot obliged to obey? Of course not. Obedience would conflict with the First Law since human beings would be put into danger.
That assumes, however, that the robot knows everything about ships and can tell that the order is a dangerous one. Suppose instead that the robot is not an expert on ships but is experienced only in, let us say, automobile manufacture. He happens to be on board ship, is given an order by some landlubber, and doesn’t know whether the order is safe or not.
It seems to me that he ought to respond, “Sir, since you have no knowledge as to the proper handling of ships, it would not be safe for me to obey any order you may give me involving such handling.”
Because of that, I have often wondered if the Second Law ought to read, “A robot must obey orders given it by qualified human beings...”
But then I would have to imagine that robots are equipped with definitions of what would make humans “qualified” under different situations and with different orders. In fact, what if a landlubber robot on board ship is given orders by someone concerning whose qualifications the robot is totally ignorant?
Must he answer, “Sir, I do not know whether you are a qualified human being with respect to this order. If you can satisfy me that you are qualified to give me an order of this sort, I will obey it.”
Then, too, what if the robot is faced by a child of ten, indisputably human as far as the First Law is concerned? Must the robot obey without question the orders of such a child, or the orders of a moron, or the orders of a man lost in the quagmire of emotion and beside himself?
The problem of when to obey and when not to obey is so complicated and devilishly uncertain that I have rarely subjected my robots to these equivocal situations.
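Treated as a decision procedure, the amended Second Law floated above can be sketched in a few lines. The Python toy below is purely illustrative: the names (Robot, Order, should_obey) and the crude reading of “qualification” as a set of competence domains are assumptions made for the sketch, not anything the essay specifies.

```python
from dataclasses import dataclass

@dataclass
class Order:
    text: str
    domain: str                # e.g. "shiphandling" (hypothetical label)
    issuer_domains: frozenset  # domains the issuer is known to be competent in

@dataclass
class Robot:
    expertise: frozenset  # domains the robot can itself judge for safety

    def first_law_risk(self, order: Order):
        """Return True/False when the robot can judge the order's danger,
        or None when the order lies outside its own expertise (the
        automobile robot at sea simply cannot tell)."""
        if order.domain not in self.expertise:
            return None
        return False  # stand-in: a real robot would model consequences

    def should_obey(self, order: Order) -> bool:
        risk = self.first_law_risk(order)
        if risk:          # a clearly dangerous order: First Law wins outright
            return False
        if risk is None:  # the amended, "qualified" Second Law kicks in
            return order.domain in order.issuer_domains
        return True       # order judged safe: plain Second Law applies

# A robot experienced only in automobile manufacture refuses a landlubber's
# shiphandling order but obeys the same order from a qualified officer.
car_bot = Robot(expertise=frozenset({"automobile manufacture"}))
lubber = Order("hard to port", "shiphandling", frozenset())
officer = Order("hard to port", "shiphandling", frozenset({"shiphandling"}))
print(car_bot.should_obey(lubber))   # False
print(car_bot.should_obey(officer))  # True
```

The sketch also exposes the objection raised just above: when the issuer’s competence is unknown rather than merely absent, issuer_domains cannot even be populated, and the robot is back to interrogating the human.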
And that brings me to the matter of aliens.
The physiological difference between aliens and ourselves matters to us, but then tiny physiological or even cultural differences between one human being and another also matter. To Smith and Campbell, ancestry obviously mattered; to others skin color matters, or gender or eye shape or religion or language or, for goodness’ sake, even hairstyle.
It seems to me that to decent human beings, none of these superficialities ought to matter. The Declaration of Independence states that “All men are created equal.” Campbell, of course, argued with me many times that all men are manifestly not equal, and I steadily argued that they were all equal before the law. If a law were passed making stealing illegal, then no man could steal. One couldn’t say, “Well, if you went to Harvard and were a seventh-generation American you can steal up to one hundred thousand dollars; if you’re an immigrant from the British Isles, you can steal up to one hundred dollars; but if you’re of Polish birth, you can’t steal at all.” Even Campbell would admit that much (except that his technique was to change the subject).
And, of course, when we say that “All men are created equal” we are using “men” in the generic sense, including both sexes and all ages, subject to the qualification that a person must be mentally equipped to understand the difference between right and wrong.
In any case, it seems to me that if we broaden our perspective to consider non-human intelligent beings, then we must dismiss, as irrelevant, physiological and biochemical differences and ask only what the status of intelligence might be.
In short, a robot must apply the Laws of Robotics to any intelligent biological being, whether human or not.
Naturally, this is bound to create difficulties. It is one thing to design robots to deal with a specific non-human intelligence, and specialize in it, so to speak. It is quite another to have a robot encounter an intelligent species it has never met before.
After all, different species of living things may be intelligent to different extents, or in different directions, or subject to different modifications. We can easily imagine two intelligences with two utterly different systems of morals or two utterly different systems of senses.
Must a robot who is faced with a strange intelligence evaluate it only in terms of the intelligence for which he is programmed? (To put it in simpler terms, what if a robot, carefully trained to understand and speak French, encounters someone who can only understand and speak Farsi?)
Or suppose a robot must deal with individuals of two widely different species, each manifestly intelligent. Even if he understands both sets of languages, must he be forced to decide which of the two is the more intelligent before he can decide what to do in the face of conflicting orders—or which set of moral imperatives is the worthier?
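Nothing above proposes a mechanism, but the dilemma can be made concrete with a deliberately naive sketch. Assume a hypothetical Being record and an arbitrate function (both invented here) that defer to whichever issuer the robot scores as “more intelligent”:

```python
from dataclasses import dataclass

@dataclass
class Being:
    species: str
    # The robot's own estimate, in units its designers chose; the scalar
    # already encodes a parochial notion of what intelligence is.
    intelligence_estimate: float

def arbitrate(issuer_a: Being, issuer_b: Being) -> Being:
    """Toy tie-breaker for conflicting orders from two intelligent species:
    obey whichever issuer scores higher on the robot's built-in scale."""
    return max(issuer_a, issuer_b, key=lambda b: b.intelligence_estimate)

# The verdict says more about the robot's programming than about either being.
winner = arbitrate(Being("human", 1.0), Being("alien", 1.2))
print(winner.species)  # -> alien
```

Deferring to the higher score merely relocates the problem: beings intelligent “to different extents, or in different directions” do not reduce to a single scalar, so any such scale begs the question.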
Someday, this may be something I will have to take up in a story but, if so, it will give me a lot of trouble. Meanwhile, the whole point of the Robot City volumes is that young writers have the opportunity to take up the problems I have so far ducked. I’m delighted when they do. It gives them excellent practice and may teach me a few things, too.
Both books in one day.