
In Memory Yet Green: The Autobiography of Isaac Asimov, 1920-1954 (1979) includes Asimov's recollection of how he and his editor at Astounding Science-Fiction, John W. Campbell Jr., came up with the Three Laws of Robotics during a conversation on Dec. 23, 1940, when Asimov was 20 years old.
Asimov said Campbell ticked them off on his fingers out of thin air; Campbell said he merely was summarizing the logical principles already inherent in Asimov's robot stories thus far. As eventually phrased, they are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In his memoir, Asimov writes of the Three Laws:

I am probably more famous for them than for anything else I have written, and they are quoted even outside the science-fiction world. ... Once they were well established in a series of stories, they made so much sense and proved so popular with the readers that other writers began to use them. ...

I never minded that. On the contrary, I was flattered. Besides, no one could write a stupid robot story if he used the Three Laws. The story might be bad on other counts, but it wouldn't be stupid.

Besides the Three Laws, Asimov in his fiction coined the words "roboticist" (in "Strange Playfellow," Super Science Stories, September 1940) and "robotics" (in "Liar!," Astounding Science-Fiction, May 1941). He wrote in his memoir:

I didn't realize this until many years later, for at the time I first used the word [robotics], I thought it was a word actually used by scientists in this connection. It was, after all, analogous to "physics," "hydraulics," "mechanics," "statics," and various other words of this form used to denote a branch of physics-related science.

Note that the Three Laws are not natural laws, such as gravity, but ethical laws hardwired into Asimov's fictional robot characters so they never threaten society -- though many plot-generating loopholes crop up, in story after story.
The Three Laws are much discussed, and much criticized, among roboticists and artificial-intelligence researchers. David Woods and Robin Murphy, for example, have recently rewritten the laws to emphasize that real-life robots are not autonomous actors. As this Space.com article explains, Woods and Murphy argue the laws should bind not robots but their human operators.
Amazingly, none of Asimov's fictional robots is yet in Carnegie Mellon's Robot Hall of Fame, though that honor has been given to a robot named ASIMO.
In the Space.com article above, Woods and Murphy talk about how they have "modified" Asimov's Three Laws. I do not think they should be modified. We must remember that Asimov was an SF author. His Three Laws were written under the assumption that robots will be autonomous in the future. They are an interesting concept for when/if autonomous robots can be created.
Currently we are not even close to creating self-governing robots. Creating a robot is like coding a computer program: the robot will only be able to do whatever you tell it to do. We have not been able to develop robots that can think, react, and account for unexpected circumstances.
Woods and Murphy are not so much rewriting Asimov's Three Laws as proposing laws of robotics that we can use with today's technology. We are in no position to put all responsibility on robots, so it makes sense that the responsibilities should currently fall on the human operators.
Good clarification, Kenny. Thanks.
Interesting fact: the term "robot" was coined by Czech writer Karel Capek in his work "Rossum's Universal Robots." It was meant to mean "manual labor." The robota was the work period a serf had to give to his lord, typically six months of the year, in Slavic nations. Isn't it interesting how the term has changed?