Isaac Asimov’s “Three Laws of Robotics” are guidelines for how robots should ideally behave. They are intended to be an inherent part of a robot’s nature, not physical laws. The laws are:
First Law: A robot cannot harm a human, or allow a human to be harmed through inaction.
Second Law: A robot must obey human orders, unless they conflict with the First Law.
Third Law: A robot must protect its own existence, unless it conflicts with the First or Second Law.
Asimov never intended the three laws to be practical.
He wrote them specifically so they’d break in interesting ways for Susan Calvin to analyse, or in annoying ways that torment Powell and Donovan to the reader’s amusement.
They are intentionally bad, as demonstrated in practically all of his robot stories.
Asimov himself wrote a book on how those laws don’t work.
Technically all the robot stories were about how those laws don’t work.
To protect humanity against itself
“don’t build the torment nexus”
Several.
Does the addition of the zeroth law not fix most of those issues, though?
What’s that got to do with this post?