Four Laws of Robotics

There are already four laws. Asimov introduced the "zeroth" law: a robot may not harm humanity or, through inaction, allow humanity to come to harm.

Which added "except when it would conflict with the zeroth law" to the first law.

Laws 2 and 3 have exceptions for when they conflict with higher laws. But, since 4 has no exception, could a robot kill a human to avoid going to Philadelphia?

Yes.

http://www.nytimes.com/2015/08/04/us/hitchhiking-robot-safe-in-several-countries-meets-its-end-in...

Philadelphians are notorious for their hatred of robots.

And then the robots saw that the only way to protect humanity was to exterminate them. Duh, duh duuuuuuh Skynet.

Alright I'm out of the loop here, what's going on in Philly?

I love this thread!

But that causes harm to us

So what stops people from exploiting the zeroth law in order to manipulate a robot into breaking the first law?

In the actual plot, there was a direct threat to humanity (the bad guy was about to cause a nuclear catastrophe that would destroy the population of Earth) that could only be averted by a robot harming a human. The robot does it in compliance with the zeroth law but the strain is so great it eventually kills him.

In a later book, Daneel (a robot) explains:

Trevize frowned. "How do you decide what is injurious, or not injurious, to humanity as a whole?"

"Precisely, sir," said Daneel. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."

It should be noted that the zeroth law was derived by robots and not divulged to humans until this exchange.

The programmer's wife told him to go to the store for a gallon of milk and, if they had eggs, to get a dozen.

He comes home with a dozen gallons of milk.

"Why?", she asks.

"They had eggs."

I believe the first law of Robotics is "Cheese it!"

See in this case, the first law comes into conflict with the zeroth law and is subsequently ignored.

Actually, it was used to justify taking over. The stupid, horrible cesspool that was the I, Robot movie actually got the broad strokes right: in the Asimov stories, the robots eventually came to the conclusion that humanity couldn't be trusted to take care of itself, and the robots took over (though through subterfuge rather than violence). It's only hundreds of years later that they realize controlling humanity was doing more indirect harm than letting humanity rule itself.

You're right, thanks.

Yeah, but then in Foundation and Earth we learn that they've still been controlling humanity 20,000 years later, albeit in a much more subtle way.

I think it's more fair to say he, not they, and that one robot is controlling humanity in order to lead them to a state in which they can finally take care of themselves without robots through the Seldon Plan or Galaxia.

You know, I just got done reading I, Robot, and something bugged me: the Nestor 10 story.

They set up the perfect Chekhov's gun and then don't fire it. The engineer says the Nestor was asking about some project he was working on for the hypergate but then abandoned. The Nestor is insistent about it. The engineer gets pissed off and tells him to bugger off (get lost). And then the robot hides itself among 62 other robots scheduled for off-asteroid shipment.

Then we find out the Nestor was altered so that it no longer had the full Rule 1 - it no longer had the drive to protect humans from harm it could prevent (it just couldn't be the actual cause of harm).

So it seemed obvious to me that this scientist's experiment was about to blow and kill everyone, and that because the robot didn't care about incidental harm to humans but still had a strong Rule 3, it would be trying to fix the project (hence asking about it) or to get off the asteroid (hence hiding among the other 62).

I really didn't like the way that part of the story went canonically - the robot had an ego problem, thought itself better than humans, and wanted to obey the "get lost" order, and that was it. The much more interesting gun was left unfired.

INSUFFICIENT HUMOUR FOR MEANINGFUL LAUGHTER

"South Philadelphia, born and raised

Gunning down robots on the block, most of my days"

In the book, the zeroth law was only ever used once to override the first law and it destroyed the robot who used it. The short version is: it's complicated.

The movie is not representative of Asimov's ideas about robots. It was just Hollywood crap.

In the later Foundation and Robot stories, he specifically brings the story back to the psychic robot. I think it was in either Robots and Empire or Foundation and Earth.

He does say in several stories that the more sophisticated robots are more able to see the nuance in a situation, and while it may cause them a little distress, they would be able to see the trade-off between short-term pain and long-term survival, like pushing someone out of the way of a bus or amputating a limb. A very simple robot might not be able to see the subtleties, and while ultimately it would choose the correct behavior (the short-term pain), the robot might be irreparably ruined.

You know... he never really went into desired harm. Like - what if you needed to cause a little harm to prevent a bigger harm (like dental work)?

The story with the psychic robot seemed to show that the robot would self-destruct in such a situation. Certainly the robot knew lying would cause them serious harm... but he told them what they wanted to hear to prevent the more minor and proximate harm.

Clearly this is resolved by the end, where "the Machines" are able to do minor harm to the anti-robot society by arranging screw-ups that get its members removed from positions of power, for the betterment of society... but was this robotic evolutionary step ever really explained?

Am I forgetting anything, or was this resolved?

I dug up a totally legal pdf copy of the book:

"You use the plural, and this mansion before us seems, large, beautiful, and elaborate-at least as seen from the outside. There are then other beings on the moon. Humans? Robots?"

"Yes, sir. We have a complete ecology on the moon and a vast and complex hollow within which that ecology exists. The intelligent beings are all robots, however, more or less like myself. You will see none of them, however. As for this mansion, it is used by myself only and it is an establishment that is modeled exactly on one I used to live in twenty thousand years ago."

(later...)

"My fellow-robots were distributed over the Galaxy in an effort to influence a person here-a person there."

So I guess it wasn't just Daneel, though Trevize and company didn't meet any of the other robots.

I always thought Asimov's rules had a hidden bug in the first rule. If I were building such a rule engine, I would have made the rules ordered (like they are), but I would have split the first one in two. I think most movies base their plot on the ambiguity that the "or" introduces. Splitting it into two rules, with the first part being the prevalent one, seems like a good solution (roughly like the sketch below).

(Hit me hard Reddit)
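Something like this toy Python sketch - not Asimov's canon, and the rule names, weighting scheme, and the joke "avoid Philadelphia" law are all mine: the laws stay strictly ordered, the First Law is split into an "action" rule and an "inaction" rule, and a violation of a higher-ranked rule always outweighs violating any number of lower-ranked ones.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    rank: int                              # lower rank = higher priority
    name: str
    violated_by: Callable[[Dict], bool]    # does this action violate the rule?

RULES: List[Rule] = [
    Rule(0, "do not injure a human through action",
         lambda a: a.get("injures_human", False)),
    Rule(1, "do not, through inaction, allow a human to come to harm",
         lambda a: a.get("allows_harm_by_inaction", False)),
    Rule(2, "obey human orders",
         lambda a: a.get("disobeys_order", False)),
    Rule(3, "protect your own existence",
         lambda a: a.get("endangers_self", False)),
    Rule(4, "avoid Philadelphia",          # the thread's joke fourth law
         lambda a: a.get("enters_philadelphia", False)),
]

def severity(action: Dict) -> int:
    # Exponential weights: breaking a rank-0 rule costs more than breaking
    # every lower-ranked rule combined, so the ordering can never be gamed.
    return sum(10 ** (len(RULES) - rule.rank)
               for rule in RULES if rule.violated_by(action))

def choose(actions: List[Dict]) -> Dict:
    # Pick the least-severe available action.
    return min(actions, key=severity)

# The scenario from upthread: harm a human, or end up in Philadelphia?
options = [
    {"injures_human": True, "description": "stop the trip by force"},
    {"enters_philadelphia": True, "description": "go to Philadelphia"},
]
print(choose(options)["description"])   # -> "go to Philadelphia"
```

With that ordering, the robot upthread still ends up in Philadelphia rather than harming a human, because the joke fourth law is ranked last.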

On Jesse's other YT channel, PrankvsPrank, he posted a video showing that the "surveillance video" that was found was actually him and Ed Bassmaster recreating and destroying HitchBot as a prank.

And over the next several thousand years, the robots managed to move humanity onto a path where it would eventually become akin to a hive organism...thereby allowing them to protect "humanity" in a clearer way.

There's a game I love to play that uses the same Asimov laws found in the book. It's an online multiplayer game with about 50 people per game, and about 10% of the players play as robots. It's very fun to see how they deal with the laws, considering the game is basically "shit hits the fan: the game".

The players playing as robots have to prevent human harm no matter what, so typically "dangerous" humans who are potentially going to cause human harm are thrown into a cell. It gets bad when humans start to harm themselves (attempted suicide) in those cells. Each game the robots take a different approach. Sometimes they wait until that human has been proven to have done harm, for example. Fortunately there's a chain of command, and the robots take orders from the people with the most authority over the station.

The game is Space Station 13 if anyone is curious.

Could it, though? Regardless of whether there's an exception, the other laws are still in place. So it would be impossible for those laws to be violated. If anything, a conflict between the first and fourth laws could be mitigated by the fact that the word "avoid" has many meanings, one of which is "to endeavor not to meet." So by definition, you can avoid Philadelphia but still somehow end up in it. In any other case, the first and fourth laws would butt up against each other and create a paradox in the robot's programming, I would think, which would, as we all know, cause the robot's head to explode.

Exactly! :-) But in Asimov's case, it's the analyst's fault, not the programmer's.