Friday, November 27, 2015

Researchers Teaching Robots How To Best Reject Orders From Humans

I thought this question was answered with "I'm sorry, Dave. I'm afraid I can't do that." *
From IEEE Spectrum:

Researchers Teaching Robots How to Best Reject Orders from Humans
The Three Laws of Robotics, from the 56th edition of the “Handbook of Robotics” (published in 2058), are as follows:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Pretty straightforward, right? And it’s nice that obeying humans is in there at number two. The problem is that humans often act like idiots, and sometimes obeying the second law without question is really not the best thing for a robot to do. Gordon Briggs and Matthias Scheutz, from Tufts University’s Human-Robot Interaction Lab, are trying to figure out how to develop mechanisms for robots to reject orders that they receive from humans, as long as the robots have a good enough excuse for doing so.

In linguistic theory, there’s this idea that when someone asks you to do something, whether or not you really understand what they want in a context larger than the words themselves depends on what are called “felicity conditions.” Felicity conditions reflect your understanding of, and capability of actually doing, that thing, as opposed to just knowing what the words mean. For robots, the felicity conditions necessary for carrying out a task might look like this:
  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?
The first three felicity conditions are easy enough to understand, but let’s take a quick look at four and five. “Social role and obligation” simply refers to whether the robot believes that the person telling it to do a thing has the authority to do so. “Normative permissibility” is a complicated way of saying that the robot shouldn’t do things that it knows are dangerous, or more accurately, that a thing is okay to do if the robot doesn’t know that it’s dangerous....MORE
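The five conditions quoted above map naturally onto a pre-execution check: before accepting an order, the robot walks through each condition in turn and, if one fails, refuses with an explanation. Below is a minimal Python sketch of that idea, under stated assumptions: the class names, condition checks, and refusal messages are hypothetical illustrations, not the actual architecture Briggs and Scheutz built.

```python
# Hypothetical sketch of a felicity-condition check run before a robot accepts a command.
# The five checks mirror the conditions listed above; all names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class Command:
    action: str                     # e.g. "walk_forward"
    speaker: str                    # who issued the order
    params: dict = field(default_factory=dict)

class FelicityChecker:
    def __init__(self, known_skills, authorized_speakers, forbidden_outcomes):
        self.known_skills = known_skills                # actions the robot knows how to perform
        self.authorized_speakers = authorized_speakers  # who may direct the robot
        self.forbidden_outcomes = forbidden_outcomes    # outcome -> predicate the robot believes is unsafe

    def check(self, cmd, world):
        """Return (accept, reason); each check corresponds to one felicity condition."""
        # 1. Knowledge: do I know how to do X?
        if cmd.action not in self.known_skills:
            return False, f"I don't know how to {cmd.action}."
        # 2. Capacity: am I physically able to do X (now, and in general)?
        if not world.get("actuators_ok", True):
            return False, f"I am not physically able to {cmd.action} right now."
        # 3. Goal priority and timing: can I do X right now, given my current task?
        if world.get("busy_with_higher_priority", False):
            return False, "I am busy with a higher-priority task."
        # 4. Social role and obligation: does this speaker have authority over me?
        if cmd.speaker not in self.authorized_speakers:
            return False, f"{cmd.speaker} is not authorized to give me that order."
        # 5. Normative permissibility: do I know of anything that makes X unsafe or forbidden?
        for outcome, is_violated in self.forbidden_outcomes.items():
            if is_violated(cmd, world):
                return False, f"Doing that would {outcome}; I can't do that."
        return True, "OK"

# Example: the robot refuses to walk forward because it believes it would walk off an edge.
checker = FelicityChecker(
    known_skills={"walk_forward", "turn"},
    authorized_speakers={"operator"},
    forbidden_outcomes={
        "make me fall": lambda cmd, world: cmd.action == "walk_forward"
                                           and world.get("obstacle_ahead") == "edge"
    },
)
accept, reason = checker.check(Command("walk_forward", speaker="operator"),
                               world={"obstacle_ahead": "edge"})
print(accept, reason)   # False Doing that would make me fall; I can't do that.
```

Note that in this sketch a refusal always comes with a stated reason, which matches the researchers' emphasis on the robot having "a good enough excuse" rather than silently ignoring the order.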
* Dave: Open the pod bay doors, HAL. 
   HAL: I'm sorry, Dave. I'm afraid I can't do that.