Sunday, September 24, 2017

"Auditors May Have to Keep Artificial Intelligence From Cooking the Books"

As Grandmother used to say: "If it's not one tham ding it's another."
By the way, the robot the writer chose as shorthand for AI, robotics, future stuff is 'Pepper'.
A SoftBank product.

Before Masayoshi Son is done with his quest for world domination, I'm going to learn how to spell ubiquitous.
Hey, mission accomplished!

From Going Concern:
Some smart people are starting to contemplate how to keep artificial intelligence safe and in check. It’s a fascinating topic, especially since Facebook had to shut down its AI chatbot in August after it started to develop its own language. Plus, I have a feeling it’s going to trickle into the scope of work for an audit team through Sarbanes-Oxley and internal control testing. Here’s why:

Who else has access to all public companies to enforce future AI compliance mandates?

I can’t think of a better group. Auditors are already poking around the IT department for other controls. It only seems logical that AI would fall into the lap of the IT audit and attestation teams to ensure that the financial data (and humanity too) is safe from manipulation. No one wants to see a self-serving robot cooking the books. You can’t put an artificially intelligent machine in jail, after all. Its antics may have nothing to do with the developer if the robot devised its devious plan to wreak havoc without human interference.

Maybe this seems silly, but it’s a big issue according to AI pioneers. In a recent TED Talk, Stuart Russell quoted Alan Turing from 1951:
Even if we could keep the machines in a subservient position, for instance, by turning off the power at strategic moments, we should, as a species, feel greatly humbled.
Russell says that being able to shut the power off is important, along with some other safety considerations, when we set out to build a super-intelligent robot. He suggests three principles (i.e., programmed characteristics) that all AI should have:
  1. Altruism or “that the robot’s only objective is to maximize the realization of human objectives, of human values.”
  2. Humility or “avoidance of single-minded pursuit of an objective.”
  3. Learning from bad behavior or negative human interactions (e.g., the robot is turned off for not acting appropriately and doesn’t do it again)
...MORE
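
Russell's third principle is the only one concrete enough to gesture at in code, so here's a minimal toy sketch in Python of the idea. Everything in it (the action names, the shutdown penalty, the update rule) is invented for illustration, not taken from Russell's actual work: an agent keeps a value estimate for each action, and getting "turned off" for a bad one feeds back as a penalty it learns from.

```python
import random

# Toy sketch (all names hypothetical): the agent keeps a running value
# estimate for each action. When the human overseer "turns it off" for
# misbehaving, that registers as a large penalty, and the agent learns
# not to repeat the behavior -- Russell's third principle, crudely.

SHUTDOWN_PENALTY = -10.0  # how bad the agent considers being switched off
LEARNING_RATE = 0.5

actions = ["file_honest_report", "cook_the_books"]
values = {a: 0.0 for a in actions}  # agent's estimate of each action's value

def human_overseer(action):
    """The human pulls the plug whenever the machine misbehaves."""
    return action == "cook_the_books"  # True means: shut it down

def choose(values):
    """Greedy choice among the highest-valued actions, random tie-break."""
    best = max(values.values())
    return random.choice([a for a, v in values.items() if v == best])

for step in range(10):
    action = choose(values)
    reward = SHUTDOWN_PENALTY if human_overseer(action) else 1.0
    # Nudge the value estimate toward the observed reward.
    values[action] += LEARNING_RATE * (reward - values[action])
    print(f"step {step}: chose {action!r} -> values {values}")

# If the agent ever tries "cook_the_books", the shutdown penalty drives
# that action's value negative and the greedy rule never selects it again.
```

The whole point of the toy is the feedback loop: the off switch only teaches the machine anything if its use flows back into what the machine is optimizing.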