Wednesday, December 16, 2015

HBR: Is Musk-Thiel's OpenAI Solving the Wrong Problem?

From the Harvard Business Review:
Late last week, OpenAI was announced — a non-profit artificial intelligence research company, backed by a set of tech-industry stars that includes Elon Musk, Reid Hoffman, Jessica Livingston, Sam Altman, and Peter Thiel. (As well as some funding from Amazon Web Services.) Collectively, they’ve committed over $1 billion to the venture. Why their interest? The founding document mentions the huge upside of AI to humanity — but even more so the downside, should AI be abused. Given that potential range of outcomes, it’s wonderful that there is such a smart (and well-financed) group of people being so thoughtful on the subject. But there’s one theme in the rationale for the creation of the venture that really stands out:

“Since our research is free from financial obligations, we can better focus on a positive human impact.”

Implicit in this: You can do more good operating outside the bounds of capitalism than within them.

Coming from folks who are at the upper echelons of the system, it’s a pretty powerful statement.
Of course, Silicon Valley’s distaste for 20th-century capitalism is no secret. Many of the leading lights of the technology world — Facebook, Google, LinkedIn — have “hacked” their capital structures to let the public markets come along for the financial ride without giving public investors any say in how the companies are run. Control is maintained by the founders.
But while such hacking may have left the founders alone at the helm, it obviously hasn’t gone so far as to free their organizations from financial obligations altogether.

And yet something about the work OpenAI is focused on has made the founders think that this time there’s too much at stake to risk those same “financial obligations.” Perhaps it’s AI’s stage of development: the field is so nascent that introducing the profit motive now would slow its progress. But surely the same argument could be made about the numerous other ventures these same people are involved in?

In fact, we don’t need to guess, because the OpenAI founders have talked about it publicly: yes, it’s partly to attract the best talent in the field; but what stood out even more in the opening press release, and in the subsequent interview Musk and Altman did with Steven Levy, was the threat that AI, should it be misused, could pose to humanity. Musk tweeted last year that AI is “potentially more dangerous than nukes,” and in the Levy interview he states, “AI safety has been preying on my mind for quite some time.”

So the question then becomes: Will housing such a research institute inside a not-for-profit company really make us any safer?
I’m not sure it will.

One of the aims of OpenAI is to share their work with the world (for example, they have committed to sharing any patents they might develop). A noble goal. But from there, who do you think is going to be best able to exploit whatever is developed? I wouldn’t claim to be an expert in the field of AI, but I’d hazard a guess that it will be those with the most computational resources and, increasingly, the most proprietary data to dedicate to it.

And who is that likely to be?

My money is on those big, for-profit corporations that the founders of OpenAI seem to be more than a little concerned about....MORE