Thursday, February 22, 2018

Cambridge and Oxford Universities, Electronic Frontier Foundation Report: "The Malicious Use of Artificial Intelligence:..."

It's all about risk.

First up, Engineering & Technology, Feb. 21: 

AI is a threat to global stability, warns Cambridge University report
Artificial intelligence (AI) could be used by rogue states to cause havoc and disruption, according to a new report from Cambridge University’s Centre for the Study of Existential Risk. 

In a report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, the university body warns that malicious manipulation of AI could create a destabilising effect and calls on governments and corporations worldwide to ensure that this does not happen.
It also warns of the rise of “highly believable fake videos” impersonating prominent figures or faking events to manipulate public opinion around political events.

The 100-page report identifies three security domains (digital, physical and political security) as particularly relevant to the malicious use of AI. It suggests that AI will disrupt the trade-off between scale and efficiency and allow large-scale, finely-targeted and highly-efficient attacks.
The authors expect novel cyber-attacks, such as automated hacking, speech synthesis used to impersonate targets, finely-targeted spam emails using information scraped from social media, or exploiting the vulnerabilities of AI systems themselves (e.g. through adversarial examples and data poisoning).

Likewise, the proliferation of drones and cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom....MORE
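The "adversarial examples" the E&T piece mentions are inputs nudged just enough to flip a model's decision. A toy sketch of the idea, using a hand-made linear classifier (every weight and input below is invented for illustration; real attacks perturb along the gradient of a trained model):

```python
import numpy as np

# Toy linear "classifier": score = w . x; positive score -> class 1.
# Weights and the benign input are made up for this illustration.
w = np.array([1.0, -2.0, 0.5])   # model weights
x = np.array([0.3, 0.1, 0.2])    # benign input, score = 0.2 -> class 1

def predict(x):
    return 1 if w @ x > 0 else 0

# Fast-gradient-style perturbation: since the gradient of w.x with
# respect to x is just w, push each feature a small step eps in the
# direction that drives the score toward the opposite class.
eps = 0.5
x_adv = x - eps * np.sign(w)     # small per-feature change, big score change

print(predict(x), predict(x_adv))  # prints: 1 0
```

The perturbation is bounded at 0.5 per feature, yet it moves the score from +0.2 to -1.55, flipping the classification. That asymmetry, small input change versus large output change, is what makes these attacks attractive against deployed AI systems.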
At the EFF, Feb. 21, their particular interest:

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
...At EFF, one area of particular concern has been the potential interactions between computer insecurity and AI. At present, computers are inherently insecure, and this makes them a poor platform for deploying important, high-stakes machine learning systems....  
From GigaOm (yes, they're still alive), some additional thoughts:

What’s missing from the Malicious Use of Artificial Intelligence report?
Only a fool would dare criticise the report “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” coming as it does from such an august set of bodies — to quote: 
“researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy.”
Cripes, that’s quite a list. But let me at least try to summarize its 100 pages of dense text.
– There’s a handy executive summary and introduction
– 38 pages cover all the things that could go wrong
– 15 pages describe ways to not let them happen
– 33 pages cover the people and materials referenced
It’s difficult to argue with any of it, on the surface at least. Particularly the overall message: there could be bad things, and we should not sleepwalk into them. While this is welcome advice, one factor is noticeable by its absence. Strangely, as the report comes from groups for whom the scientific method should be as familiar as brushing one’s teeth in the morning, it lacks any discussion, or indeed conception, of the nature of risk.

Risk, as security and continuity professionals know, is a mathematical construct, the product of probability and impact. The report itself makes repeated use of the term ‘plausible’ to describe AI’s progress, potential targets and possible outcomes. Beyond this, there is little definition.

We can all conjure disaster scenarios, but it is not until we apply our expertise and experience to assessing the risk that we can prioritise and (hopefully) mitigate any risks that emerge.
So, without this rather important element, what can we distil from its pages? First we can perceive the report’s underlying purpose, to bring together the dialogues of a number of disparate groups. “There remain many disagreements between the co-authors of this report,” it states, showing the reality that it is a work in progress: to borrow an old consultancy phrase, “I’m sorry their report is so long, we didn’t have time to make it shorter.”...MUCH MORE
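The risk construct GigaOm says the report lacks is easy to make concrete: score each scenario as probability times impact and rank. A minimal sketch (the scenario names, probabilities and impact figures below are invented purely for illustration, not drawn from the report):

```python
# risk = probability x impact, the construct GigaOm invokes.
# All scenarios and numbers here are hypothetical illustrations.
scenarios = {
    # name: (annual probability estimate, impact in arbitrary loss units)
    "spear-phishing at scale":         (0.9, 10),
    "drone attack on infrastructure":  (0.05, 500),
    "fake video swings an election":   (0.2, 200),
}

def risk(probability, impact):
    return probability * impact

# Rank scenarios so the highest expected loss gets attention first.
ranked = sorted(scenarios.items(), key=lambda kv: risk(*kv[1]), reverse=True)
for name, (p, i) in ranked:
    print(f"{name}: risk = {risk(p, i):g}")
```

Note how the ranking differs from gut feeling: the scariest-sounding scenario is not automatically the riskiest once probability is factored in, which is exactly the prioritisation step the article argues is missing.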
And the report via the EFF (101-page PDF)