People Should Be in Charge of Their Data (Thanks, EU)
A clash between European Union bureaucracy and artificial intelligence is a plot worthy of a cyberpunk thriller. It will take place in real life in 2018, once some European data protection laws, passed earlier this year, go into effect. And, though we might instinctively be tempted to endorse progress over regulation, the EU is on the side of the angels in this battle.
The EU's General Data Protection Regulation and a separate directive contain provisions to protect people against decisions made automatically by algorithms. "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her," is how the regulation puts it. The directive, which will regulate police and intelligence work, is even tougher, prohibiting the use of data on ethnicity, religion, political views, sexual orientation or union membership in any automated decisions.
The idea is that, as the regulation's preamble says, the processing of personal data should be "subject to suitable safeguards." In real-life terms, this means that if a bank denies a loan based on the algorithmic processing of a person's data, an insurance company sets a high premium, or a person is singled out for special police attention as a result of some blanket data-gathering operation like those Edward Snowden revealed at the National Security Agency, people should be able to challenge these decisions and have a human look into them. They should also be able to demand an explanation of why the decision was made.
Wired magazine -- relying in part on a paper by two Oxford researchers, Bryce Goodman and Seth Flaxman -- recently suggested that the new rules could affect the algorithms at the heart of Google and Facebook, which use artificial intelligence to target ads, provide relevant search results or shape a user's news feed. That probably won't be the case: Automated decisions will still be allowed with a user's explicit consent or where they are "necessary for the entering or performance of a contract between the data subject and a controller." That makes the Googles and Facebooks of this world immune as long as they don't forget to require a user to approve a terms of use document that nobody ever reads....MORE