People Should Be in Charge of Their Data (Thanks, EU)
A clash between European Union bureaucracy and artificial intelligence is a plot worthy of a cyberpunk thriller. It will take place in real life in 2018, once some European data protection laws, passed earlier this year, go into effect. And, though we might instinctively be tempted to endorse progress over regulation, the EU is on the side of the angels in this battle.
The EU's General Data Protection Regulation and a separate directive contain provisions to protect people against decisions made automatically by algorithms. "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her," is how the regulation puts it. The directive, which will regulate police and intelligence work, is even tougher, prohibiting the use of data on ethnicity, religion, political views, sexual orientation or union membership in any automated decisions.
The idea is that, as the regulation's preamble says, the processing of personal data should be "subject to suitable safeguards." In real-life terms, this means that if a bank denies a loan based on the algorithmic processing of a person's data, if an insurance company sets a high premium, or if a person is singled out for special police attention as a result of some blanket data-gathering operation like those Edward Snowden revealed at the National Security Agency, people should be able to challenge these decisions and have a human look into them. They should also be able to demand an explanation of why the decision was made.