(also, go long data centers)
From the journal Nature, January 23:
A tragic scandal at the UK Post Office highlights the need for legal change, especially as organizations embrace artificial intelligence to enhance decision-making.
More than 20 years ago, the Japanese technology conglomerate Fujitsu created an accounting software system for the UK Post Office. ‘Horizon’ was rolled out to branches across the country, often small, independently run shops that collect and deliver letters and parcels to tens of thousands of people. At the time, it was the largest civilian IT project roll-out in Europe. Horizon was beset by errors, a phase familiar to any user of a new IT system. Except, these were not irritatingly small bugs. In some cases, the Horizon IT system told members of staff that they were not depositing enough money at the end of a day’s trading, leading to the ‘loss’ of thousands of pounds every night.
Many Post Office workers were accused of theft or false accounting. They were told to either pay the difference or face prosecution. More than 700 people were prosecuted, and an average of 30 were imprisoned each year between 2000 and 2014. Homes, livelihoods and relationships were wrecked. Some of those affected took their own lives.
Many details have yet to emerge. One of the most egregious is that Fujitsu said last week that it knew there were bugs in the system around the time it delivered Horizon to the Post Office in 1999.
However, one aspect of the scandal has attracted comparatively little attention: the laws of England and Wales presume that computer systems do not make errors, which makes it difficult to challenge computer output. National and regional governments around the world where such laws exist need to review them, because there are implications for a new generation of IT systems — namely those using artificial intelligence (AI). Companies are augmenting IT systems with AI to enhance their decision-making. It is deeply troubling that this is happening under legal systems that presume computer evidence is reliable. Until such laws are reviewed, more innocent people are at risk of being denied justice when AI-enhanced IT systems are found to be in error.
The core source of potential injustice in a law that presumes computer operations are fundamentally correct is that anyone who wants to question or challenge computer evidence bears the onus of producing proof of improper use or operation. Such proof could come, for example, from a record of the software's relevant code or from keystroke logs. Yet accessing this information is difficult. In most Horizon cases, defendants had no way of knowing which documents or records would show that a relevant error had occurred, and so could not request that the Post Office disclose them when the case came to court. This imbalance of knowledge meant that individuals had little hope of defending themselves against the charges.
Finding a way forward
Some lawyers and researchers involved in the defence of those prosecuted by the Post Office are advocating a different approach. Paul Marshall, a barrister at Cornerstone Barristers in London, and his colleagues argue in an article (P. Marshall et al. Digit. Evid. Electron. Signat. Law Rev. 18, 18–26; 2021) that the presumption that computer evidence is reliable needs to be replaced with a requirement that relevant data and code be disclosed in legal cases....
....MUCH MORE