On 8 August 1963, a gang of fifteen men boarded the Royal Mail train travelling from Glasgow to London. They were there to carry out a robbery. In the end, they made off with £2.6 million (approximately £46 million in today’s money). The robbery had been meticulously planned. Using information from a postal worker (known as “the Ulsterman”), the gang waylaid the train at a signal crossing in Ledburn, Buckinghamshire. They put a covering over the green light at the signal and used a six-volt battery to switch on the red light. When one of the train’s crew went to investigate, they overpowered him and boarded the train. They used no firearms in the process, though they did brutally beat the train driver, Jack Mills. Most of the gang were arrested and sent to jail, but the majority of the money was never recovered. The heist became known as the ‘Great Train Robbery’.
In November and December 2013, the US retailer Target suffered a major data breach. Using malware written by a 17-year-old Russian hacker, a criminal gang managed to steal data (including credit card numbers) from over 110 million Target customers. The total cost of the breach is difficult to estimate. Figures suggest that the criminals made up to $54 million selling the credit card data on the black market; the breach is likely to have cost financial institutions around $200 million in cancelling and reissuing cards (Target have themselves entered into settlements with credit card companies costing at least $67 million); it had a significant impact on Target’s year-end profits in 2013; and the company promised to spend over $100 million upgrading its security systems.
So in fifty years we went from a gang of fifteen meticulously planning and executing a train robbery in order to steal £2.6 million, to a group of hackers using malware written by a single Russian teenager, stealing customer data without having to leave their own homes, at an estimated cost of over $350 million.
These two stories are taken from Marc Goodman’s eye-opening book Future Crimes. In the book, Goodman uses the dramatic leap in the scale of criminal activity — illustrated by these two stories — to make an interesting observation. He argues that the exponential growth in networking technology may be leading us toward a ‘crime singularity’. The phrase is something of a throwaway in the book, and Goodman never fully explains what he means. But it intrigued me when I read it. And so, in this post, I want to delve into the concept of a crime singularity in a little more depth. I’ll do so in three phases. First, I’ll look to other uses of the term ‘singularity’ in debates about technology and see if they provide any pointers for understanding what a crime singularity might be. Second, I’ll outline what I take to be Goodman’s case for the crime singularity. And third, I’ll offer some evaluations of that case.
1. What would a singularity of crime look like?
I’m going to start with the basics. The term ‘singularity’ is bandied about quite a bit in conversations about technology and the human future. It originates in mathematics and physics, where it is used to describe a point at which a mathematical object is not well-defined or well-behaved. The typical example from physics is the gravitational singularity. This is something that occurs in black holes and represents a point in spacetime at which gravitational forces approach infinity. The normal laws of spacetime break down at this point. Hence, the objects represented in the central equations of physics are no longer well-behaved.
The mathematician, computer scientist and science fiction author Vernor Vinge co-opted the term in a 1993 essay to describe something he called the ‘technological singularity’. He explained this as a hypothetical point in the not-too-distant human future when we would be able to create superhuman artificial intelligence. In this he was hearkening back to I. J. Good’s famous argument about an intelligence explosion. The idea is that if we manage to create greater-than-human AI, then that AI will be able to create even greater AI, and pretty soon after you would get an ‘explosion’: ever more intelligent AI being created by previous generations of AI. Vinge suggested that at this point the ‘human era’ would be over: all the concepts, values and ideas we hold dear may cease to be important. Hence, the point in time at which we create the first superintelligence is a point at which everything becomes highly unpredictable. We cannot really ‘see’ beyond this point and guess what the world will be like. In this sense, Vinge’s singularity is akin to the gravitational singularity in a black hole: you cannot see beyond the event horizon of the black hole, and into the gravitational singularity, either.
Ray Kurzweil took Vinge’s idea and expanded upon it greatly in his 2005 book The Singularity is Near. He linked it to exponential improvements in information technology (originally identified by Gordon Moore and immortalised in the eponymous Moore’s Law). Using graphs that depicted these exponential improvements, he tried to predict the point in history at which we would reach the prediction horizon, settling on the year 2045. Kurzweil’s imagined singularity involved the fusion of man with machine as well as the creation of superhuman artificial intelligence. One of his infamous graphs is depicted below.
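To get a feel for the kind of compounding that drives Kurzweil’s graphs, here is a minimal sketch of exponential growth under a fixed doubling period. The figures used are the textbook Moore’s Law values (transistor counts doubling roughly every two years), not Kurzweil’s own data; the function names are my own, for illustration only.

```python
# Illustrative sketch of Moore's-Law-style compounding (assumed textbook
# values: capability doubles every two years; not Kurzweil's own data).

def doublings(start_year: int, end_year: int, doubling_period_years: float) -> float:
    """Number of doublings that fit between two years."""
    return (end_year - start_year) / doubling_period_years

def growth_factor(start_year: int, end_year: int,
                  doubling_period_years: float = 2.0) -> float:
    """Multiplicative improvement, assuming a doubling every period."""
    return 2 ** doublings(start_year, end_year, doubling_period_years)

# Fifty years of two-year doublings is 25 doublings: a factor of
# 2**25 = 33,554,432 -- a roughly 33-million-fold improvement.
print(f"{growth_factor(1971, 2021):,.0f}")
```

The point the sketch makes is simply that steady doubling, extended over decades, produces the near-vertical curves on Kurzweil’s charts; whether those curves can be extrapolated to 2045 is, of course, the contested part.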
Drawing on the work of Vinge and Kurzweil, I think it is fair to say that the term ‘singularity’, when used in debates about technology, appeals to one or both of the following: