Monday, October 22, 2018

Deep Dive: "An Alternative History of Silicon Valley Disruption"

We haven't been linking to Wired much over the last year or two, leaning more toward Ars Technica, The Register, Fast Company and even Red Herring (yes, it lives!) for generalist tech news and commentary.

From time to time, though, Wired just hits it out of the park.

From Wired:
A few years after the Great Recession, you couldn’t scroll through Google Reader without seeing the word “disrupt.” TechCrunch named a conference after it, the New York Times named a column after it, investor Marc Andreessen warned that “software disruption” would eat the world; not long after, Peter Thiel, his fellow Facebook board member, called “disrupt” one of his favorite words. (One of the future Trump adviser’s least favorite words? “Politics.”)

The term “disruptive innovation” was coined by Harvard Business School professor Clayton Christensen in the mid-1990s to describe a particular business phenomenon, whereby established companies focus on high-priced products for their existing customers, while disruptors develop simpler, cheaper innovations, introduce the products to a new audience, and eventually displace incumbents. PCs disrupted mainframes, discount stores disrupted department stores, cellphones disrupted landlines, you get the idea.

In Silicon Valley’s telling, however, “disruption” became shorthand for something closer to techno-Darwinism. By imposing the rules of nature on man-made markets, the theory justified almost any act of upheaval. The companies still standing post-disruption must have survived because they were the fittest.

“Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not,” Andreessen wrote in his seminal 2011 essay on software in the Wall Street Journal. “This problem is even worse than it looks because many workers in existing industries will be stranded on the wrong side of software-based disruption and may never be able to work in their fields again.”

Even after the word lost its meaning from overuse, it still suffused our understanding of why the ground beneath our feet felt so shaky. They tried to freak us out and we believed them. Why wouldn’t we? Their products were dazzling, sci-fi magic come to life. They transformed our days, our hours, our interior life. Fear of being stranded on “the wrong side,” in turn, primed us to look to these world-beating companies to understand what comes next.

It is only now, a decade after the financial crisis, that the American public seems to appreciate that what we thought was disruption worked more like extraction: of our data, our attention, our time, our creativity, our content, our DNA, our homes, our cities, our relationships. The tech visionaries’ predictions did not usher us into the future, but rather into a future where they are kings.
They promised the open web; we got walled gardens. They promised individual liberty, then broke democracy, and now they’ve appointed themselves the right men to fix it.

But did the digital revolution have to end in an oligopoly? In our fog of resentment, three recent books argue that the current state of rising inequality was not a technological inevitability. Rather, the narrative of disruption duped us into thinking this was a new kind of capitalism. The authors argue that tech companies conquered the world not with software, but via the usual route to power: ducking regulation, squeezing workers, strangling competitors, consolidating power, raising rents, and riding the wave of an economic shift already well underway.

Job Insecurity
Louis Hyman’s new book, Temp: How American Work, American Business, and the American Dream Became Temporary, argues that many of the dystopian business practices we associate with fast-growing tech platforms (operating with a small group of well-paid engineers, surrounded by contractors) began in the 1970s, when McKinsey consultants and business gurus pushed for flexible labor over job security as a way to maximize profits. But from its earliest days, Silicon Valley said automation was the reason high-tech companies were more profitable and productive.

For instance, in 1984, along with the Macintosh computer, Apple also introduced a $20 million “Robot Factory” in Fremont, California, that the company called “the most automated factory in the Western world,” even though it was 140 human beings, “mostly women, mostly immigrants,” who actually put the Macintosh together, Hyman says. In that, it was like the rest of the fast-growing electronics industry, which relied on undocumented workers and immigrants for its factories and temps for its offices to create a “buffer zone” to keep layoffs off the front page.

Apple’s use of the word “robot” turned out to be “a very important cultural sleight of hand,” Hyman says. “This rhetorical distinction helped Silicon Valley employ workers in ways that never would have happened in postwar Detroit,” because unofficial and subcontracted workers were not protected by the same wage and safety rights.

To Hyman, an economic historian at Cornell, this explains the absence of labor unions in tech. “Managers wanted obedient employees, preferably immigrants. While technical knowledge and venture capital were lauded for the valley’s achievements, that success was made possible by a hidden underworld of flexible, poorly paid labor,” he writes.

Decades later, Uber could stay flexible because workers had few options. But observers often conflated cause and effect, blaming the gig economy, its use of non-employee contractors, and the unfeeling efficiency of smartphone apps. “Uber did not cause this precarious economy. It is the waste product of the service economy,” Hyman counters. “Uber is possible because shift work, even with a W-2, is so bad.”

The social disruption came first, and technology was built to exploit it....
...MUCH MORE