Saturday, September 16, 2023

"What happens when AI trains itself?"

From Prospect Magazine, September 6:

Artificial intelligence will soon run out of human sentences to learn from. What are its options then?

The Boott Cotton Mills Museum in Lowell, Massachusetts shakes and rattles with the movement of water-powered looms, massive and complex machines more than a century old. In what is now a national park, a few dozen of the looms in a massive weave room have been put back into service. Visitors can get a taste of what factory life might have felt like from 1835—when the complex was built—into the early 20th century. A sign warns that the weave room is hot, loud and filled with cotton dust, and that visitors might find it overwhelming and need to leave.

Unfortunately, that option wasn’t available to thousands of workers who toiled in gruelling conditions to keep power looms running from Lowell in the US to Lancashire in the UK. From nimble-fingered children who reached into the works to re-tie broken threads, to grown men who loaded the massive bobbins of thread and unloaded the bolts of finished cloth, the automated marvel of the power loom was fed and cared for by armies of unautomated humans. The looms could produce cloth faster and cheaper than hundreds of professional weavers working in parallel, but they were powerless without hundreds of humans skilled enough to keep the machines in good order. 

We are starting to learn that modern AI systems have a lot in common with the power looms of the Industrial Revolution. These systems already generate impressively detailed images and texts, and their advocates promise they will transform, well, pretty much everything, from how we search for information, book a trip or shop for clothes to how we organise our workplaces and wider society. But much as the loom's bulk hid the children mending broken threads from view, the impressive achievements of these massive systems tend to blind us to the human labour that makes them possible.

When an image-generation program like Stable Diffusion produces an illustration from a written prompt—a blue bowl of flowers in the style of Van Gogh, say—it relies on massive sets of labelled data: images that show blue bowls, bowls of flowers and Van Gogh paintings, all carefully labelled by humans. Reporting in the Verge, Josh Dzieza interviewed some of the thousands—possibly millions—of workers who label these images from their computers in countries like Kenya and Nepal for as little as $1.20 an hour. Other annotators in the US give feedback to chatbots about which of their responses to prompts are more conversational, receiving $14 an hour to provide the "human" in a process known as "reinforcement learning from human feedback", or RLHF, the method that has allowed ChatGPT to produce such lifelike results.
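The preference step at the heart of RLHF can be sketched in a few lines. The toy PyTorch reward model below is illustrative only, assuming a simple pairwise (Bradley-Terry) loss over annotator-ranked response pairs; real systems score full text with a language-model backbone and then use the trained reward model to fine-tune the chatbot itself.

```python
# Minimal sketch of RLHF's reward-modelling step: train a scorer so that
# replies a human annotator preferred outscore the rejected alternatives.
# ToyRewardModel and the random "embeddings" are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Maps a response embedding to a scalar score (higher = preferred)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = ToyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Stand-in embeddings for pairs of candidate replies to the same prompts;
# annotators marked each `chosen` reply as more conversational than `rejected`.
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

for _ in range(100):
    # Pairwise logistic (Bradley-Terry) loss: push the preferred reply's
    # score above the rejected one's.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```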

Even if AI models can find their way through the legal thickets, another barrier may yet hinder them: the limits of human creativity.

While these annotators are on the frontlines of feeding the machine, other human contributors may not be aware they are part of the AI supply chain. ChatGPT learned to write sonnets by ingesting Shakespeare and Donne, but it learned how to answer thousands of other questions from content published on the web. A team at the Washington Post worked with the Allen Institute for AI to study the "Common Crawl", a giant dataset that includes millions of publicly accessible websites and is known to be primary source material for large language models such as Google's T5 and Facebook's LLaMA. (OpenAI, creators of ChatGPT, won't release what data was used to train their model, but may well use Common Crawl.) The sites that provided the most data to these massive AIs aren't hard to predict: Wikipedia ranks #2, and many of the top sites are respected newspapers....
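Common Crawl itself is public, so anyone can see what a given crawl captured. A minimal sketch, assuming the public CDX index API at index.commoncrawl.org (the crawl label CC-MAIN-2023-23 is just one example); this lists page captures and says nothing about how any particular model was trained:

```python
# Query the public Common Crawl CDX index for captures of Wikipedia pages.
import json
import requests

CRAWL = "CC-MAIN-2023-23"  # example crawl id; others listed at index.commoncrawl.org
API = f"https://index.commoncrawl.org/{CRAWL}-index"

resp = requests.get(
    API,
    params={"url": "en.wikipedia.org/wiki/*", "output": "json", "limit": 5},
    timeout=30,
)
resp.raise_for_status()

# The API returns one JSON record per line.
for line in resp.text.splitlines():
    record = json.loads(line)
    print(record["url"], record["status"], record["length"])
```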

....MUCH MORE

Think about that last sentence of the excerpt.

Previously: