Wednesday, March 13, 2024

Media—At the NYT: "AI news that's fit to print"

From Zach Seward, March 11:

My talk at SXSW 2024

I just gave this talk at SXSW. It was my first public presentation since starting my new job at The New York Times. I used to come to SXSW all the time, but it had been several years since my last visit, so returning was a fun nostalgia trip. (The last talk I gave at SXSW was about "platishers." Yep.) Anyway, this time the topic was AI for journalism. What follows are my slides, script, and references from the talk.

Hi, I'm Zach Seward, the editorial director of AI initiatives at The New York Times, where I'm building a newsroom team charged with prototyping potential uses of machine learning for the benefit of our journalists and our readers. Before that, I co-founded and spent more than a decade helping to run the business news startup Quartz, where we built a lot of experimental news products, some with AI.

I started at The Times not even three months ago, so don't expect too much detail today about what we're working on—I don't even know yet. What I thought would be helpful, instead, is to survey the current state of AI-powered journalism, from the very bad to really good, and try to draw some lessons from those examples. I'm only speaking for myself today, but this certainly reflects how I'm thinking about the role AI could play in The Times newsroom and beyond.

We're going to start today with the bad and the ugly, because I actually think there are important lessons to draw from those mistakes. But I'll spend most of my time on really great, inspiring uses of artificial intelligence for journalism—both uses of what you might call "traditional" machine-learning models, which excel at finding patterns in vast amounts of data, and some excellent recent uses of transformer models, or generative AI, to better serve journalists and readers.


When AI journalism goes awry

CNET (Red Ventures)

So let's start with those mistakes. In January 2023, CNET, the tech news site owned by Red Ventures, was revealed to be publishing financial advice—how much to invest in CDs, how to close your bank account, etc.—using what it called "automation technology," although the bylines simply said, "Written by CNET Money Staff."

The articles were riddled with errors emblematic of LLM "hallucinations" and had to be corrected. Some of the articles also plagiarized from other sources, another pitfall of writing copy from whole cloth with generative AI. Six months later, the same articles were thoroughly updated by humans whose names and photographs now appear in the byline area under the proud heading, "Our experts."

This kind of advisory content has attracted some of the worst examples, I think, because publishers already see it as quasi-journalism that exists only to get you to buy a CD or open a new bank account through one of their affiliate links. It doesn't often have the reader's true interests at heart, and so who cares if the copy is sloppily written by a bot? This pattern repeats itself a lot.....

....MUCH MORE