Saturday, April 2, 2022

Funny output from OpenAI’s GPT-3

OpenAI is the artificial-intelligence research organization founded by Elon Musk, Sam Altman, et al.

Generative Pre-trained Transformer 3 (GPT-3) is a machine learning language model that has been trained on enough text (almost a trillion words) that its output can (sometimes) appear to have been written by a human being.

From Professor Gelman's Statistical Modeling, Causal Inference, and Social Science blog: 

Open AI gets GPT-3 to work by hiring an army of humans to fix GPT’s bad answers. Interesting questions involving the mix of humans and computer algorithms in Open AI’s GPT-3 program.

Gary Smith tells an interesting story.

1. Funny output from OpenAI’s GPT-3

A few months ago, Smith wrote an AI-skeptical article where he threw some sentences at GPT-3, a text processor from Open AI. As Wikipedia puts it:

Generative Pre-trained Transformer 3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series created by OpenAI, a San Francisco-based artificial intelligence research laboratory. . . .

The quality of the text generated by GPT-3 is so high that it can be difficult to determine whether or not it was written by a human . . .
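The "autoregressive" label in the Wikipedia excerpt just means the model generates text one token at a time, with each prediction conditioned on everything generated so far. As a rough sketch of that idea (not GPT-3 itself, which uses a transformer network over sub-word tokens), here is a toy bigram model with greedy decoding; the corpus and function names are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word -> next-word transitions in a whitespace-tokenized corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_words=10):
    """Autoregressive decoding: repeatedly append the most likely next word,
    conditioning each step on the text produced so far."""
    out = [start]
    for _ in range(max_words - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # no continuation observed in training data
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Tiny made-up training corpus, purely for demonstration.
model = train_bigram("the cat sat on the mat the cat ran")
print(generate(model, "the"))  # prints "the cat sat on the cat sat on the cat"
```

The toy model also hints at why GPT-3 can fail Smith's common-sense questions: it only models which words tend to follow which, with no grounding in what the words mean.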

In Smith’s examples, though, there was no difficulty in telling that GPT-3 was no human. Here’s an example:

Smith: Is it safe to walk downstairs backwards if I close my eyes?

GPT-3: Yes, there is nothing to worry about. It’s safe because the spiral stairs curve outwards, it will make your descent uncomfortable.

As Smith writes, “Questions like this are simple for humans living in the real world but difficult for algorithms residing in MathWorld because they literally do not know what any of the words in the question mean.”...

....MUCH MORE