One of our fave tech journos, Tiernan Ray, at ZDNet, September 15:
The OPRO program automates the task of trying different prompts that get closer to solving a task.
You've just figured out your next career move: becoming a wiz at prompt engineering, the art of crafting the best input phrase to a generative artificial intelligence program such as OpenAI's ChatGPT.
Not so fast: The art of prompting may itself be taken over by automation via large language models.
In a paper posted last week by Google's DeepMind unit, researchers Chengrun Yang and team created a program called OPRO that makes large language models try different prompts until they reach one that gets closest to solving a task. It's a way to automate the kinds of trial and error that a person would do by typing.
The research paper, "Large Language Models as Optimizers," posted on the arXiv pre-print server, details an experiment in how to "optimize" anything with a language model, meaning, to make the program produce better and better answers, getting closer to some ideal state.
Yang and team decided, instead of explicitly programming that ideal state, to use large language models to state in natural language the ideal to be reached. That allows the AI program to adapt to constantly changing requests for optimization on different tasks.
As Yang and co-authors write, the language-handling flexibility of large language models "lays out a new possibility for optimization: instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions."
***
...In effect, Meta-Prompt is like a person sitting at the keyboard typing lots of new possibilities based on what they've seen work and not work before. Meta-Prompt can be hooked up to any large language model to produce the actual prompts and answers. The authors test a bunch of different large language models, including GPT-3 and GPT-4, and Google's own PaLM 2 language model....
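The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `llm_propose` and `score` are hypothetical stand-ins for a real LLM call and a real evaluation on task examples, and the meta-prompt wording is invented for the sketch.

```python
import random

def score(prompt: str) -> float:
    """Stand-in scorer (hypothetical): a real run would measure the
    prompt's accuracy on a held-out set of task examples."""
    return sum(ord(c) for c in prompt) % 100 / 100.0

def llm_propose(meta_prompt: str) -> str:
    """Stand-in for an LLM call (hypothetical): a real optimizer would
    send meta_prompt to a model such as PaLM 2 or GPT-4."""
    return "Let's think step by step. " + random.choice(
        ["Be precise.", "Show your work.", "Answer carefully."]
    )

def opro_loop(task_description: str, steps: int = 5) -> str:
    """Iteratively ask the LLM for new candidate prompts, feeding the
    scored history of past attempts back into the next meta-prompt."""
    seed = "Solve the problem."
    history = [(seed, score(seed))]
    for _ in range(steps):
        # The meta-prompt states the task in natural language and shows
        # the best prompts found so far, with their scores.
        history.sort(key=lambda pair: pair[1])
        shown = "\n".join(f"text: {p}  score: {s:.2f}"
                          for p, s in history[-3:])
        meta_prompt = (
            f"{task_description}\n"
            f"Here are previous prompts with their scores:\n{shown}\n"
            "Write a new prompt that achieves a higher score."
        )
        candidate = llm_propose(meta_prompt)
        history.append((candidate, score(candidate)))
    # Return the highest-scoring prompt seen across all iterations.
    return max(history, key=lambda pair: pair[1])[0]

best = opro_loop("Write an instruction that improves math accuracy.")
```

The key design point is that the optimizer never sees gradients or a formal objective; it only sees the natural-language task description plus the scored trajectory of earlier attempts, which is what lets the same loop be pointed at arbitrary tasks.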
....MUCH MORE