Tuesday, August 15, 2023

"OpenAI Suggests Companies Use GPT-4 for Content Moderation"

No.

From PYMNTS.com, August 15:

OpenAI said companies can use its latest large language model (LLM), GPT-4, to develop artificial intelligence (AI)-assisted content moderation systems.

Using GPT-4, companies can perform content moderation with more accurate and consistent labels, a faster feedback loop for policy refinement and a reduced need for human intervention, the company said in a Tuesday (Aug. 15) blog post.

“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators,” OpenAI said in the post. “Anyone with OpenAI API [application programming interface] access can implement this approach to create their own AI-assisted moderation system.”

Content moderation is often a slow and challenging process because it demands meticulousness, sensitivity to context, and quick adaptation to new use cases, according to the post. With the help of GPT-4, the cycle of policy updates and labeling can be cut from "months to hours."

Policies can be refined with GPT-4 through an iterative process: moderators gain insight into the reasoning behind its labels and into ambiguities in the policy definitions, then clarify the policy accordingly, the post said.

GPT-4 is also able to understand and generate natural language as well as make moderation judgments based on the policy guidelines provided, per the post....

....MORE
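For readers curious what the workflow the article describes might look like in practice (policy in, label out, uncertain cases deferred to humans), here is a minimal sketch. The policy text, label set, prompt wording, and fallback behavior are illustrative assumptions on my part, not anything OpenAI has published:

```python
# Sketch of GPT-4-assisted moderation: pair a platform policy with a piece
# of content, ask the model for a label, and fall back to human review when
# the answer is unrecognized. All specifics here are hypothetical.

LABELS = ["allow", "review", "block"]

POLICY = (
    "No personal attacks, no doxxing, no spam. "
    "Borderline cases should be escalated for human review."
)

def build_moderation_prompt(policy: str, content: str, labels: list[str]) -> str:
    """Assemble a single prompt pairing the policy with the content to judge."""
    return (
        f"You are a content moderator. Apply this policy:\n{policy}\n\n"
        f"Respond with exactly one label from {labels} "
        f"followed by a one-sentence justification.\n\n"
        f"Content:\n{content}"
    )

def parse_label(reply: str, labels: list[str], default: str = "review") -> str:
    """Pull the label out of the model's reply; anything unrecognized falls
    back to human review rather than silently passing content through."""
    stripped = reply.strip()
    first_word = stripped.split()[0].strip(".,:").lower() if stripped else ""
    return first_word if first_word in labels else default

# The actual GPT-4 call (requires an OpenAI API key, so left commented out):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4",
#     temperature=0,
#     messages=[{"role": "user",
#                "content": build_moderation_prompt(POLICY, user_post, LABELS)}],
# )
# label = parse_label(resp.choices[0].message.content, LABELS)
```

Note the design choice in `parse_label`: the safe default is "review," not "allow," so model failures route back to a human rather than letting content through unmoderated.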

Not unless they open the black box. There's no reason to think Sam Altman's biases are any purer than anyone else's.