Censorship and “content moderation” at OpenAI

Erin Woo and Stephanie Palazzolo:

Under Willner, a former Meta Platforms content moderation executive, the trust and safety team sometimes ran into conflicts with other executives at OpenAI. Developers working on apps built on OpenAI’s technology had complained, for instance, that the team’s vetting procedures took too long. That caused executives to overrule Willner on how the app review process worked.

The shift to the new team follows a debate over how much of OpenAI’s content moderation can be handled via automation. It’s part of a broader discussion about whether OpenAI should keep its staff small and nimble or scale up as usage of its products takes off.

The debate about how to handle content moderation echoes similar conversations at Facebook and other social media platforms in recent years. Facebook, for example, was reluctant to hire large numbers of content moderators for much of its history, before it reversed course in 2017.

The looming U.S. presidential elections pose a test for OpenAI and other artificial intelligence firms. A Wired report from last week found that Microsoft’s AI chatbot Copilot, which uses OpenAI’s GPT-4, consistently responds to questions about elections with conspiracy theories and incorrect information.