OpenAI and Meta, two giants in the field of artificial intelligence, are rolling out new standards aimed at increasing transparency in AI-generated media. While these standards show how major AI companies are attempting to self-regulate, some experts remain sceptical about their effectiveness.
OpenAI recently announced that all images produced using its ChatGPT and DALL·E 3 tools will now include C2PA metadata. The standard takes its name from the Coalition for Content Provenance and Authenticity, a group of tech and media heavyweights including Adobe, Microsoft, the BBC, and Sony. The coalition's technical standard allows users to trace a piece of media back to its origin through embedded metadata.
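In practice, the manifest travels inside the image file and can be read back out programmatically. Below is a minimal sketch assuming the open-source c2pa-python bindings published by the Content Authenticity Initiative, whose `read_file` helper returns the manifest store as JSON (the exact API differs across library versions); the filename and output directory are placeholders.

```python
import json
import c2pa  # pip install c2pa-python

# Read the C2PA manifest store embedded in an image. read_file writes
# any binary resources (thumbnails, etc.) to the given directory and
# returns the manifest store as a JSON string.
manifest_json = c2pa.read_file("image.png", "./c2pa_out")
store = json.loads(manifest_json)

# The active manifest records, among other things, which tool
# produced the image and who signed the claim.
active = store["manifests"][store["active_manifest"]]
print(active["claim_generator"])
```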
Meta, the parent company of Facebook and Instagram, has also pledged to label AI-generated content on its platforms. In a blog post, Meta's president of global affairs, Nick Clegg, outlined plans to use tools built on C2PA standards to identify such content. Images produced by Meta's in-house AI image generator will bear a watermark reading "Imagined with AI."
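The visible part of such a label is simple to apply, as the sketch below shows with Pillow. This is purely illustrative: Meta's actual pipeline also embeds invisible watermarks and metadata alongside the visible marker, and the filenames, font, and placement here are assumptions.

```python
from PIL import Image, ImageDraw

# Stamp a visible provenance label onto an image. Real systems pair
# this with invisible watermarks and embedded metadata.
img = Image.open("generated.png").convert("RGB")
draw = ImageDraw.Draw(img)
draw.text((12, img.height - 28), "Imagined with AI", fill="white")
img.save("labeled.png")
```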
According to Kevin Guo, founder and CEO of AI content moderation company Hive, tracking media origin through metadata is commendable, but widespread adoption faces significant challenges. Guo notes how easily metadata can be stripped from a file and doubts that every major AI content generator will rally behind a single standard.
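Guo's point about fragility is easy to demonstrate. Re-encoding an image with a standard library such as Pillow silently drops ancillary metadata, including EXIF, XMP, and any embedded C2PA manifest, unless the caller explicitly writes it back; the pixels are untouched. The filenames below are placeholders.

```python
from PIL import Image

# Open an image and re-save it. Because no exif= or pnginfo=
# arguments are passed, Pillow writes only the pixel data: the
# provenance metadata does not survive the round trip.
img = Image.open("generated.png")
img.save("stripped.png")
```

A screenshot of the image achieves the same effect with no code at all.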
Guo suggests an alternative approach: training AI models to detect and combat AI-generated content. He points to Hive's work in this area, stressing that detection does not depend on goodwill from AI users and companies. Even so, he stops short of claiming the approach is a complete answer, expressing reservations about how much impact it can have on its own.
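The general technique Guo describes is a supervised classifier trained on images labeled real versus generated. The sketch below is a generic version of that idea, not Hive's actual system; the architecture, labels, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune an off-the-shelf image classifier to distinguish
# real photographs (label 0) from AI-generated images (label 1).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one optimization step on a labeled batch."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Unlike metadata labels, a detector of this kind works on the pixels themselves, so stripping a file's metadata does not defeat it, though it must be continually retrained as generators improve.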