Meta to label AI-generated content on all its platforms

Shortly after being criticized for its “inconsistent fake videos policy”, Meta has started labeling AI-generated images across its platforms: Facebook, Instagram, and Threads. The labels will cover images produced by external companies such as OpenAI, Google, and Midjourney, as well as images created with Meta’s own AI image generator.

Meta announced on Tuesday that it will start labeling AI content. Nick Clegg, Meta’s President of Global Affairs, outlined the company’s approach in a blog post. He emphasized the importance of this initiative during a year marked by significant elections globally. “We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” Clegg writes. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward.”

When people create photorealistic images with the Meta AI feature, the platform applies visible markers to the images and embeds both invisible watermarks and metadata within the image files. “Using both invisible watermarking and metadata in this way improves both the robustness of these invisible markers and helps other platforms identify them,” Clegg notes. “This is an important part of the responsible approach we’re taking to building generative AI features.”

But Meta now plans to label content produced by external services as well. The company is building tools that can detect hidden markers in content, such as the “AI generated” information encoded under the C2PA and IPTC technical standards. This system will let Meta label images from companies like Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as those companies roll out metadata embedding in their image creation tools.
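To make the mechanism concrete, here is a minimal sketch of how a platform might flag an image as AI-generated from embedded metadata. It assumes the generator wrote the real IPTC `DigitalSourceType` value for AI media (`trainedAlgorithmicMedia`) into the file’s XMP packet, and it simply scans the raw bytes for that marker; Meta’s actual detection pipeline is not public, and the `looks_ai_generated` helper and sample XMP fragment are illustrative only.

```python
# Illustrative sketch only: checks raw file bytes for the IPTC
# DigitalSourceType URI that marks AI-generated ("trained algorithmic") media.
# Real detectors parse the XMP/C2PA structures properly rather than substring-matching.

AI_SOURCE_TYPE = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the bytes contain the IPTC AI-media marker."""
    return AI_SOURCE_TYPE in image_bytes

# Hypothetical XMP fragment such as an image generator might embed:
xmp = (
    b'<rdf:Description Iptc4xmpExt:DigitalSourceType='
    b'"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
)

print(looks_ai_generated(xmp))        # True
print(looks_ai_generated(b"<xmp/>"))  # False
```

As Clegg notes later in the post, this kind of marker is easy to strip, which is why Meta is also pursuing classifiers and more tamper-resistant watermarks.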

What about other AI content?

While some companies are embedding signals into their AI-generated images, Meta’s Nick Clegg admits that similar advancements in AI tools for audio and video production are lagging. This means that Meta currently can’t detect and label AI-generated audio and video content from other companies at the same level it can for images. “As the industry progresses towards integrating these capabilities, Meta is implementing a feature that allows users to disclose when they share AI-generated video or audio content. This will enable Meta to attach a label to such content,” Clegg writes.

To promote transparency and accountability, Meta has made it mandatory for users to disclose and label AI-generated and digitally manipulated content, and the company may penalize those who don’t comply. What’s more, Meta reserves the right to apply a more prominent label to any digitally created or altered content, such as images, videos, or audio, if that content poses a high risk of misleading the public on important matters. This approach aims to give users more information and context.

“This approach represents the cutting edge of what’s technically possible right now. But it’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers. So we’re pursuing a range of options. We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks. For example, Meta’s AI Research lab FAIR recently shared research on an invisible watermarking technology we’re developing called Stable Signature. This integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can’t be disabled.”

This announcement came after criticism from Meta’s independent Oversight Board regarding the company’s policies on misleadingly altered videos. Clegg concurred with the board’s suggestion that labeling such content could be more effective than removing it. He views the new labeling initiative as a step toward addressing the board’s recommendations and driving momentum for similar actions across the industry.

[via Ars Technica; image credit: Meta]