Meta will start labeling AI generated images on Instagram and Facebook

When it comes to Artificial Intelligence, it feels like we’re really crossing the Rubicon here. We’ve progressed from Alexa refusing to set a timer and Waze telling you to drive into a lake and into the world of dangerous, realistic-looking deepfakes. We already have serious issues with graphic deepfakes coming for the likes of high-school students, underage actresses, and Taylor Swift, with little recourse to stop them once they’ve spread all over social media. There is currently no federal legislation regarding non-consensual, sexually explicit deepfakes and only one bill, introduced in the House of Representatives last year, that has stalled.

Deepfakes involve more than just explicit photos, though. There’s also a very real danger of nefarious persons using AI to sow misinformation to susceptible audiences. And once any of these images hit social media, it’s nearly impossible to contain them. In an effort to combat confusion (and worse), Meta has announced that in “the coming months,” it’s going to start labeling AI-created images uploaded to Facebook, Instagram, and Threads. Until then, it’s taking the YouTube/TikTok approach of relying on users to self-identify their artificial creations.

When an AI-generated image of the pope in a puffy white coat went viral last year, internet users debated whether the pontiff was really that stylish. Fake images of former President Donald Trump being arrested caused similar confusion, even though the person who generated the images said they were made with artificial intelligence. Soon, similar images posted on Instagram, Facebook or Threads may carry a label disclosing they were the product of sophisticated AI tools, which can generate highly plausible images, videos, audio and text from simple prompts.

Meta, which owns all three platforms, said on Tuesday that it will start labeling images created with leading artificial intelligence tools in the coming months. The move comes as tech companies — both those that build AI software and those that host its outputs — are coming under growing pressure to address the potential for the cutting-edge technology to mislead people.

Those concerns are particularly acute as millions of people vote in high-profile elections around the world this year. Experts and regulators have warned that deepfakes — digitally manipulated media — could be used to exacerbate efforts to mislead, discourage and manipulate voters.

Meta and others in the industry have been working to develop invisible markers, including watermarks and metadata, indicating that a piece of content has been created by AI. Meta said it will begin using those markers to apply labels in multiple languages on its apps, so users of its platforms will know whether what they’re seeing is real or fake.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Nick Clegg, Meta’s president of global affairs, wrote in a company blog post. “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.”

The labels will apply to images from Google, Microsoft, OpenAI, Adobe, Midjourney and Shutterstock — but only once those companies start including watermarks and other technical metadata in images created by their software. Images created with Meta’s own AI tools are already labeled “Imagined with AI.”

That still leaves gaps. Other image generators, including open-source models, may never incorporate these kinds of markers. Meta said it’s working on tools to automatically detect AI content, even if that content doesn’t have watermarks or metadata. What’s more, Meta’s labels apply to only static photos. The company said it can’t yet label AI-generated audio or video this way because the industry has not started including that data in audio and video tools.

For now, Meta is relying on users to fill the void. On Tuesday, the company said that it will start requiring users to disclose when they post “a photorealistic video or realistic-sounding audio that was digitally created or altered” and that it may penalize accounts that fail to do so.

“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” Clegg said.

That expands on Meta’s requirement, introduced in November, that political ads include a disclosure if they digitally generated or altered images, video or audio. TikTok and YouTube also require users to disclose when they post realistic AI-generated content. Last fall, TikTok said it would start testing automatically applying labels to content that it detects was created or edited with AI.

[From NPR]

Umm…it’s a start? Or is Meta just doing a little CYA? The “gaps” mentioned in the article seem pretty wide to me. Ugh. I’m not gonna lie: I’m terrified of the shenanigans and malarkey that will absolutely go down to mess up election processes worldwide. More than 60 countries will hold national elections in 2024, representing just about half of the global population. The stakes are high, and as we’ve seen over the last decade, it does not take much for misinformation to run rampant and be accepted as truth. I can absolutely see a scenario in which certain bad-faith political actors use AI-generated propaganda featuring their opponents, and when it’s labeled as such, they start crying “fake news” and claim they’re being targeted unfairly. Buckle up, friends. It’s going to be another stressful ride.


6 Responses to “Meta will start labeling AI generated images on Instagram and Facebook”


  1. TN Democrat says:

A magaT family member recently (proudly) posted a deep fake of Biden looking frail/clumsy and tRUMP looking tall, trim and not ill (clearly fake). My elderly family member had no clue the images were fake and is down the orange buffoon rabbithole. Hopefully, Meta is making a real effort to combat the bots/AI fake images before the election cycle really starts. Some people believe everything they see online.

  2. Kirsten says:

    This is a good step, but there’s so much nuance here. There’s a huge difference between people who are using AI to purposefully obfuscate (such as in political or celebrity content); people who are using it for creative content; people who are using it for “harmless” content, like art-inspired avatars; and people who are using it to create what are essentially stock photos for advertising.

    Labeling photos won’t do anything without additional information about content, because all of those things are not equally nefarious, and grouping them together will only create additional confusion. If people come to think a deep fake video of Biden is on par with an AI-generated image of a barista for a coffee shop ad, they won’t care about either.

    • Chloe says:

      @Kirsten, that is a great point about nuance and grouping them together. I wonder if the general public will be able to accept that a coffee shop advertisement vs a political deep fake are on totally different levels or if the conversation will include AI-generated images broadly, which will lessen the impact of the bad stuff. (I haven’t had coffee yet so I hope this comment makes sense!)

  3. Normades says:

    I am deeply against AI “art” and also all the changes in our society, starting with self check out. It’s going to wipe out so many human jobs like graphic artists, creative directors, etc. My bro-in-law, who is a graphic artist for big companies, already said his job is dying and won’t exist in a couple of years. I hate this, I really really hate this. I was really dismayed by the post about self check out. So many people don’t want contact with actual humans and I think that’s really sad.