Meta Labeling AI-Generated Images on Facebook, Instagram and Threads


  • Staff

 Meta Newsroom release:

As a company that’s been at the cutting edge of AI development for more than a decade, we’ve been hugely encouraged to witness the explosion of creativity from people using our new generative AI tools, like our Meta AI image generator, which helps people create pictures from simple text prompts.

As the line between human and synthetic content blurs, people want to know where the boundary lies. Many people are coming across AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying “Imagined with AI” labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too.

That’s why we’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads. We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app. We’re taking this approach through the next year, during which a number of important elections are taking place around the world. During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward.

A New Approach to Identifying and Labeling AI-Generated Content

When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved: we put visible markers on the images, and we embed both invisible watermarks and metadata within the image files. Using invisible watermarking and metadata together both improves the robustness of these invisible markers and helps other platforms identify them. This is an important part of the responsible approach we’re taking to building generative AI features.

Since AI-generated content appears across the internet, we’ve been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI). The invisible markers we use for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices.

We’re building industry-leading tools that can identify invisible markers at scale – specifically, the “AI generated” information in the C2PA and IPTC technical standards – so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.
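As an illustration of what detecting the “AI generated” signal in embedded metadata might look like, here is a minimal sketch. It is not Meta’s actual pipeline: it simply scans a file’s bytes for an XMP packet declaring the IPTC `trainedAlgorithmicMedia` digital source type, whereas real C2PA verification parses and cryptographically validates a signed manifest. The function name and byte-scanning approach are assumptions for illustration.

```python
# Minimal sketch: look for the IPTC DigitalSourceType value that signals
# AI-generated media inside an image file's embedded XMP metadata.
# Illustrative only; real C2PA checks validate a signed manifest.

AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"  # IPTC digital source type for AI-generated media

def has_ai_metadata(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain an XMP packet declaring the
    IPTC 'trainedAlgorithmicMedia' digital source type."""
    start = image_bytes.find(b"<x:xmpmeta")
    if start == -1:
        return False  # no XMP packet embedded at all
    end = image_bytes.find(b"</x:xmpmeta>", start)
    if end == -1:
        return False  # malformed packet; treat as unsigned
    return AI_SOURCE_TYPE in image_bytes[start:end]
```

A stripped or re-encoded image would simply lack the packet, which is why metadata alone is not a complete solution.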

A phone showing a flagged image in Facebook.


While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies. While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it. We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context.

This approach represents the cutting edge of what’s technically possible right now. But it’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers. So we’re pursuing a range of options. We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks. For example, Meta’s AI Research lab FAIR recently shared research on an invisible watermarking technology we’re developing called Stable Signature. This integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can’t be disabled.
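To see why invisible watermarks can be stripped, and why baking the mark into generation (as Stable Signature does) matters, here is a toy least-significant-bit watermark. This is emphatically not Meta’s technology; it is the simplest possible post-hoc watermark, included only to show how easily such a mark is destroyed.

```python
# Toy invisible watermark (NOT Meta's Stable Signature): hide a bit string
# in the least-significant bits of pixel values, and show that simply
# overwriting those bits erases the mark.

def embed(pixels, bits):
    """Write each watermark bit into the LSB of successive pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n):
    """Read the first n LSBs back out as the watermark."""
    return [p & 1 for p in pixels[:n]]

def strip_mark(pixels):
    """Adversarial 'attack': zero every LSB, destroying the watermark."""
    return [p & ~1 for p in pixels]
```

A post-hoc mark like this survives only until someone re-quantizes the pixels; integrating watermarking into the generation process itself is an attempt to remove that single point of failure.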

This work is especially important as this is likely to become an increasingly adversarial space in the years ahead. People and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.

In the meantime, it’s important people consider several things when determining if content has been created by AI, like checking whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural.

These are early days for the spread of AI-generated content. As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content. Industry and regulators may move towards ways of authenticating content that hasn’t been created using AI as well as content that has. What we’re setting out today are the steps we think are appropriate for content shared on our platforms right now. But we’ll continue to watch and learn, and we’ll keep our approach under review as we do. We’ll keep collaborating with our industry peers. And we’ll remain in a dialogue with governments and civil society.

AI Is Both a Sword and a Shield

Our Community Standards apply to all content posted on our platforms regardless of how it is created. When it comes to harmful content, the most important thing is that we are able to catch it and take action regardless of whether or not it has been generated using AI. And the use of AI in our integrity systems is a big part of what makes it possible for us to catch it.

We’ve used AI systems to help protect our users for a number of years. For example, we use AI to help us detect and address hate speech and other content that violates our policies. This is a big part of the reason why we’ve been able to cut the prevalence of hate speech on Facebook to just 0.01-0.02% (as of Q3 2023). In other words, for every 10,000 content views, we estimate just one or two will contain hate speech.

While we use AI technology to help enforce our policies, our use of generative AI tools for this purpose has been limited. But we’re optimistic that generative AI could help us take down harmful content faster and more accurately. It could also be useful in enforcing our policies during moments of heightened risk, like elections. We’ve started testing Large Language Models (LLMs) by training them on our Community Standards to help determine whether a piece of content violates our policies. These initial tests suggest the LLMs can perform better than existing machine learning models. We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it doesn’t violate our policies. This frees up capacity for our reviewers to focus on content that’s more likely to break our rules.
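The review-queue routing described above can be sketched roughly as follows. The `classify` stub here is hypothetical (a keyword check standing in for an LLM prompted with policy text), and the threshold value is invented; the point is only the routing logic: content leaves the human queue only when the model is highly confident it is benign.

```python
# Sketch of confidence-based review-queue routing. classify() is a
# hypothetical stand-in for an LLM judging content against policy text.

def classify(text):
    """Hypothetical policy model: returns (violates, confidence in [0, 1])."""
    flagged = any(w in text.lower() for w in ("slur", "threat"))
    return flagged, 0.95 if flagged else 0.9

def route(text, benign_threshold=0.85):
    """Auto-clear content only when the model is highly confident it does
    not violate policy; everything else stays with human reviewers."""
    violates, conf = classify(text)
    if not violates and conf >= benign_threshold:
        return "auto-cleared"   # freed from the review queue
    return "human-review"
```

The asymmetry is deliberate: a confident “benign” verdict saves reviewer time, while anything uncertain or flagged still gets human eyes.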

AI-generated content is also eligible to be fact-checked by our independent fact-checking partners and we label debunked content so people have accurate information when they encounter similar content across the internet.

Meta has been a pioneer in AI development for more than a decade. We know that progress and responsibility can and must go hand in hand. Generative AI tools offer huge opportunities, and we believe that it is both possible and necessary for these technologies to be developed in a transparent and accountable way. That’s why we want to help people know when photorealistic images have been created using AI, and why we are being open about the limits of what’s possible too. We’ll continue to learn from how people use our tools in order to improve them. And we’ll continue to work collaboratively with others through forums like PAI to develop common standards and guardrails.


Meta is so full of itself. It can’t even detect soft porn in Reels, fake news, deep-fake clips/photos or fake ads. It will label 1 in 100 AI images.
Saying that you’re taking action might make you look good in the eyes of governments, but that is as far as it goes with Facebook/Meta, who, by the way, can’t even protect our kids. If you’ve kept up on the situation with Instagram, kids and a certain network of deviants, it’s all rubbish.
 

My Computers

System One System Two

  • OS
    Windows 11 Pro 23H2 Build: 22631.3374
    Computer type
    PC/Desktop
    Manufacturer/Model
    Sin-built
    CPU
    Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz (4th Gen?)
    Motherboard
    ASUS ROG Maximus VI Formula
    Memory
    32.0 GB of I forget and the box is in storage.
    Graphics Card(s)
    Gigabyte nVidia GeForce GTX 1660 Super OC 6GB
    Sound Card
    Onboard
    Monitor(s) Displays
    4 x LG 23MP75 1 x 24" LG M38H 1 x 32" LF6300 TV Monitor 1 x Wacom Pro 22" Tablet
    Screen Resolution
    All over the place
    Hard Drives
    2 x WD something Something 8TB HDD's / 2 x WD something Something 4TB HDD's / 1 x EVO 1TB SSD / 2 x QVO 1TB SSD's / 1 x EVO 250 GB SSD / 2 x QVO 1TB (External Hub) / 1 x EVO 1TB (Portable Backup Case)
    PSU
    Silverstone 1500
    Case
    NZXT Full Tower
    Cooling
    Noctua NH-D15 Elite Class Dual Tower CPU Cooler / 6 x EziDIY 120mm / 2 x Corsair 140mm somethings / 1 x 140mm Thermaltake something / 2 x 200mm Corsair.
    Keyboard
    Corsair K95 / Logitech diNovo Edge Wireless
    Mouse
    Logitech G402 / G502 / Mx Masters / MX Air Cordless
    Internet Speed
    100/40Mbps
    Browser
    All sorts
    Antivirus
    Kaspersky Premium
    Other Info
    I’m on a horse.
  • Operating System
    Windows 11 Pro 22621.2215
    Computer type
    Laptop
    Manufacturer/Model
    LENOVO Yoga 7i EVO OLED 14" Touchscreen i5 12 Core 16GB/512GB
    CPU
    Intel Core 12th Gen i5-1240P Processor (1.7 - 4.4GHz)
    Memory
    16GB LPDDR5 RAM
    Graphics card(s)
    Graphics processor is an Intel Iris Xe
    Sound Card
    optimized with Dolby Atmos®
    Screen Resolution
    QHD 2880 x 1800 OLED
    Hard Drives
    M.2 512GB
    Other Info
    …still on a horse.
It's really not that hard to spot a guy with 3 legs or a woman with 7 fingers
 

My Computer

System One

  • OS
    Dual Boot Win11 Pro/Linux Mint (Cinnamon) 21.2
    Computer type
    PC/Desktop
    Manufacturer/Model
    ASUS
    CPU
    AMD Ryzen 5 5600X 6-Core
    Motherboard
    ROG STRIX B550-F
    Memory
    32GB
    Graphics Card(s)
    Advanced Micro Devices, Inc. [AMD/ATI] Cedar [Radeon HD 5000/6000/7350/8350 Series] (prog-if 00 [VGA controller])
    Sound Card
    OnBoard
    Monitor(s) Displays
    HP 32F x2 @60Hz
    Screen Resolution
    1920x1080
    Hard Drives
    6 SATA x @2TB ea @7200RPM
    PSU
    Corsair CV750
    Case
    AeroCool Cylon RGB Mid Tower
    Cooling
    Wraith Stealth Cooler
    Keyboard
    Logitech (wireless)
    Mouse
    Logitech (wireless)
    Internet Speed
    200+ MBPS
    Browser
    Firefox
    Antivirus
    Microsoft Defender/Linux Kernel
    Other Info
    Nylavee SK600 RGB PC 5.0 HiFi Speakers/Bluetooth
    Binnune 2.4G Bluetooth/USB Headphones w/Mic
    Gotega USB ODD/HDD CD/RW/DVD ROM
It should start flagging the bots and fake profiles too
 

My Computer

System One

  • OS
    Windows 11 Pro 23H2 (RP channel)
    Computer type
    PC/Desktop
    Manufacturer/Model
    Gigabyte
    CPU
    AMD Ryzen 5900X 12-core
    Motherboard
    X570 Aorus Xtreme
    Memory
    64GB Corsair Platinum RGB 3600MHz CL16
    Graphics Card(s)
    MSI Suprim X 3080 Ti
    Sound Card
    Soundblaster AE-5 Plus
    Monitor(s) Displays
    ASUS TUF Gaming VG289Q
    Screen Resolution
    3840x2160
    Hard Drives
    Samsung 990 Pro 2TB
    Samsung 980 Pro 2TB
    Samsung 970 Evo Plus 1TB
    Samsung 870 Evo 4TB
    Samsung T7 Touch 1TB
    PSU
    Asus ROG Strix 1000W
    Case
    Corsair D750 Airflow
    Cooling
    Noctua NH-D15S
    Keyboard
    Asus ROG Flare
    Mouse
    Logitech G903 with PowerPlay charger
    Internet Speed
    500Mb/sec
    Browser
    Microsoft Edge
    Antivirus
    Windows Defender
All social media platforms have been doing it as an absolute necessity :-) but see the bottom paragraph in the following :confused: It's not just a technology issue!

-------------------:copilot:
Artificial intelligence (AI) plays a crucial role in identifying bots and fake profiles on social media platforms. Here are some methods used by AI systems to detect them:
  1. Activity Patterns: Researchers analyze patterns of activity to differentiate between human users and bots. For instance, Emilio Ferrara and colleagues at the University of Southern California trained an AI to detect bots on Twitter by examining differences in activity patterns between real and fake accounts.
  2. Content Analysis: AI algorithms examine the content posted by accounts. Bots often exhibit repetitive behavior, such as posting the same content repeatedly or sharing links excessively. By analyzing the content, AI can flag suspicious accounts.
  3. Network Analysis: AI looks at the connections between accounts. Bots tend to follow a large number of users but have fewer followers themselves. Detecting such anomalies helps identify potential fake profiles.
  4. Behavioral Anomalies: AI systems learn from historical data and identify deviations from normal behavior. For example, sudden spikes in activity, unusual posting times, or excessive likes and retweets can indicate bot-like behavior.
  5. Machine Learning Models: Researchers train machine learning models using labeled data (bot or human) to predict whether an account is genuine or fake. These models consider features like follower count, engagement rate, and posting frequency.
  6. Profile Metadata: AI examines profile information, such as incomplete bios, generic profile pictures, or inconsistent details. Bots often lack personalization in their profiles.
  7. Browser Extensions: Tools like the V7 Labs Chrome extension can detect bot profiles on platforms like LinkedIn with high accuracy. These extensions analyze various profile attributes to identify suspicious accounts.
Remember that while AI can help detect bots and fake profiles, it’s essential to balance this with ethical considerations, including freedom of expression and privacy rights. 🤖🔍
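The signals in the list above can be combined into a crude score. The sketch below is purely illustrative: the thresholds and weights are invented, and a real detector (as point 5 notes) would be a model trained on labeled bot/human data rather than hand-picked rules.

```python
# Toy bot score combining signals from the list above: follower/following
# ratio, posting frequency, and profile completeness. All thresholds are
# invented for illustration; real detectors are trained on labeled data.

def bot_score(followers, following, posts_per_day, has_bio, has_photo):
    score = 0
    if following > 0 and followers / following < 0.1:
        score += 1   # follows many accounts, followed by few
    if posts_per_day > 50:
        score += 1   # implausibly high posting rate
    if not has_bio:
        score += 1   # sparse profile metadata
    if not has_photo:
        score += 1   # generic/missing profile picture
    return score     # 0 = likely human ... 4 = likely bot

def looks_like_bot(**kw):
    return bot_score(**kw) >= 2
```

Rules like these are easy to game, which is exactly why the research cited above moves toward learned models over behavioral history.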
 

My Computer

System One

  • OS
    Windows 10 Pro
None of this stops the bots though! Re the ethical considerations: AI and bots have no rights here; they are not human.
 

It's a matter of what to allow and what not to allow, unfortunately. For example, there are illegitimate bots for running fake accounts, but there are also legitimate bots for running news and emergency services. It's complicated.
 

Everyone has an opinion on this.
Bots are bots, and should never be allowed on any social media platform without it being clearly stated.
 

LOL, there is a whole range of bots. No social media platform would work without them, and the internet at large would not work without bots either; but of course, there are always bad apples :eek1:

Edit: I did not mean to be terse, so here are bots that people and businesses use to manage their social media accounts. You can of course manage your social media account on your own, but that's a pretty naked endeavor 🤷‍♂️

 