
Is Meta Winning the War on Fake News? AI Content Gets a Label in May 2024

Meta Fights Disinformation! Learn how AI content labeling on Facebook & Instagram boosts trust

According to Meta, owner of Facebook and Instagram, new measures will be taken to combat disinformation, particularly in the context of the upcoming elections. These measures would include labeling content generated by artificial intelligence (AI).


AI content becomes easier to identify

Starting in May, videos, images, and sounds created with AI on Facebook and Instagram will be marked with a “Made with AI” label.

This initiative expands an existing policy that previously only covered a small portion of altered videos.

Meta also plans to place more visible and distinct labels on digitally altered media that poses a “particularly high risk of materially misleading the public about an important issue,” whether it was created with AI or with other tools.

The AI-generated image detection system

In terms of technology, Meta had already announced a plan to detect images created by other companies' AI tools through invisible markers embedded in the files, although without specifying a launch date.

This labeling method would apply to content posted to Facebook, Instagram, and Threads, while other Meta services, such as WhatsApp and Quest virtual reality headsets, would be governed by different rules.
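Industry tooling for such invisible markers typically relies on provenance metadata embedded in the file, such as the IPTC “digital source type” value `trainedAlgorithmicMedia` used by C2PA-style credentials. As a rough illustration of the idea (not Meta's actual detector, which is not public), a byte-level heuristic could scan a file's embedded XMP metadata for that marker:

```python
# Illustrative heuristic only: real detection pipelines parse XMP/C2PA
# metadata properly and also check invisible watermarks in the pixels.
# This sketch just looks for the IPTC digital-source-type URI that
# marks media as AI-generated.

AI_SOURCE_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(file_bytes: bytes) -> bool:
    """Return True if the raw bytes contain the 'trainedAlgorithmicMedia' URI."""
    return AI_SOURCE_MARKER in file_bytes

# Hypothetical usage:
# with open("photo.jpg", "rb") as f:
#     print(looks_ai_generated(f.read()))
```

A check like this is easily defeated by stripping metadata, which is why schemes that embed markers invisibly in the media itself matter for robustness.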

The impact on the American elections

The changes come just months ahead of November's U.S. presidential election, which tech researchers say could be transformed by generative AI technologies.

Change in Meta rules following the Biden affair

In February, Meta's Oversight Board called the company's existing rules on manipulated media “inconsistent” after reviewing a misleadingly edited video of Joe Biden posted to Facebook last year.

The problem with Joe Biden's video was that it had been altered to give a false impression of his actions or words.

However, under Meta's existing policy at that time, this video did not violate the rules since it was not generated by AI and did not make Biden say words that he did not say.

In other words, the video remained in a sort of gray area, not being covered by existing rules, despite its potential for misinformation.

The Oversight Board therefore called these rules “inconsistent,” suggesting that they were not comprehensive or clear enough to effectively address all types of manipulated media.

This particular case highlighted that videos can be misleading even when they do not fit the strict categories defined by Meta's rules, such as media generated by AI or edits that radically change a person's words.

The fight against disinformation

Meta specifies that it will continue to remove from its platforms any content, whether created by a human or an AI, that violates its community standards, including its rules against election interference, intimidation, harassment, and violence.

It also relies on a network of around 100 independent fact-checking organizations to identify AI-generated content that is false or misleading.

