Meta develops strategy against AI-generated fake news

Meta, formerly known as Facebook, is developing a solution to safeguard the provenance of AI-generated images. The company employs a deep-learning model to embed imperceptible watermarks that are detectable by machines.

According to a recent report from Meta, the company plans to integrate invisible watermarks into all AI-generated images from its “imagine with Meta AI experience.”

Meta AI, one of many AI chatbots generating content based on user prompts, is proactively taking steps to prevent misuse, protect users from deception, and uphold platform integrity.

Furthermore, the new watermarking feature aims to deter users from removing the watermark and passing off AI-generated content as their own work.

Meta stated: “In the coming weeks, we’ll add invisible watermarking to the imagine with Meta AI experience for increased transparency and traceability.”

Meta’s watermarking system, using a deep-learning model, creates watermarks invisible to the human eye but detectable by a corresponding model. These watermarks resist common image manipulations.
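To make the embed-and-detect idea concrete, here is a deliberately simplistic sketch in Python using least-significant-bit (LSB) embedding with a shared key. This is an illustration of the general watermark/detector pairing only, not Meta's method: Meta uses a learned deep-learning encoder and decoder, and unlike this toy LSB scheme, its watermarks are designed to survive common image manipulations. All function names here are hypothetical.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int) -> np.ndarray:
    """Overwrite each pixel's least-significant bit with a key-derived pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | pattern  # clear the LSB, then set it from the pattern

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Return the fraction of LSBs matching the key's pattern (1.0 = watermarked)."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return float(np.mean((image & 1) == pattern))

# Demo on a random 8-bit grayscale "image"
img = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(img, key=42)
print(detect_watermark(marked, key=42))  # 1.0 with the right key
print(detect_watermark(img, key=42))     # ~0.5 for an unmarked image (chance level)
```

The detector only sees statistics invisible to the human eye, which is the essence of the scheme; a robust system replaces the fragile LSB channel with features a neural decoder can recover even after cropping, compression, or recoloring.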

Meta intends to implement the watermarking service on various platforms creating AI-generated images, including Facebook, Instagram, and WhatsApp.

The ‘reimagine’ feature on Facebook Messenger and Instagram, allowing users to request AI-generated image modifications, will also include the invisible watermark feature.

While other AI image generators offer traditional watermark options, Meta AI asserts its invisible watermarks resist removal by AI-based methods.

As AI-powered tools grow in popularity, concerns are mounting about scams that use AI-generated content to deceive. Celebrities and entrepreneurs have called for stronger regulation and protections.

Scammers use AI tools to create fake content of well-known figures, spreading misinformation online. An AI-generated image of a Pentagon explosion in May briefly impacted the stock market, emphasizing the risk of misinformation from AI-created content.

Although the image caused panic, authorities promptly clarified that no explosion or incident had occurred.

That same month, Amnesty International, a human rights advocacy organization, drew criticism for using AI-generated images portraying police brutality in its campaigns against authorities.

“We have removed the images from social media posts, as we don’t want the criticism for the use of AI-generated images to distract from the core message in support of the victims and their calls for justice in Colombia,” stated Erika Guevara Rosas, director for Americas at Amnesty.
