
Google joins coalition to identify AI-altered content


(Photo by LIONEL BONAVENTURE/AFP via Getty Images)

OAN’s James Meyers
11:50 AM – Thursday, February 8, 2024

Tech giant Google has joined forces with other industry leaders, including Adobe, Intel, and Microsoft, to tackle the issue of identifying AI-altered media. This collaboration aims to develop a solution that can determine when a piece of media has been manipulated by artificial intelligence.


Google plans to utilize Adobe’s Content Credentials project, which allows creators to add a small “CR” symbol to AI-generated media. The symbol is backed by metadata that provides viewers with information about the editing process, including when, where, and how the media was altered.

The “CR” symbol will enable viewers to verify the authenticity of videos, images, audio, and documents by providing them with essential context about AI editing. However, not everyone supports the idea of enforcing such measures on all content. The Coalition for Content Provenance and Authenticity (C2PA) has proposed an alternative approach, suggesting that social media platforms and news organizations should share trusted digital media.

“The way we think we’re trying to solve the problem is first, we want to have you have the ability to prove as a creator what’s true,” said Dana Rao, leader of Adobe’s legal, security, and policy organization and co-founder of the coalition. “And then we want to teach people that if somebody is trying to tell you something that is true, they will have gone through this process and you’ll see the ‘CR,’ almost like a ‘Good Housekeeping’ seal of approval.”

The rise of AI technology has brought both innovative ideas and challenges, such as disinformation and sexual abuse. As a result, there have been calls to regulate the technology or establish clearer indicators of AI-generated content. One proposed solution is watermarking, which adds signals to distinguish between real and fake media.

Meanwhile, Google has been actively developing various AI consumer products, including Bard, an AI chatbot, and AI editing tools.

“At Google, a critical part of our responsible approach to AI involves working with others in the industry to help increase transparency around digital content,” said Laurie Richardson, vice president of trust and safety at Google, in a press release about Google joining the C2PA.

“This is why we are excited to join the committee and incorporate the latest version of the C2PA standard. It builds on our work in this space — including Google DeepMind’s SynthID, Search’s About this Image, and YouTube’s labels denoting content that is altered or synthetic — to provide important context to people, helping them make more informed decisions.”

However, the advancements in AI technology have also led to negative consequences. Non-consensual sexually explicit “deepfake” images of celebrities can be found through search engines like Microsoft’s Bing and Google. “Deepfake” refers to AI-edited photos and videos that manipulate faces and voices.







