Saturday, May 18, 2024

Can AI-generated content deceive the average person with its realism?

In recent years, there has been growing concern among experts about the potential for AI-generated content to deceive the public on a large scale. Cognitive scientist Gary Marcus, who hosts the AI-centric podcast “Humans vs Machines with Gary Marcus,” believes that this fear has already become a reality. He explains that while AI techniques will continue to improve over the coming years, they are already good enough to fool at least some people some of the time.

Computer scientist Geoffrey Hinton, who is widely regarded as the “Godfather of AI” and played a key role in developing systems like ChatGPT, recently expressed similar concerns. He fears that AI-generated photos, videos, and text will soon inundate the internet to the point where the average person “will not be able to know what is true anymore.”

Advancements in AI technology over the past few months have made it easier to create deepfakes: hyper-realistic but fabricated images, audio, and video. Marcus cites a recent Republican National Committee ad as an example of a compelling deepfake, which used AI to generate realistic visuals of a hypothetical world in which President Biden has failed to address key issues. Other viral examples include an AI-generated song that cloned the voices and styles of Drake and The Weeknd, as well as a fake photo of Pope Francis wearing a large puffer coat.

In a recent report, NewsGuard, a service that rates news and information sites, identified 49 websites that were either entirely or mostly AI-generated. These sites exhibited hallmarks of AI-generated text, including “bland language and repetitive phrases.” Marcus believes the report bears out Hinton’s fear that AI-generated content will regularly deceive the average person, and that this is already happening.
