In an alarming trend, scammers are increasingly leveraging artificial intelligence (AI) to create highly convincing deepfake videos and doctored images to deceive consumers. One prominent victim of this fraudulent practice is CNN’s Chief Medical Correspondent, Dr. Sanjay Gupta, who recently spoke out against unauthorized AI-generated advertisements that misuse his likeness to promote fake health products and cures. This article explores the details of these scams, their implications for public trust, and the broader challenge of combating AI-driven fraud, as reported by multiple sources.

The Emergence of Deepfake Scams

On August 1, 2025, CNN published reports highlighting Dr. Sanjay Gupta’s public denouncement of scammers exploiting his image and voice through AI deepfake technology. These fraudulent ads, which surfaced across social media platforms, falsely depict Dr. Gupta endorsing dubious health products, including unverified supplements and treatments for conditions like Alzheimer’s and diabetes. The ads use sophisticated AI tools to manipulate video footage and audio, creating convincing but entirely fabricated endorsements. Dr. Gupta emphasized, “That’s not me. Using my face in such fake videos is unauthorized and misleading to people.”

The misuse of Dr. Gupta’s likeness is part of a broader wave of AI-driven scams targeting public figures, particularly those with established credibility, such as journalists and medical professionals. The scam involving Dr. Gupta follows a pattern seen in earlier incidents, such as a 2022 fake CNN article falsely claiming he endorsed CBD gummies, which was debunked by FactCheck.org and CNN spokespersons. These scams often rely on cloaking techniques that show the fraudulent content only to targeted users or through certain channels, making it difficult for researchers and fact-checkers to track.

The Mechanics of the Scam

The deepfake ads featuring Dr. Gupta are designed to exploit his trusted reputation as a neurosurgeon and CNN’s medical expert. For instance, one scam, reported by jordanliles.com on July 13, 2025, promoted a product called “MemoMaster,” falsely claiming it was a breakthrough cure for Alzheimer’s and memory loss. The ads used deepfake visuals and AI-generated voices to simulate endorsements from Dr. Gupta and other CNN personalities like Anderson Cooper. These fabricated clips were paired with emotional narratives about dementia, luring vulnerable consumers into purchasing unproven products. The scam also included fake media endorsements, claiming coverage from major outlets like The New York Times, none of which were legitimate.

Scammers employ advanced AI tools to create these deceptive ads, including voice cloning and facial manipulation software like Deep-Live-Cam, which allows even non-experts to produce convincing deepfakes. These tools sync lip movements with AI-generated audio, making it difficult for unsuspecting viewers to discern the fraud. The ads often direct users to fake websites mimicking legitimate news outlets, such as CNN, to further bolster credibility. In Dr. Gupta’s case, clicking on links in these ads led to sites selling counterfeit health products, sometimes resulting in financial losses or stolen credit card information.

Impact on Victims and Public Trust

The consequences of these deepfake scams extend beyond financial loss. Consumers, particularly those seeking solutions for serious health conditions, may forgo legitimate medical treatments in favor of unproven products, posing significant health risks. For instance, a scam reported in Australia involved a fake endorsement by a professor promoting a dietary supplement, misleading diabetic patients into believing it could replace standard treatments like metformin. Similarly, the MemoMaster scam exploiting Dr. Gupta’s likeness preyed on individuals desperate for Alzheimer’s solutions, potentially delaying proper care.

For public figures like Dr. Gupta, these scams damage their reputation and erode public trust in their professional credibility. The unauthorized use of their likeness in fraudulent schemes can mislead audiences who rely on their expertise for accurate health information. Dr. Gupta has called for greater oversight by technology companies to curb the spread of such content, urging consumers to verify information from official sources before acting on advertisements.

Broader Implications and Industry Response

The rise of AI deepfake scams is a global issue, with similar incidents reported worldwide. In India, a 79-year-old woman lost nearly Rs 35 lakh to a scam using deepfake videos of NR Narayana Murthy, while in Hong Kong, scammers impersonating a company’s CFO via deepfake video stole over $25 million. Celebrities like Jennifer Lopez, Steve Harvey, and Scarlett Johansson have also been targeted, prompting calls for legislative action, such as the NO FAKES Act in the U.S., which aims to penalize creators and platforms hosting unauthorized AI-generated likenesses.

Technology companies are beginning to respond. Platforms like TikTok now require creators to label AI-generated content, and companies like Vermillio AI offer tools like TraceID to track and remove deepfakes. However, the rapid proliferation of deepfake content—estimated at a million instances per minute—poses a significant challenge. Critics argue that social media platforms, which profit heavily from advertising, have little incentive to rigorously monitor scam ads, as highlighted in a class-action lawsuit against Meta in Israel involving similar deepfake scams.

Combating Deepfake Scams

Experts suggest several strategies to combat deepfake scams. Consumers should look for telltale signs of manipulation, such as unnatural eye movements, inconsistent facial expressions, or audio-visual desynchronization. Tools are also emerging to detect deepfakes, though their effectiveness varies. On a systemic level, proposed solutions like Personhood Credentials aim to verify human identities online, but they raise privacy concerns. Dr. Gupta and other advocates emphasize the importance of public education and awareness to prevent falling victim to these scams.

Conclusion

The misuse of Dr. Sanjay Gupta’s likeness in AI deepfake scams underscores the growing threat of AI-driven fraud in the digital age. These scams not only exploit vulnerable consumers but also challenge the integrity of trusted public figures and media outlets. As AI technology advances, the need for robust legislative, technological, and educational measures becomes increasingly urgent to protect consumers and preserve trust in digital content. Dr. Gupta’s call to action serves as a reminder to remain vigilant and verify the authenticity of online advertisements, particularly those involving health products.

This article is published for www.nriglobe.com to raise awareness about the dangers of AI deepfake scams and the importance of verifying information from trusted sources.
