Puzzled by a video that falsely showcased a national news anchor endorsing a marijuana enterprise on YouTube? Or perhaps, an advertisement using Elon Musk’s likeness to back a suspiciously promising investment opportunity?
No matter how persuasive these may appear, it’s likely you’ve encountered deepfakes – a form of digital manipulation in which artificial intelligence is used to fabricate or tamper with media content.
Jeff Horncastle, a client and communications officer at the Canadian Anti-Fraud Centre (CAFC), is sounding the alarm on the escalating trend of video and audio scams. These deceptive tactics repurpose the identities of media figures to market sham cryptocurrency platforms and various other frauds.
According to Horncastle, scam artists need only a small sample of the person they aim to imitate: a photograph or a short audio clip suffices for them to hoodwink potential victims into believing the deception is genuine. The scammers tend to capitalize on household names to gain credibility, including U.S. TV personalities Gayle King, Tucker Carlson, and Bill Maher.
In a particularly worrying instance, deepfake videos advertising scams on YouTube exploited the likeness of Omar Sachedina, CTV National News’s Chief News Anchor and Senior Editor. The doctored content misleadingly depicted Sachedina presenting a news report in which he lauds a cannabis enterprise. Although the audio was closely synchronized with the video, the clip was entirely counterfeit.
Another video fraudulently presented Elon Musk promoting stock in illegitimate crypto firms. Although the underlying technology is not new, accessible apps and websites are making convincing fake content increasingly easy to produce.
These deepfake scams, akin to robocalls and spam emails, have defrauded individuals of thousands of dollars. Though there’s a lack of specific data on how many Canadians have fallen victim to deepfake scams, the CAFC reported that, in 2022 alone, Canadians suffered losses of $531 million due to fraud.
As the technology continues to evolve and improve, the creation of deepfakes grows ever more sophisticated. Consequently, distinguishing a deepfake from an authentic video becomes increasingly difficult. These improvements not only pose a risk through financial scams but could also serve as a vehicle for spreading false information, especially during crucial political events.
This increase in disinformation is causing significant concern among experts. Tech journalist and psychotherapist Georgia Dow warns that these fabricated videos could fuel hostility towards particular groups or individuals, or manipulate audiences into believing fabricated statements from beloved celebrities.
Despite the recent announcement from Google about using its technology to embed watermark warnings in AI-generated images, skepticism persists about the extent to which such measures can prevent the spread of disinformation.
As technology progresses, the need for people to remain skeptical towards online content becomes increasingly important. Experts suggest that telltale signs of deepfaked content include mismatched audio and mouth movements, unnatural eye movements, and inconsistent lighting.
Horncastle advises vigilance in questioning why certain media figures would be endorsing products outside their usual domain. He recommends conducting thorough research before making any transaction, especially on websites advertised through potentially falsified content.