Deepfakes, Ransomware, and What They Mean for Enterprises


In September 2019, thieves used voice-mimicking software to defraud a UK energy company. By using technology to impersonate a company executive’s voice, they were able to dupe his subordinate into sending them 200,000 euros. Some researchers are calling this one of the world’s first publicly reported artificial-intelligence heists.

Deepfake frauds are a growing cybersecurity concern. Experts predict that bad actors will use increasingly sophisticated AI to improve fake video and audio for their own ends, whether to fool the public or defraud enterprises.

If weaponizing data from images and audio is the future, weaponizing data through ransomware is the present. Criminals are already holding organizations, both public and private, hostage by cutting off access to their own data.

What can be done to combat these growing problems?


The age of the deepfake is upon us

The term “deepfake” generally describes video doctored with cutting-edge AI. Probably to no one’s surprise, the phenomenon began with R-rated intentions: in December 2017, a user going by the name “deepfakes” employed AI tools to graft the heads of celebrities onto nude bodies in explicit videos.

The technology has improved rapidly enough to surprise experts, and there is little reason to believe it will slow. As computers grow more powerful and training data more plentiful, the machine-learning software that helps create deepfakes will only become more effective.

AI streamlines the process, reducing the cost, time, and skill needed to manipulate photos and video, or even to create fake images from scratch. These systems teach themselves to build fake images by analyzing thousands of real ones.

“It is getting easier, and it will continue to get easier. There is no doubt about it,” said Matthias Niessner, a computer science professor at the Technical University of Munich who is working with Google on its deepfake research. “That trend will continue for years.”

So far, the tech used to create deepfakes isn’t sophisticated enough to produce seamless video or audio. But the technology is advancing.

Researchers are building tools to detect bogus videos. Like deepfake creators, deepfake detectors learn by analyzing images. These tools are evolving, but some worry that they won’t be able to keep pace.

The question is: Which side will improve more quickly? And what can be done about it?


Using AI to detect deepfakes

The fight against deepfakes and other forms of online disinformation will require nearly constant reinvention. Fake videos don’t necessarily share characteristics, making them a challenge to identify.

Engineers at Dessa, a Canadian company specializing in artificial intelligence, recently tested a deepfake detector. Trained on Google’s synthetic videos, the detector identified those fakes with almost perfect accuracy. But when the engineers tested it on random deepfake videos culled from the internet, it failed more than 40 percent of the time.

“Unlike other problems, this one is constantly changing,” said Ragavan Thurairatnam, Dessa’s founder and head of machine learning.

For detectors to improve, they require access to a constant stream of new data representing the latest deepfake techniques. The good guys are hampered in their efforts in ways that bad actors aren’t; for privacy and copyright reasons, companies cannot always share data with outsiders.
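The gap Dessa observed is a classic distribution-shift problem: a detector trained on one generation of fakes degrades when the next generation looks more like real footage. The toy sketch below (purely illustrative, not any real detector; it stands in for image features with a single synthetic number) reproduces the effect with a plain logistic-regression classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, fake_shift):
    """Synthetic stand-in for image features: 'real' samples cluster near 0,
    'fake' samples cluster near fake_shift. Smaller shift = subtler fakes."""
    real = rng.normal(0.0, 1.0, size=(n, 1))
    fake = rng.normal(fake_shift, 1.0, size=(n, 1))
    X = np.vstack([real, fake])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = fake
    return X, y

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain logistic regression fit by gradient descent (weights + bias)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(w, b, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return ((p > 0.5) == y).mean()

# Train on easily separable "generation A" fakes...
X_train, y_train = make_data(2000, fake_shift=2.0)
w, b = train_logreg(X_train, y_train)

# ...then evaluate on more A-style fakes versus subtler "generation B" fakes.
X_a, y_a = make_data(2000, fake_shift=2.0)   # same distribution as training
X_b, y_b = make_data(2000, fake_shift=0.5)   # shifted, harder distribution
acc_a = accuracy(w, b, X_a, y_a)
acc_b = accuracy(w, b, X_b, y_b)
```

The classifier keeps its training-distribution accuracy on generation A but loses substantial accuracy on generation B, without a single line of its code changing. That is why detectors need a constant stream of fresh examples of the latest fakes.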


Preventing ransomware attacks

As with deepfake technology, advances in AI and machine learning (ML) over the past several years have improved both cybersecurity defences and attackers’ ability to launch ransomware attacks.

The recent growth of such attacks across industries shows the need to understand the increased sophistication of ransomware, what procedures and systems can mitigate the risk, and what solutions are required for an adequate response.

“It’s imperative to implement multiple layers of preventative measures to mitigate potential incidents and ensure a reaction plan is in place if an attack occurs,” said Anthony Dolce, Vice President and Cyber Lead, North America Financial Lines Claims, at the insurer Chubb.

Measures include regularly backing up data files, securing those backups offline, properly educating employees, investing in state-of-the-art security and antivirus software, and purchasing a comprehensive cyber insurance policy, Dolce says.
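The first two of Dolce’s measures, backing up regularly and keeping those backups trustworthy, can be automated. Below is a minimal sketch of the copy-and-verify step using only Python’s standard library (all names are hypothetical; rotating old backups and moving copies to offline media are deliberately out of scope):

```python
import hashlib
import shutil
import time
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to confirm a backup copy matches its original."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(src_dir: Path, dest_root: Path) -> Path:
    """Copy src_dir into a timestamped folder under dest_root, then verify
    every file's hash. Raises if any copy does not match its source."""
    dest = dest_root / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(src_dir, dest)
    for f in src_dir.rglob("*"):
        if f.is_file():
            copy = dest / f.relative_to(src_dir)
            if sha256(f) != sha256(copy):
                raise RuntimeError(f"verification failed for {f}")
    return dest
```

Verification matters because some ransomware strains quietly corrupt or encrypt backups before announcing themselves; a backup you have never checked is only a hope, not a safeguard.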

There is little doubt that we are entering a new age of cyber-attacks and digital fraud. We can only hope that the technology to counter such attacks keeps pace with the ability to launch them.


Get a leader in security behind your business

Bell was featured in the October 2019 editions of Business Chief Magazine and Gigabit Magazine. The features profile Dominique Gagnon, GM of Bell’s Cybersecurity Practice, and Gary Miller, Cybersecurity Strategist, discuss the cybersecurity challenges organizations are facing, and offer a unique look into Bell’s customer-centric strategy.

As the only communications provider recognized by IDC as a Canadian leader in security four years in a row, Bell has the advanced threat detection and built-in network defences to help keep your business safe. Learn more.


Contact

Corporate Head Office
110, 220 12th Ave SW
Calgary, AB T2R 0E9

Phone: 1.403.538.4000
Email: info@axia.com

Sales Inquiries
Toll-Free: 1.866.773.3348
Canada: CANsales@axia.com