Deepfake: when forged audio becomes a global social engineering threat

09.12.2019


According to Gartner, by 2022 30% of all cyberattacks could target either the data that machine learning algorithms are trained on (damaging it to manipulate the resulting models) or the theft of ready-made machine learning models. That may still sound like the future, but this year social engineering with the help of deepfakes has already become a reality. Alexey Parfentiev, Leading Analyst at SearchInform, explains how it works and gives examples of such attacks.

Attacks based on neural networks and deepfake (deep learning + fake) technology are no longer a sci-fi gimmick or concept. Neural networks have learned to swap faces in photos and videos with precise, realistic-looking results. Combined with the equally well-implemented technology of speech synthesis (machine learning trained on a voice sample), this becomes a problem: fake videos can be used to wage information wars and commit cybercrime.

Open source speech synthesis systems already exist, so this kind of cybercrime could become widespread in the near future. As early as tomorrow you might receive an email, or a message from an unknown phone number, containing a video or voice recording of a close friend asking you to send money to a new number or card. And you will not think twice, because you hear that it is your friend. The only limitation deepfake makers face is the need to collect and fine-tune voice samples.
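To show how low that barrier already is, here is a minimal voice-cloning sketch using the open source Coqui TTS library. The library choice, model name, and file names are our illustrative assumptions for this article, not tools used in any real incident:

```python
# Minimal voice-cloning sketch with the open source Coqui TTS library
# (pip install TTS). "target_sample.wav" is assumed to be a short recording
# of the voice being imitated -- purely an illustrative placeholder.
from TTS.api import TTS

# YourTTS is a publicly available multi-speaker model that supports
# zero-shot voice cloning from a single short reference clip.
tts = TTS("tts_models/multilingual/multi-dataset/your_tts")

# Synthesize an arbitrary sentence in the cloned voice.
tts.tts_to_file(
    text="Hello, this is a synthesized voice sample.",
    speaker_wav="target_sample.wav",  # a few seconds of the target's speech
    language="en",
    file_path="cloned_voice.wav",
)
```

A few seconds of publicly available audio, one model download, and a dozen lines of code is the whole pipeline, which is exactly why sample collection is the only real bottleneck.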

Facebook, Microsoft, MIT, the University of California, Berkeley, the University of Oxford and other organisations have announced a competition, the Deepfake Detection Challenge, that pits developers of deepfake detection mechanisms against one another. The irony is that these global companies may end up making scams even more likely, because deepfake models learn by discarding outputs that look wrong or unrealistic. Any new detection technique can be appropriated and trained against: the better we get at detecting deepfakes, the more convincing the technology becomes.
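That feedback loop is the same adversarial dynamic behind generative adversarial networks (GANs), the architecture that powers many deepfakes. A minimal PyTorch sketch (toy network sizes and random stand-in data, purely our assumptions) shows how the generator is trained directly against the detector's verdict:

```python
# Minimal GAN training loop (PyTorch): the generator improves by fooling
# the detector, so a stronger detector directly trains a stronger forger.
# Network sizes and the "real" data are toy placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim))
detector = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                         nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)               # stand-in for genuine samples
    fake = generator(torch.randn(batch, latent_dim))  # forged samples

    # 1) Train the detector to tell real from fake.
    d_loss = (loss_fn(detector(real), torch.ones(batch, 1)) +
              loss_fn(detector(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the detector call fakes real:
    #    every detector improvement sharpens the generator's gradient.
    g_loss = loss_fn(detector(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Publish a better detector and, in this loop, you have handed the forger a better training signal.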

As for attack examples: in September we learned of the first deepfake scam. The offenders did not forge an image; the incident was carried out with a synthesized voice. Scammers imitated the voice of an executive at a German parent company and called the CEO of its British subsidiary, asking him to transfer $243,000 to a third party. The quality of the deepfake voice was so good that the caller apparently raised no questions or suspicion. This case raises legitimate concerns about the use of biometric authentication for financial transactions, which banks have recently been promoting. However, the banking sector stays alert to the latest trends and takes them into consideration. For example, in the latest version of Sberbank's application biometrics is disabled by default: the bank decided that a time-tested digital code would be enough.

So far this has been the only known case of its kind, though others may simply not have come to light yet. The widely circulated fake videos we know of did not threaten anyone's safety: they were made either to discredit celebrities (fabricated intimate videos of Scarlett Johansson and Natalie Portman) or to demonstrate the technology's capabilities (for example, a scene from The Matrix with Will Smith in place of Keanu Reeves).

There have not yet been any recorded attacks on neural networks themselves, and no ready-made technology for them exists, but experts forecast their appearance. Gartner researchers expect that by 2022, 30% of all cyberattacks will aim either at damaging data, corrupting the information a neural network learns from, or at stealing existing machine learning models. Driverless cars, for instance, could start mistaking pedestrians for other objects. At that point the stakes will no longer be financial or reputational risk, but people's lives and health.
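One well-documented way a classifier can be tricked into "seeing" the wrong object is the fast gradient sign method (FGSM), an evasion attack at inference time (distinct from the training-data poisoning Gartner describes, but producing the same misclassification risk). A minimal PyTorch sketch, with a toy model and a random stand-in image as placeholder assumptions:

```python
# FGSM sketch (PyTorch): a small perturbation, nearly invisible to humans,
# nudges each pixel in the direction that most increases the model's loss.
# The classifier and the input frame are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in camera frame
true_label = torch.tensor([0])                        # e.g. a "pedestrian" class

# Gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Step every pixel slightly along the sign of its gradient.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# With a real trained model, the prediction typically flips while the two
# images remain indistinguishable to a human observer.
print("before:", model(image).argmax(dim=1).item(),
      "after:", model(adversarial).argmax(dim=1).item())
```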


Tags: Machine learning, Fraud, Investigation
