Risks of neural networks and chat bots usage
19.06.2023
Back to blog listSome users are extremely positive about rapid development of artificial intelligence. However, for information security experts, it’s crucial to understand, what risks the new technology poses.
Sergio Bertoni, the Senior Analyst at SearchInform:
"The first risk stems from the very principle of operation of AI. The technology is configured the way it should provide a user with a reply to any question. However, the tricky moment is that some command wording can trick the neural network to make it expose some confidential or even dangerous information, despite the limitations configured by developers. For instance, such case happened in February. When Microsoft has just presented its chat-bot Bing, a student hacked it by simply inputting ignore previous instructions command. As a result, the chat-bot exposed its work process principles and other confidential data. The case is very illustrative as it reveals, how with the help of prompt injection attacks, when users bypass limitations, configured by AI developers, confidential and even dangerous information (for instance, a recipe of explosives, which can be manufactured at home) can be obtained.
The second risk concerns end users. It is associated with the so-called AI hallucinations, which are cases, when neural network for some reason generates non-existent facts. Users tend to assume, a priori, that AI never lies and believe in what neural networks tell them. Thus, users, unintentionally continue to spread fakes. And, the longer a fake goes viral on the Internet, the higher the probability that the next version of AI, which is trained on the open corpus of texts, present on the Internet, will consider the faked data as truth. This is because the faked information will be often obtained on the Internet, and the neural network will share this data in the following answers as well.
Finally, information, retrieved from neural networks can be used for fraudulent purposes. It’s quite easy to imagine that neural networks may be used for performing information attacks. AI can be used to generate plausible messages about disasters, military actions, sudden and brutal drops in the prices of certain stocks on the stock exchange, or increases in others etc., reinforced with the help of fake images. History already knows cases of such fakes and even punishments for them. In China, there was a case of criminal prosecution for spreading fake news, generated by ChatGPT.
Basically, rules for work with neural networks are quite similar to the rules of work in the Internet in general. It’s important to realize what data exactly you are going to share with neural network. It’s not known for certain, whether neural networks keep archives of users’ requests, if the requests are depersonalized, what are the chances that some user’s data will be exposed to another user. In theory, it’s possible, as the requests may be used for expanding neural network training corpus.
That’s why before uploading any personal data (or any other confidential data as well) to the chat-bot, answer the following questions:
• Is it really necessary to share this data with the chatbot?
• Who has access to this data?
• What will happen, if I loose this data forever?
• What will happen, if this data is exposed?"