AI helps solve the privacy issues that it itself created

The remarkable successes of artificial intelligence would not have been achieved without the availability of huge amounts of data, and the spread of AI into marketing, vehicles and self-service systems has driven the collection of ever more of it. A wide range of information, some of it confidential, is gathered in huge databases. All this makes them attractive targets and increases the risk of privacy violations, reports The Conversation.

The proliferation of AI raises a number of privacy issues that people may not even be aware of. At the same time, AI can help fix those very problems, cybersecurity experts Zhiyuan Chen and Aryya Gangopadhyay argue.

The privacy risks from AI stem not only from the massive collection of personal data but also from the deep neural network models themselves, which power most modern artificial intelligence. Data is vulnerable not only to attacks on databases but also to "leaks" from the models that were trained on it.

Deep neural networks, which are sets of algorithms designed to find patterns in data, consist of many layers. Each layer contains a large number of nodes called neurons, and neurons in neighbouring layers are connected to one another. Each node, and each link between nodes, encodes certain bits of information. These bits of information are created when a training process scans large amounts of data to build the model.
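To make that structure concrete, here is a minimal sketch (not any particular production model) of a tiny two-layer network in NumPy. The weight matrices are the connections between neurons in neighbouring layers, and training adjusts them so that they end up encoding information drawn from the data.

```python
# A minimal sketch of a two-layer neural network trained on toy data.
# The weight matrices W1 and W2 are the links between neurons in
# neighbouring layers; training nudges them so they absorb information
# from the training data.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 100 samples with 4 features, binary labels.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Connections: input layer (4 neurons) -> hidden layer (8) -> output layer (1).
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(3000):
    # Forward pass: each layer's neurons combine the previous layer's outputs.
    h = sigmoid(X @ W1)          # hidden-layer activations
    p = sigmoid(h @ W2)          # output-layer prediction

    # Backward pass: adjust the connection weights to reduce prediction error.
    grad_out = (p - y) * p * (1 - p) / len(X)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    W2 -= 1.0 * grad_W2
    W1 -= 1.0 * grad_W1

# After training, W1 and W2 hold the "bits of information" extracted from the data.
print("training accuracy:", np.mean((p > 0.5) == y))
```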

For example, a face recognition algorithm can be trained on a series of selfies so that it more accurately predicts a person's gender. Such models are very accurate, but they can also store too much information, in effect remembering particular people from the training set. Attackers can then identify people in the training data by probing a deep neural network that classifies gender from a face.
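One common form of such an attack, known as membership inference, exploits the fact that an overfit model tends to be unusually confident on examples it has memorised. The sketch below only illustrates that idea; `model` and its `predict_proba` method are placeholders for any trained classifier, and the fixed threshold is a simplification (real attacks calibrate it, for example with shadow models).

```python
# A minimal sketch of the idea behind a membership-inference attack.
# `model.predict_proba` is a placeholder for any trained classifier.
import numpy as np

def confidence(model, x):
    """Highest class probability the model assigns to a single sample x."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return np.max(probs)

def likely_in_training_set(model, x, threshold=0.95):
    """Overfit models are usually far more confident on records they were
    trained on, so unusually high confidence hints that x was a training
    example. The threshold here is illustrative, not calibrated."""
    return confidence(model, x) >= threshold
```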

One of the defenses that machine learning experts have come up with is adding uncertainty to the AI model, so that attackers cannot accurately predict what the model will do. Will it scan a specific sequence of data, or will it run the program in a sandbox? Ideally, malicious software will not know and will inadvertently reveal its intentions.
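As a rough illustration of that idea, the sketch below randomizes which analysis path the defender takes on each sample; `static_scan` and `sandbox_run` are hypothetical routines standing in for whatever checks a real detector performs.

```python
# A minimal sketch of the "add uncertainty" idea: the defender randomly
# chooses how to inspect a suspicious sample, so malware cannot reliably
# predict whether it will be statically scanned or run in a sandbox.
# `static_scan` and `sandbox_run` are hypothetical analysis routines.
import random

def analyze(sample, static_scan, sandbox_run, p_sandbox=0.5):
    """Randomize the analysis path on every invocation."""
    if random.random() < p_sandbox:
        return sandbox_run(sample)    # execute in an isolated environment
    return static_scan(sample)        # inspect the code without running it
```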

Another way to improve AI privacy is to exploit the vulnerabilities of deep neural networks. No algorithm is perfect, and these models are vulnerable because they are very sensitive to small changes in the data they read.

These vulnerabilities can be used to improve privacy by adding "noise" to personal data. For example, researchers at the Max Planck Institute for Informatics in Germany have developed ways to alter Flickr images so that face recognition software can no longer identify the people in them. The changes are so subtle that they cannot be detected by the human eye.
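Such perturbations are typically computed from the model's own gradients. The sketch below shows the textbook fast-gradient-sign variant in PyTorch; it is not the Max Planck researchers' actual method, and `recognizer` stands in for any differentiable image classifier.

```python
# A generic sketch of perturbing an image so a recognizer misclassifies it
# (FGSM-style). This is an illustration of the underlying idea, not the
# Max Planck researchers' method. `recognizer` is any differentiable
# PyTorch classifier; `image` is a (C, H, W) tensor with values in [0, 1].
import torch
import torch.nn.functional as F

def add_privacy_noise(recognizer, image, true_label, epsilon=0.01):
    """Return a copy of `image` with an imperceptible perturbation that
    pushes the recognizer away from the correct identity."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(recognizer(image.unsqueeze(0)),
                           torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss, clipped to valid pixels.
    noisy = image + epsilon * image.grad.sign()
    return noisy.clamp(0.0, 1.0).detach()
```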

Another way AI can help alleviate information security issues is by keeping data confidential while models are being built. One promising approach is called federated learning; it is what Google uses in its smart keyboard, Gboard, to predict which word you will type next. Federated learning builds the final deep neural network from data stored on many different devices rather than in one central data warehouse. Its main advantage is that the raw data never leaves the local devices, so privacy is protected to some degree. It is not a perfect solution, though: the local devices do part of the computation but do not finish it, and the intermediate results they send back can reveal some data about the device and its user.
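The core of federated learning can be sketched as a single averaging round: each device trains on its own data, and only the resulting model weights travel back to the server. In the sketch below, `local_train` is a hypothetical routine; real deployments typically weight updates by dataset size and add protections such as secure aggregation.

```python
# A minimal sketch of federated averaging: each device improves the model
# on its own data and sends back only updated weights, never the raw data.
# `local_train` is a hypothetical routine returning updated weight arrays.
import numpy as np

def federated_average(global_weights, devices, local_train):
    """One round of federated learning over a list of devices."""
    updates = []
    for device_data in devices:
        # Training happens on the device; the raw data stays there.
        updates.append(local_train(global_weights, device_data))
    # The server only ever sees the averaged weight updates.
    return np.mean(updates, axis=0)
```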

Federated learning offers a glimpse of a future in which AI is more respectful of privacy, and ongoing research may find more ways for AI to become part of the solution rather than a source of privacy concerns.

