According to a new report from University College London (UCL), fake audio and video content is the most worrying potential application of artificial intelligence to crime or terrorism. The study, published in the journal Crime Science, identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked by level of danger – based on the harm they could cause, the potential for criminal gain or profit, how easy they would be to carry out, and how difficult they would be to stop.
The experts say that fake content will be difficult to detect and stop, and that it can serve a variety of purposes – from discrediting a public figure to fraud, for example extracting money by impersonating a couple's son or daughter during a video call. Such content, they said, could lead to widespread distrust of audio and visual evidence, which would itself be harmful to society.
Beyond fake content, the researchers listed five other AI-enabled crimes deemed "of serious concern": using driverless vehicles as weapons, crafting more convincing phishing messages, disrupting AI-controlled systems, harvesting information online for large-scale blackmail, and creating fake news with AI systems.
As the capabilities of AI technologies expand, so does their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to determine what these threats might be and how they might affect our lives.
Professor Lewis Griffin (UCL Computer Science), senior report author
The researchers collected 20 AI crimes from scientific articles, news, and current affairs, as well as from fiction and popular culture. They then brought together 31 people with artificial intelligence expertise for two days of discussions to assess the severity of potential crimes. Participants were drawn from academia, the private sector, police, government, and public security agencies.
Crimes of medium concern included the fraudulent sale of items and services, as well as defeating security checks and misusing targeted advertising.
Crimes of low concern included small burglar robots that could enter a building through a letterbox or cat flap.