AI Crime Threat Report Ranks Fake Audio or Video as Most Dangerous
A new report on AI-enabled crime, compiled by experts at University College London (UCL), has ranked fake audio and video content as the most concerning use of artificial intelligence (AI) in terms of its potential applications for crime or terrorism.
Funded by the university’s Dawes Centre for Future Crime and published in Crime Science, the study identified 20 ways in which AI could be used to commit crime over the next 15 years. UCL said these were ranked by the level of concern they raised, determined by the harm they could inflict, the potential for criminal profit or gain, how easy they would be to carry out, and how difficult they would be to stop.
UCL Computer Science’s Lewis Griffin, who is the senior author of the AI crime threat report, said: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”
Authors of the report believe that fake content will be hard to detect and stop, and that it could serve a range of objectives, from discrediting a public figure to extorting money by impersonating a couple’s son or daughter in a video call.
UCL said that such content could lead to widespread distrust of audio and visual evidence, which would be a harm to society in itself.
The researchers also identified five other AI-enabled crimes of high concern: using autonomous vehicles as weapons, crafting more tailored phishing messages, disrupting AI-controlled systems, harvesting online information to carry out large-scale blackmail, and generating AI-authored fake news.
According to the report, crimes of medium concern included the sale of items and services fraudulently labelled as “AI”. Crimes of low concern included the use of small robots to enter properties through access points such as cat flaps or letterboxes.
UCL Computer Science’s Matthew Caldwell said: “People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.
“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”