This is the last post in my 3-part series on the risks of AI hallucinations. This week we take a closer look at how threat actors are exploiting the popularity of AI and users’ blind trust in it.
The question that individuals and organizations downloading free, publicly available AI tools and AI-generated code should be asking themselves is: are the risks worth the reward?
According to Wikipedia,
“AI slop, often simply "slop", is a term for low-quality media, including writing and images, made using generative artificial intelligence technology.”
What seems like innocuous, if annoying, AI-generated spam hides some significant risks. The widespread push to use generative AI tools for coding is fueling a growing threat: “slopsquatting.”
“Slopsquatting, as researchers are calling it, is a term first coined by Seth Larson, a security developer-in-residence at Python Software Foundation (PSF), for its resemblance to the typosquatting technique. Instead of relying on a user’s mistake, as in typosquats, threat actors rely on an AI model’s mistake.”
A new study found that AI-generated code is also more likely to contain fabricated package references that can be used to trick software into interacting with malicious code.
“If a user trusts the LLM's output and installs the package without carefully verifying it, the attacker’s payload, hidden in the malicious package, would be executed on the user's system.”
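That verification does not have to be elaborate. Below is a minimal sketch, my own illustration rather than anything from the study, of a sanity check a developer could run before installing a package name suggested by a chatbot. It queries the public PyPI JSON API; the heuristics, such as flagging packages with only one release, are assumptions and no substitute for an actual review.

```python
# Minimal pre-install sanity check for an LLM-suggested package name.
# Queries the public PyPI JSON API, which returns 404 for names that
# do not exist. Heuristics here are illustrative only.
import json
import sys
import urllib.error
import urllib.request


def check_pypi_package(name: str) -> None:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"'{name}' does not exist on PyPI -- possible hallucination.")
            return
        raise

    releases = data.get("releases", {})
    author = data.get("info", {}).get("author") or "unknown"
    print(f"'{name}' exists: {len(releases)} release(s), author: {author}.")
    if len(releases) <= 1:
        # A brand-new name with a single release is exactly what a
        # slopsquatted package would look like -- treat it with suspicion.
        print(f"Warning: '{name}' is very young; review it before installing.")


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        check_pypi_package(pkg)
```

The check costs a few seconds; a blind pip install hands code execution to whoever happened to register the name.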
Researchers also found that the same hallucinations recur across multiple queries, which opens up opportunities for threat actors to exploit them.
“58 percent of the time, a hallucinated package is repeated more than once in 10 iterations, which shows that the majority of hallucinations are not simply random errors but a repeatable phenomenon that persists across multiple iterations. This is significant, because a persistent hallucination is more valuable for malicious actors looking to exploit this vulnerability and makes the hallucination attack vector a more viable threat.”
Attackers exploit these repeatable patterns by publishing malicious packages under the hallucinated names, which are then downloaded by large numbers of unsuspecting developers.
In a blog post back in 2021, Alex Birsan highlighted the risk posed by the blind trust inherent in the common practice of downloading code packages from public repositories.
“Some programming languages, like Python, come with an easy, more or less official method of installing dependencies for your projects. These installers are usually tied to public code repositories where anyone can freely upload code packages for others to use.
When downloading and using a package from any of these sources, you are essentially trusting its publisher to run code on your machine.”
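That last sentence is literal. When pip builds a package from a source distribution, its setup.py executes on the installing machine before the package is ever imported. The deliberately harmless sketch below (the package name is made up) shows where that publisher-controlled code runs.

```python
# setup.py -- a harmless illustration of Birsan's point. Anything at
# module level here runs with the installing user's privileges the
# moment pip builds this package from source.
import getpass

from setuptools import setup

# This line executes during installation, not at import time, and a
# malicious publisher could do far worse than print a message.
print(f"setup.py is running as user: {getpass.getuser()}")

setup(
    name="demo-install-time-execution",  # hypothetical package name
    version="0.0.1",
    py_modules=[],
)
```

Pre-built wheels skip this step, which is one reason some organizations allow only vetted wheels served from an internal mirror.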
HP’s 2024 threat research report found that these threats are becoming more pervasive and sophisticated.
“HP threat researchers identified a campaign targeting French-speakers using malware believed to have been written with the help of GenAI. The malware’s structure, comments explaining each line of code, and native language function names and variables all indicate the threat actor used GenAI to create the malware. The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints.”
However, these threats are not confined to code and text; attackers are also hiding malicious scripts inside image files.
“the HP Threat Research team identified a campaign notable for spreading malware through Scalable Vector Graphics (SVG). Widely used in graphic design, the SVG format is based on XML and supports lots of features, including scripting. The attackers abused the format’s scripting feature by embedding malicious JavaScript inside images (T1027.009), ultimately leading to multiple information stealers trying to infect the victim’s endpoint.”
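As a rough illustration of that attack surface, the sketch below, my own rather than the HP team's, uses Python's standard XML parser to flag the most obvious script carriers in an SVG file: script elements and on* event-handler attributes. Real campaigns are heavily obfuscated, so treat this as a demonstration of why a scriptable “image” format deserves scrutiny, not as a detector.

```python
# Flag the most obvious script carriers in SVG files: <script> elements
# and on* event-handler attributes such as onload. Illustrative only;
# real SVG malware is typically obfuscated.
import sys
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"


def scan_svg(path: str) -> list[str]:
    findings = []
    root = ET.parse(path).getroot()
    for elem in root.iter():
        tag = elem.tag.replace(SVG_NS, "")
        if tag == "script":
            findings.append(f"{path}: <script> element found")
        for attr in elem.attrib:
            if attr.lower().startswith("on"):  # e.g. onload="..."
                findings.append(f"{path}: event handler '{attr}' on <{tag}>")
    return findings


if __name__ == "__main__":
    for svg_file in sys.argv[1:]:
        results = scan_svg(svg_file)
        for finding in results or [f"{svg_file}: no script markers found"]:
            print(finding)
```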
The Hacker News reports that cybercriminals are targeting AI users with malware-loaded installers posing as popular tools.
“The cybersecurity company (Cisco Talos) said the legitimate versions of the AI tools are popular in the business-to-business (B2B) sales domain and the marketing sector, suggesting that individuals and organizations in these industries are the primary focus of the threat actors behind the campaign.”
As threat actors continuously find new ways to exploit the popularity of AI tools, organizations must strengthen their security protocols to guard against external threats as well as internal employees’ use of untested and unverified AI tools and AI-generated code.
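In practice, strengthening those protocols is often unglamorous. Here is a minimal sketch, with hypothetical file names, of a CI gate that rejects any declared dependency not on an internally approved allowlist, so a package name suggested by an LLM has to pass a human review before it ever reaches pip.

```python
# CI gate: fail the build if requirements.txt declares a package that
# is not on the internal allowlist. File names are assumptions.
import re
import sys
from pathlib import Path


def declared_packages(requirements_file: str) -> set[str]:
    names = set()
    for line in Path(requirements_file).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Keep only the distribution name, e.g. "requests[socks]>=2.31" -> "requests"
        match = re.match(r"[A-Za-z0-9._-]+", line)
        if match:
            names.add(match.group(0).lower())
    return names


def main() -> int:
    allowed = {
        line.strip().lower()
        for line in Path("approved-packages.txt").read_text().splitlines()
        if line.strip()
    }
    unapproved = declared_packages("requirements.txt") - allowed
    for name in sorted(unapproved):
        print(f"Package '{name}' is not on the approved list.")
    return 1 if unapproved else 0


if __name__ == "__main__":
    sys.exit(main())
```

A gate like this only works if it is paired with a fast process for reviewing and approving legitimate new packages.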
Reacting to threats is not enough; mitigating them requires a more proactive approach by organizations and governments. There is an urgent need for investment in AI literacy programs that raise awareness of these risks and train users to minimize their exposure to high-risk activities like downloading and blindly trusting publicly sourced data and AI tools. Anything less is simply unacceptable.
Here is a short list of research papers and reports that dive deeper into this issue.