Interview
Now that criminals have realized there’s no need to train their own LLMs for nefarious purposes – it’s much cheaper and easier to steal credentials and then jailbreak existing ones – the threat of a large-scale supply chain attack using generative AI becomes more real.
No, we’re not talking about a fully AI-generated attack from the initial access to the business operations shutdown. Technologically, the criminals aren’t there yet. But one thing LLMs are getting very good at is assisting in social engineering campaigns.
And this is why Crystal Morin, former intelligence analyst for the US Air Force and cybersecurity strategist at Sysdig, anticipates seeing highly successful supply chain attacks in 2025 that originate with an LLM-generated spear phish.
When it comes to using LLMs, “threat actors are learning and understanding and gaining the lay of the land just the same as we are,” Morin told The Register. “We’re in a footrace right now. It’s machine against machine.”
In 2024, Sysdig, along with other researchers, documented an uptick in criminals using stolen cloud credentials to access LLMs. In May, the container security firm documented attackers targeting Anthropic’s Claude LLM.
While they could have exploited this access to extract LLM training data, their primary goal in this type of attack appeared to be selling access to other criminals. That left the cloud account owner footing the bill, to the tune of $46,000 per day in LLM consumption costs.
Digging deeper, the researchers discovered that the broader script used in the attack could check credentials for 10 different AI services: AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.
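For readers wondering what such a check involves: it is usually just a cheap authenticated request against each provider’s API. Below is a minimal, purely illustrative sketch – not the attackers’ actual tooling – that tests a key against OpenAI’s public list-models endpoint; the other services follow the same pattern with their own endpoints and auth headers.

```python
# Illustrative only - not the attack script Sysdig analyzed.
# A key-validity check is typically a single cheap API call that
# authenticates without consuming any model tokens.
import requests

def openai_key_is_valid(api_key: str) -> bool:
    """Return True if the key is accepted by OpenAI's list-models endpoint."""
    resp = requests.get(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    # 200 means the key works; 401/403 means it is invalid or revoked.
    return resp.status_code == 200

if __name__ == "__main__":
    # Only ever test keys you own or are authorized to audit.
    print(openai_key_is_valid("sk-your-own-test-key"))
```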
Later in the year, Sysdig spotted attackers attempting to use stolen credentials to enable LLMs in victims’ cloud environments.
The threat research team calls any attempt to illegally obtain access to a model “LLMjacking,” and in September reported that these types of attacks were “on the rise, with a 10x increase in LLM requests during the month of July and 2x the amount of unique IP addresses engaging in these attacks over the first half of 2024.”
These attacks cost victims a significant amount of money, according to Sysdig: consumption charges can run to more than $100,000 per day when the victim org is using newer models like Claude 3 Opus.
Plus, victims are forced to pay for people and technology to stop these attacks. There’s also a risk of enterprise LLMs being weaponized, leading to further potential costs.
2025: The year of LLM phishing?
In 2025, “the greatest concern is with spear phishing and social engineering,” Morin said. “There’s endless ways to get access to an LLM, and they can use this GenAI to craft unique, tailored messages to the individuals that they’re targeting based on who your employer is, your shopping preferences, the bank that you use, the region that you live in, restaurants and things like that in the area.”
In addition to helping attackers overcome language barriers, this can make messages sent via email or social media messaging apps appear even more convincing because they are expressly crafted for the individual victims.
“They’re going to send you a message from this restaurant that’s right down the street, or popular in your town, hoping that you’ll click on it,” Morin added. “So that will enable their success quite a bit. That’s how a lot of successful breaches happen. It’s just the person-on-person initial access.”
She pointed to the Change Healthcare ransomware attack – for which, we should make very clear, there is no evidence suggesting it was assisted by an LLM – as an example of one of 2024’s hugely damaging breaches.
In this case, a ransomware crew locked up Change Healthcare’s systems, disrupting thousands of pharmacies and hospitals across the US and accessing private data belonging to around 100 million people. It took the healthcare payments giant nine months to restore its clearinghouse services following the attack.
“Going back to spear phishing: imagine an employee of Change Healthcare receiving an email and clicking on a link,” Morin said. “Now the attacker has access to their credentials, or access to that environment, and the attacker can get in and move laterally.”
When and if we see this type of GenAI assist, “it will be a very small, simple portion of the attack chain with potentially massive impact,” she added.
While startups and established companies are releasing security tools that also use AI to detect and prevent email phishes, there are some really simple steps that everyone can take to avoid falling for any type of phishing attempt. “Just be careful what you click,” Morin advised.
Think before you click
Also: pay close attention to the email sender. “It doesn’t matter how good the body of the email might be. Did you look at the email address and it’s some crazy string of characters or some weird address like name@gmail but it says it’s coming from Verizon? That doesn’t make sense,” she added.
LLMs can also help criminals craft lookalike domains, swapping alphanumeric characters in legitimate, well-known company names, and attackers can use various prompts to make the sender appear more believable.
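As a concrete, entirely hypothetical illustration of the kind of sender check Morin describes, a simple heuristic can flag a domain that sits suspiciously close to a well-known brand without matching it exactly. The brand list and threshold below are invented for the example and are not from any tool mentioned in this article.

```python
# Hypothetical lookalike-domain heuristic - not a tool mentioned in this article.
# Flags sender domains that nearly match a known brand, e.g. "ver1zon.com".
from difflib import SequenceMatcher

KNOWN_BRANDS = {"verizon.com", "paypal.com", "chase.com"}  # illustrative list

def looks_like_spoof(sender_domain: str, threshold: float = 0.8) -> bool:
    sender_domain = sender_domain.lower()
    if sender_domain in KNOWN_BRANDS:
        return False  # exact match: the genuine domain
    # A near-miss similarity score suggests a crafted lookalike.
    return any(
        SequenceMatcher(None, sender_domain, brand).ratio() >= threshold
        for brand in KNOWN_BRANDS
    )

print(looks_like_spoof("ver1zon.com"))  # True: one character swapped
print(looks_like_spoof("verizon.com"))  # False: the real domain
print(looks_like_spoof("example.org"))  # False: not close to any brand
```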
Even voice-call phishing will likely become harder to distinguish because of AI used for voice cloning, Morin believes.
“I get, like, five spam calls a day from all over the country and I just ignore them because my phone tells me it’s spam,” she noted.
“But they use voice cloning now, too,” Morin continued. “And most of the time when people answer your phone, especially if you’re driving or something, you’re not actively listening, or you’re multitasking, and you might not catch that this is a voice clone – especially if it sounds like someone that’s familiar, or what they’re saying is believable, and they really do sound like they’re from your bank.”
We saw a preview of this during the run-up to the 2024 US presidential election, when AI-generated robocalls impersonating President Biden urged New Hampshire voters not to participate in the state’s presidential primary election.
Since then, the FTC has offered a $25,000 reward to solicit ideas on the best ways to combat AI voice cloning, and the FCC has declared AI-generated robocalls illegal.
Morin doesn’t expect this to be a deterrent to criminals.
“If there’s a will, there’s a way,” she opined. “If it costs money, then they’ll figure out a way to get it for free.” ®