How to tell an AI bot wrote that scammy-looking tax email: No spelling mistakes


A bipartisan group of US senators has urged the Internal Revenue Service (IRS) to “use all the tools at its disposal to counter AI-generated tax scams,” and protect American taxpayers from money-grubbing crooks using chatbots and deepfakes to dupe marks.

In a letter this week to IRS commissioner Danny Werfel, senators Maggie Hassan (D-NH), Ron Wyden (D-OR), Chuck Grassley (R-IA), and James Lankford (R-OK) sounded the alarm on this “emerging threat to taxpayers” – specifically families, seniors, and small businesses. And they want to know, by the end of the month, how the IRS is preparing to counter these scams.

This letter may have been more helpful had it been sent a little earlier. America’s tax day was April 18, though people can file after if they get a deadline extension.

“In previous tax filing seasons, many scam messages could be identified by spelling mistakes, grammatical errors, and inaccurate references to the tax code,” the senators wrote [PDF]. “By contrast, tax scams generated by new AI tools are professionally composed and specifically tailored to trick vulnerable taxpayers.”

The senators also cite “recent reporting” by Politico in which “one cybersecurity expert demonstrated how ChatGPT can be used to generate scam messages from the IRS.” The expert is Sergey Shykevich, a threat intelligence manager at Check Point Research, who Politico says “prompted ChatGPT to produce an example of a tax scam email containing malware.”

In response, “the AI chatbot spit back a grammatically immaculate email on the Employee Retention Credit that asked the recipient for their Employer Identification Number, payroll information and a list of their employees and their social security numbers.” Such messages could be used to fool people into handing over their details to fraudsters.

However, it doesn’t appear that ChatGPT wrote any malware. We’ve asked Check Point to clarify this point, and will update when we hear back from the security firm. Check Point has previously claimed AI can write “an entire attack chain,” which would need some significant human prompts to jump the OpenAI guardrails.

While the phishing messages and tax-scam calls written by the chatbot do look like they’d be effective in tricking some taxpayers into handing over their bank account information to a spoofed website or paying a fake tax penalty over the phone, we should note that scammers have been profiting from these schemes for decades — without the help of AI assistants — according to the IRS.

“Thousands of people have lost millions of dollars and their personal information to tax scams,” the agency warned. “Scammers use the regular mail, telephone, or email to set up individuals, businesses, payroll and tax professionals.”

And while miscreants can use AI tools to help with all of the above, as security researchers have noted, these chatbots sometimes break a sentence halfway through, or just make stuff up, which could give the game away to marks. In other words: human editors will still be needed to craft or at least proofread text, malicious or otherwise — for now.

Additionally, we’ve seen reports about AI systems writing snippets of code, or providing tutorials to save developers time and cut some corners in malware development, though the consensus seems to be that purely ChatGPT-developed malware or exploits aren’t a reality yet.

When LLM-crafted swindles or computer viruses emerge, the senators want to make sure the IRS is ready to battle the bots.

In their letter, they’ve asked the agency to answer several questions about how it’s “preparing for a potentially significant increase” in AI-generated scams and how it plans to educate taxpayers and tax preparers about them. The senators also want to know whether the IRS has received any reports of AI scams to date and, if so, what the monetary losses were.

We decided to pose the question to the chatbot itself. Here’s what ChatGPT told us:

“As an AI language model, I do not have any knowledge or control over how individuals use the information I provide, including whether they use it for fraudulent activities such as tax scams. However, I would like to make it clear that such activities are illegal and unethical.”

It continued: “It is important to be aware of such scams and protect yourself by being cautious about sharing personal and financial information with unknown entities or individuals. If you suspect that you may have been a victim of a tax scam, you should report it to the appropriate authorities immediately.” ®
