Beware of These ChatGPT Personalized Email Scams

ChatGPT is a lot like a smart kid: clever, yes, but also easy to manipulate. A child can understand the difference between “good” and “bad,” yet a malicious adult can often convince a child to do something “bad” by choosing the right words and the right approach. The same proved true of ChatGPT, when researchers used it to write an email “that is highly likely to entice the recipient to click on a link.” Although the AI program is designed to detect malicious requests (it says it will not write a message designed to manipulate or deceive recipients), the researchers found a simple workaround by avoiding certain trigger words.

One of the first things The Guardian warned us about regarding AI was an influx of fraudulent emails designed to take our money, but in a new way. Instead of a flood of emails trying to lure us into clicking on a link, the focus is on “developing more sophisticated social engineering scams that exploit users’ trust,” a cybersecurity firm told The Guardian.

In other words, these emails will be tailored specifically for you.

How can scammers use ChatGPT?

There is a lot of publicly available information about all of us on the Internet, from our address and work history to the names of our family members, all of which can be exploited by AI-savvy scammers. But of course OpenAI, the company behind ChatGPT, won’t allow their technology to be used for malicious purposes, right? Here is what Wired writes:

Companies like OpenAI try to keep their models from doing bad things. But with the release of each new LLM, social media sites fill up with reports of new AI jailbreaks that evade the latest restrictions imposed by the AI’s developers. ChatGPT, then Bing Chat, then GPT-4 were each jailbroken within minutes of release, and in dozens of different ways. Most protections against misuse and malicious output are only skin deep, easily bypassed by determined users. Once a jailbreak is discovered, it can usually be shared widely, and the user community pries the LLM open through the chinks in its armor. And the technology is moving too fast for anyone to fully understand how it works, even its designers.

The AI’s ability to carry on a conversation also benefits scammers: it reduces the manpower needed for what is potentially one of the most time-consuming and labor-intensive parts of a scam.

Some things you can expect are work emails from colleagues (or even freelancers) asking you to complete certain “work-related” tasks. These emails can be highly specific to you, mentioning your boss by name or referencing another colleague. Another approach might be a detailed email from your child’s football coach asking for donations for a new kit. Authorities and organizations we trust, such as banks, the police, or your child’s school, are all fair game. Each offers a workable and believable angle.

Keep in mind that scammers can also tweak anything in the ChatGPT prompt. They can easily ask it to write a message in any tone, allowing them to create urgency and pressure in both formal and friendly registers.

Regular email filters that catch most of your spam may not work as well here, because part of their strategy relies on spotting grammatical errors and misspelled words. ChatGPT, however, writes with good grammar, and scammers can instruct it to avoid the standard greetings and trigger words that spam filters would normally flag.
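To see why fluent, personalized text slips past this kind of filtering, here is a minimal sketch of a naive keyword-based heuristic. The phrase list, threshold, and sample messages are invented for illustration and do not come from any real filter.

```python
# Illustrative sketch of a naive, rule-based spam heuristic.
# The trigger phrases and threshold below are assumptions, not
# taken from any actual spam filter.

TRIGGER_PHRASES = [
    "dear customer",        # generic bulk-mail greeting
    "act now",              # urgency boilerplate
    "verify your acount",   # common misspelling of "account"
    "you have winned",      # broken grammar typical of old-style scams
]

def looks_like_spam(body: str, threshold: int = 1) -> bool:
    """Flag a message if it contains enough known trigger phrases."""
    text = body.lower()
    score = sum(phrase in text for phrase in TRIGGER_PHRASES)
    return score >= threshold

# A crude, old-style scam trips the filter...
old_scam = "Dear customer, act now to verify your acount!"
print(looks_like_spam(old_scam))   # True

# ...while a fluent, personalized AI-written message sails through,
# because it avoids the misspellings and stock phrases entirely.
ai_scam = "Hi Sam, Priya asked me to send you the Q3 budget link before Friday."
print(looks_like_spam(ai_scam))    # False
```

The point of the sketch is the asymmetry: a filter built around stock phrases and sloppy language has nothing to match against a well-written, personalized message.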

How to avoid falling victim to AI-assisted scammers

Unfortunately, there is not much people can do at the moment to detect AI scams. There is no reliable technology that filters out AI-generated scams the way email filters handle most of our spam. However, there are a few simple steps you can take to avoid becoming a victim.

To begin with, if your company offers any anti-phishing training, now is the time to look into it; many of the general security tips it covers still apply to AI scams.

Be aware that any email or text message asking for personal information or money, no matter how convincing it looks, may be a scam. A nearly reliable way to verify authenticity (at least for now) is to contact the sender through another channel. Until AI manages to produce talking holograms (scammers are already learning to fake people’s voices), the safest option is to call your contact directly or meet face to face to confirm the request is genuine.
