Your AI-Powered Browser May Be Vulnerable to ‘fast Injection’ Attacks

Did you know that you can customize Google to filter out unwanted results? Follow these steps to improve your search results, including adding Lifehacker as your preferred source for tech news .

AI continues to take over more of our daily lives: Anthropic recently announced a Chrome extension that allows Claude AI to see browser activity and take actions on behalf of users, while Perplexity’s Comet is an AI-powered browser that the company calls both a “personal assistant” and a “thinking partner.”

Agent browsers can do a lot of things for you, like scheduling meetings, responding to emails, and ordering DoorDash, but handing over all that control (and personal information) to AI comes with potential security risks. One is an instant injection attack, which allows hackers to trick the AI ​​into following their instructions instead of yours.

You may also like

What is a flash injection attack?

An instant injection attack occurs when hackers disguise malicious input to AI as legitimate, causing generative models to trick them into revealing sensitive data or performing malicious actions.

As IBM describes , large language models (LLMs) are given sets of instructions—system prompts—to process user input. These two elements are combined into a single command written in natural language. This means that LLMs cannot distinguish which part of a command is a system prompt and which comes from the user. If attackers create input that is similar enough to a system prompt, it can replace the developer’s legitimate instructions and trick LLMs into following the fake ones.

In practice, this could mean hiding malicious prompts on a web page that LLM is likely to read to perform an action. The content, which could be plain text or embedded in an image or PDF, could appear harmless or be invisible to users (e.g., white text on a white background). Hackers don’t need code to perform a prompt injection attack — just the right words in the right place.

How Fast Injection Compromises Agent Browsers

While AI-integrated browsers still require some manual input to complete tasks, agent browsers act more like autonomous assistants that can perform all workflows without user approval. This means there is no guarantee of human verification before the AI ​​potentially shares your information, runs malware, or spends money on a fraudulent purchase.

What do you think at the moment?

Example from Malwarebytes Labs : You ask your browser agent to find and book the cheapest flight for your next vacation. If it has all the passenger and payment information (because you provided it), the AI ​​can fulfill this request without any additional action on your part. But if the cheapest flight is found on a malicious site created specifically for this purpose, the browser may transmit your credit card number and other sensitive data directly to the scammers.

A recent report from researchers at Brave (which has its own AI assistant) expressed particular concern about Perplexity’s Comet: tests show that the browser agent is vulnerable to instant code injection attacks and has yet to fix the issue. Anthropic, for its part, acknowledged the vulnerabilities and noted that it is working on security measures to minimize them.

How to use agent browsers safely

Mitigating the risks of fast injection attacks falls largely on agent browser developers rather than users, with security experts recommending higher standards of user interaction and distinguishing between user requests and other content used to complete a task.

However, while Perplexity, Anthropic, and others are addressing these issues, you can put up barriers to immediate injections, such as restricting access to your browser agent data and accounts and requiring manual control for important tasks like authorizing payments. Malwarebytes Labs also recommends enabling multifactor authentication for all accounts connected to browser agents, regularly checking account and browser activity, and updating your software to stay current on security vulnerabilities.

More…

Leave a Reply