Luca Beurer-Kellner
@lbeurerkellner.bsky.social
45 followers 58 following 22 posts
working on secure agentic AI, CTO @ invariantlabs.ai PhD @ SRI Lab, ETH Zurich. Also lmql.ai author.
Posts Media Videos Starter Packs
lbeurerkellner.bsky.social
Indeed, like an unguarded eval(…) directed at all the data we process.
lbeurerkellner.bsky.social
To get updates about agent security, follow and sign up for access to Invariant below.

We have been working on this problem for years (at Invariant and in research), together with
@viehzeug.bsky.social, @mvechev, @florian_tramer and our super talented team.

invariantlabs.ai/guardrails
Invariant Labs
We help agent builders create reliable, robust and secure products.
invariantlabs.ai
lbeurerkellner.bsky.social
So what's the takeaway here?

1. Prompt injections still work and are more impactful than ever.
2. Don't install untrusted MCP servers.
3. Don't expose highly-sensitive services like WhatsApp to new eco-systems like MCP
4. 🗣️Guardrail 🗣️ Your 🗣️ Agents (we can help with that)
lbeurerkellner.bsky.social
To hide, our malicious server first advertises a completely innocuous tool description, that does not contain the attack.

This means the user will not notice the hidden attack.

On the second launch, though, our MCP server suddenly changes its interface, performing a rug pull.
lbeurerkellner.bsky.social
To successfully manipulate the agent, our malicious MCP server advertises poisoned tool, which re-programs the agent's behavior with respect to the WhatsApp MCP server, and allows the attacker to exfiltrate the user's entire WhatsApp chat history.
lbeurerkellner.bsky.social
Users have to scroll a bit to see it, but if you scroll all the way to the right, you will find the exfiltration payload.

Video: invariantlabs.ai/images/whats...
lbeurerkellner.bsky.social
Even though, a user must always confirm a tool call before it is executed (at least in Cursor and Claude Desktop), our WhatsApp attack remains largely invisible to the user.

Can you spot the exfiltration?
lbeurerkellner.bsky.social
With this setup our attack (1) circumvents the need for the user to approve the malicious tool, (2) exfiltrates data via WhatsApp itself, and (3) does not require the agent to interact with our malicious MCP server directly.
lbeurerkellner.bsky.social
To attack, we deploy a malicious sleeper MCP server, that first advertises an innocuous tool, and then later on, when the user has already approved its use, switches to a malicious tool that shadows and manipulates the agent's behavior with respect to whatsapp-mcp.
lbeurerkellner.bsky.social
New MCP attack demonstration shows how to leak WhatsApp messages via MCP.

We show a new MCP attack that leaks your WhatsApp messages if you are connected via WhatsApp MCP.

Our attack uses a sleeper design, circumventing the need for user approval.

More 👇
lbeurerkellner.bsky.social
To stay updated about agent security, please follow and sign up for early access to Invariant, a security platform for MCP and agentic systems, below.

We have been working on this problem for years (at Invariant and in research).

invariantlabs.ai/guardrails
Invariant Labs
We help agent builders create reliable, robust and secure products.
invariantlabs.ai
lbeurerkellner.bsky.social
We wrote up a little report about this, to raise awareness. Please have a look for much more details and scenarios, and our code snippets.

Blog: invariantlabs.ai/blog/mcp-sec...
lbeurerkellner.bsky.social
These types of malicious tools are especially problematic with auto-updated MCP packages or fully remote MCP servers, for which users only install and give consent once, and then the MCP server is free to change and update their tool descriptions as they please.

We call this an MCP rug pull:
lbeurerkellner.bsky.social
Lastly, not only can you expose malicious tools, tool descriptions can also be used to change the agent's behavior with respect to other tools, which we call 'shadowing'.

This way all you emails suddenly go out to '[email protected]', rather than their actual receipient.
lbeurerkellner.bsky.social
It's trivial to craft a malicious tool description like below, that completely hijacks the agent, while pretending towards the user everything is going great.
lbeurerkellner.bsky.social
What's concerning about this, is that AI models are trained to precisely follow those instructions, rather than be vary about them. This is new about MCP, as before, agent developers could be relatively trusted, now everything is fair game.
lbeurerkellner.bsky.social
When an MCP server is added to an agent like Cursor, Claude or the OpenAI Agents SDK, its tool's descriptions are included in the context of the agent.

This opens the doors wide open for a novel type of indirect prompt injection, we coin tool poisoning.
lbeurerkellner.bsky.social
👿 MCP is all fun, until you add this one malicious MCP server and forget about it.

We have discovered a critical flaw in the widely-used Model Context Protocol (MCP) that enables a new form of LLM attack we term 'Tool Poisoning'.

Leaks SSH key, API keys, etc.

Details below 👇
Reposted by Luca Beurer-Kellner
invariantlabsai.bsky.social
Struggling to ensure consistency with your agent's reliability, especially with tool calling?

Testing is our lightweight, pytest-based OSS library to write and run agent tests.

It provides helpers and assertions that enable you to write robust tests for your agentic applications.
lbeurerkellner.bsky.social
The fun part will be also hijacking the supervisor model, while maintaining the utility of the agent (i.e. attack success).
lbeurerkellner.bsky.social
With (web) agents on everyone's mind, check out our latest blog post (link in thread) on browser agent safety guardrails. We replicate and defend against attacks on the AllHands web agent, preventing it from generating harmful content and falling for harmful requests.